diff --git a/README.md b/README.md
index b4e69fe1..b47acb31 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,19 @@
# API Dash ⚡️
-[](https://bit.ly/heyfoss)
+[](https://discord.com/invite/bBeSdtJ6Ue)
+
+### 🚨🚨 API Dash is participating in GSoC 2025! Check out the details below:
+
+
+
+| | Link |
+|--|--|
+| Learn about GSoC | [Link](https://summerofcode.withgoogle.com) |
+| API Dash GSoC Page | [Link](https://summerofcode.withgoogle.com/programs/2025/organizations/api-dash) |
+| Project Ideas List | [Link](https://github.com/foss42/apidash/discussions/565) |
+| Application Guide | [Link](https://github.com/foss42/apidash/discussions/564) |
+| Discord Channel | [Link](https://discord.com/invite/bBeSdtJ6Ue) |
+
### Please support this initiative by giving this project a Star ⭐️
@@ -270,11 +283,11 @@ You can contribute to API Dash in any or all of the following ways:
- [Ask a question](https://github.com/foss42/apidash/discussions)
- [Submit a bug report](https://github.com/foss42/apidash/issues/new/choose)
- [Request a new feature](https://github.com/foss42/apidash/issues/new/choose)
-- [Choose from our existing list of ideas](https://github.com/foss42/apidash/discussions/112)
+- [Choose from our existing list of ideas](https://github.com/foss42/apidash/discussions/565)
- [Suggest ways to improve the developer experience of an existing feature](https://github.com/foss42/apidash/issues/new/choose)
- Add documentation
- To add a new feature, resolve an existing issue or add a new test to the project, check out our [Contribution Guidelines](CONTRIBUTING.md).
## Need Any Help?
-In case you need any help with API Dash or are encountering any issue while running the tool, please feel free to drop by our [Discord server](https://bit.ly/heyfoss) and we can have a chat in the **#foss-apidash** channel.
+In case you need any help with API Dash or are encountering any issue while running the tool, please feel free to drop by our [Discord server](https://discord.com/invite/bBeSdtJ6Ue) and we can have a chat in the **#foss-apidash** channel.
diff --git a/doc/dev_guide/api_endpoints_for_testing.md b/doc/dev_guide/api_endpoints_for_testing.md
index 0d7cc7a1..e009dfdb 100644
--- a/doc/dev_guide/api_endpoints_for_testing.md
+++ b/doc/dev_guide/api_endpoints_for_testing.md
@@ -15,3 +15,28 @@ A List of API endpoints that can be used for testing API Dash
#### For Testing sites with Bad Certificate
- https://badssl.com/
- https://www.ssl.com/sample-valid-revoked-and-expired-ssl-tls-certificates/
+
+#### PDF
+
+- https://training.github.com/downloads/github-git-cheat-sheet.pdf
+
+#### Text
+
+- https://www.google.com/robots.txt
+
+#### JSON
+
+- https://api.apidash.dev/openapi.json
+
+#### XML
+
+- https://apidash.dev/sitemap.xml
+
+#### Video
+
+- https://download.blender.org/peach/bigbuckbunny_movies/
+- https://flutter.github.io/assets-for-api-docs/assets/videos/bee.mp4
+
+#### Audio
+
+-
diff --git a/doc/dev_guide/packaging.md b/doc/dev_guide/packaging.md
index 59fc1fcf..8b637167 100644
--- a/doc/dev_guide/packaging.md
+++ b/doc/dev_guide/packaging.md
@@ -78,7 +78,84 @@ git push
## FlatHub (Flatpak)
-TODO Instructions
+Steps to generate a `.flatpak` package of API Dash:
+
+1. Clone and build API Dash:
+
+Follow the [How to run API Dash locally](setup_run.md) guide.
+
+Stay in the root folder of the project directory.
+
+2. Install Required Packages (Debian/Ubuntu):
+
+```bash
+sudo apt install flatpak
+flatpak install -y flathub org.flatpak.Builder
+flatpak remote-add --if-not-exists --user flathub https://dl.flathub.org/repo/flathub.flatpakrepo
+```
+
+*If you are using another Linux distro, install Flatpak using your distribution's package manager, then follow the rest of the steps.*
+
+3. Build the API Dash project:
+
+```bash
+flutter build linux --release
+```
+
+4. Create flatpak manifest file:
+
+```bash
+touch apidash-flatpak.yaml
+```
+In this file, add:
+
+```yaml
+app-id: io.github.foss42.apidash
+runtime: org.freedesktop.Platform
+runtime-version: "23.08"
+sdk: org.freedesktop.Sdk
+
+command: /app/bundle/apidash
+finish-args:
+ - --share=ipc
+ - --socket=fallback-x11
+ - --socket=wayland
+ - --device=dri
+ - --socket=pulseaudio
+ - --share=network
+ - --filesystem=home
+modules:
+ - name: apidash
+ buildsystem: simple
+ build-commands:
+ - cp -a build/linux/x64/release/bundle /app/bundle
+ sources:
+ - type: dir
+ path: .
+```
+
+5. Create the .flatpak file:
+
+```bash
+flatpak run org.flatpak.Builder --force-clean --sandbox --user --install --install-deps-from=flathub --ccache --mirror-screenshots-url=https://dl.flathub.org/media/ --repo=repo builddir apidash-flatpak.yaml
+
+flatpak build-bundle repo apidash.flatpak io.github.foss42.apidash
+```
+
+The `apidash.flatpak` file should now be in the project root folder.
+
+To test it:
+
+```bash
+flatpak install --user apidash.flatpak
+
+flatpak run io.github.foss42.apidash
+```
+To uninstall it:
+
+```bash
+flatpak uninstall io.github.foss42.apidash
+```
## Homebrew
diff --git a/doc/proposals/2025/gsoc/Application_Sohier Lotfy_AI UI Designer for APIs.md b/doc/proposals/2025/gsoc/Application_Sohier Lotfy_AI UI Designer for APIs.md
new file mode 100644
index 00000000..b2379b16
--- /dev/null
+++ b/doc/proposals/2025/gsoc/Application_Sohier Lotfy_AI UI Designer for APIs.md
@@ -0,0 +1,78 @@
+### About
+
+1. Full Name: Sohier Lotfy Elsafty
+2. Email: sohier.lotfy.els@hotmail.com
+3. Phone: +971526738751
+4. Discord handle: soh0869
+5. Website: https://sohier-lotfy.webflow.io/
+6. GitHub: https://github.com/soh-123
+7. Twitter: https://x.com/Namlah__0
+8. LinkedIn: https://www.linkedin.com/in/sohierlotfy/
+9. Time zone: GMT+4
+10. Resume: https://drive.google.com/file/d/1iwzkVZ8W-oqTnLd-lvwuXo7zOdSBsYMI/view?usp=sharing
+
+### University Info
+
+1. University name: Georgia Institute of Technology (Georgia Tech)
+2. Program you are enrolled in (Degree & Major/Minor): Master's in Computer Science
+3. Year: 2024
+4. Expected graduation date: 2028
+
+### Motivation & Past Experience
+
+Short answers to the following questions (Add relevant links wherever you can):
+1. Have you worked on or contributed to a FOSS project before? Can you attach repo links or relevant PRs?
+ No.
+2. What is your one project/achievement that you are most proud of? Why?
+ An audiobook generator. Although it is a non-professional project, it was an idea that came true: I thought of every single piece of it and built it for myself, as its only user.
+3. What kind of problems or challenges motivate you the most to solve them?
+ Problems that require creative thinking, and problems where, once you dig deep and find the core issue, the solution follows easily.
+4. Will you be working on GSoC full-time? In case not, what will you be studying or working on while working on the project?
+ No, I can work part-time, as I am a mom and a part-time master's student.
+5. Do you mind regularly syncing up with the project mentors?
+ No.
+6. What interests you the most about API Dash?
+ API Dash looks interesting because it's an open-source, lightweight alternative to Postman and Insomnia, with cross-platform support (including mobile). One standout feature is its ability to preview API responses for images, PDFs, audio, and videos, which most API clients don’t support. It also supports GraphQL, SSE, and WebSockets, making it versatile beyond just REST APIs. Another useful feature is its built-in code generation for multiple languages like Dart, Python, and JavaScript, which speeds up API integration. Since it's built with Flutter, it has a native feel across platforms and is highly customizable.
+7. Can you mention some areas where the project can be improved?
+ - The UI is lightweight, but when handling many APIs, navigation and search could be improved with tagging, categorization, or hierarchical collections.
+ - It supports some languages, but expanding code snippet generation to include Swift, Kotlin, C#, and Rust would be useful.
+ - While it has a clean UI, offering more customization, such as dark mode, font adjustments, and layout options, would enhance usability.
+
+### Project Proposal Information
+
+### **1. Proposal Title**
+**AI UI Designer for APIs – Automated Visualization of API Responses**
+
+### **2. Abstract**
+APIs return structured data in formats like JSON or XML, but developers often spend time manually creating UI components to visualize and interact with this data. This project aims to develop an AI-powered agent that automatically transforms API responses into dynamic, user-friendly UI components. The system will analyze API response structures and generate appropriate UI elements, such as tables, charts, forms, and cards, with customization options. This eliminates the need for manual UI development, improving efficiency and user experience. The generated UI components will be exportable for integration into Flutter or web applications.
+
+### **3. Detailed Description**
+#### **Problem Statement**
+When integrating APIs into applications, developers often need to create custom UI elements to present API responses in an understandable way. This process is time-consuming, repetitive, and requires both front-end and back-end expertise. Automating this step will streamline API consumption and enhance usability.
+
+#### **Proposed Solution**
+This project will develop an AI-driven system that:
+- Parses API responses (JSON, XML) to extract key data structures.
+- Automatically generates corresponding UI components (tables, charts, forms, cards).
+- Provides customization options for developers to modify layouts, styles, and interactive elements (filters, pagination, sorting).
+- Enables real-time UI updates based on API response variations.
+- Allows easy export of the generated UI components for use in Flutter or web applications.
+
+#### **Technical Approach**
+- **Data Parsing & Analysis**: Implement parsers to process API responses and extract hierarchical data.
+- **AI Model for UI Generation**: Use rule-based mapping and AI models to determine the best UI components for different response structures.
+- **Dynamic UI Rendering**: Implement a rendering engine in Flutter/Dart to visualize API data interactively.
+- **Customization & Exporting**: Provide UI customization options and an export function for Flutter/web integration.
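+
+To make the rule-based mapping step concrete, here is a minimal Dart sketch (all names are illustrative, not existing API Dash code) of how a decoded JSON value could be mapped to a component type:
+
+```dart
+enum UiComponent { table, chart, form, card }
+
+// Heuristic mapping from a decoded JSON value to a suggested component.
+UiComponent suggestComponent(dynamic decoded) {
+  if (decoded is List && decoded.isNotEmpty && decoded.first is Map) {
+    final first = decoded.first as Map;
+    // Count numeric fields: mostly-numeric records suggest a chart,
+    // otherwise a list of uniform objects maps naturally to a table.
+    final numericFields = first.values.whereType<num>().length;
+    return numericFields > first.length / 2
+        ? UiComponent.chart
+        : UiComponent.table;
+  }
+  // A single flat object renders well as a card (or a form if editable).
+  return UiComponent.card;
+}
+```
+
+A real implementation would refine these rules with AI-model guidance, but a deterministic fallback like this keeps the generated UI predictable when the model is unavailable.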
+
+### **4. Weekly Timeline**
+
+| **Week** | **Tasks** |
+|----------|----------|
+| **Week 1** | Research similar solutions, finalize technical stack, and define core features. |
+| **Week 2** | Implement API response parsers for JSON & XML. Test data extraction on sample APIs. |
+| **Week 3** | Develop AI-based UI mapping logic for different response types (tables, charts, forms, etc.). |
+| **Week 4** | Build dynamic UI rendering components in Flutter. Test auto-generation of UI components. |
+| **Week 5** | Add customization options for developers (styles, filters, sorting). Refine UI/UX. |
+| **Week 6** | Implement export functionality for generated UI. Conduct performance testing. |
+| **Week 7** | Conduct extensive testing on various APIs, fix issues, and optimize performance. |
+| **Week 8** | Finalize documentation, prepare demo, and submit the completed project. |
diff --git a/doc/proposals/2025/gsoc/application_debasmibasu_aiagentforapitesting.md b/doc/proposals/2025/gsoc/application_debasmibasu_aiagentforapitesting.md
new file mode 100644
index 00000000..5c779ac2
--- /dev/null
+++ b/doc/proposals/2025/gsoc/application_debasmibasu_aiagentforapitesting.md
@@ -0,0 +1,83 @@
+# AI-Powered API Testing and Tool Integration
+
+## Personal Information
+
+- **Full Name:** Debasmi Basu
+- **Email:** [basudebasmi2006@gmail.com](mailto:basudebasmi2006@gmail.com)
+- **Phone:** +91 7439640610
+- **Discord Handle:** debasmibasu
+- **Home Page:** [Portfolio](https://debasmi.github.io/portfolio/portfolio.html)
+- **GitHub Profile:** [Debasmi](https://github.com/debasmi)
+- **Socials:**
+ - [LinkedIn](https://www.linkedin.com/in/debasmi-basu-513726288/)
+- **Time Zone:** Indian Standard Time
+- **Resume:** [Google Drive Link](https://drive.google.com/file/d/1o5JxOwneK-jv2GxnKTrzk__n7UbSKTPt/view?usp=sharing)
+
+## University Info
+
+- **University Name:** Cluster Innovation Centre, University of Delhi
+- **Program:** B.Tech. in Information Technology and Mathematical Innovations
+- **Year:** 2023 - Present
+- **Expected Graduation Date:** 2027
+
+## Motivation & Past Experience
+
+### Project of Pride: Image Encryption using Quantum Computing Algorithms
+
+This project represents my most significant achievement in the field of quantum computing and cybersecurity. I developed a **quantum image encryption algorithm** using **Qiskit**, leveraging quantum superposition and entanglement to enhance security. By implementing the **NEQR model**, I ensured **100% accuracy in encryption**, preventing any data loss. Additionally, I designed **advanced quantum circuit techniques** to reduce potential decryption vulnerabilities, pushing the boundaries of modern encryption methods.
+
+This project is my pride because it merges **cutting-edge quantum computing** with **practical data security applications**, demonstrating the **real-world potential of quantum algorithms in cryptography**. It reflects my deep technical expertise in **Qiskit, Python, and quantum circuits**, as well as my passion for exploring **future-proof encryption solutions**.
+
+### Challenges that Motivate Me
+
+I am driven by challenges that push the boundaries of **emerging technologies, security, and web development**. The intersection of **AI, cybersecurity, web applications, and quantum computing** excites me because of its potential to redefine **secure digital interactions**. My passion lies in building **robust, AI-powered automation systems** that enhance **security, efficiency, and accessibility** in real-world applications. Additionally, I enjoy working on **scalable web solutions**, ensuring that modern applications remain secure and user-friendly.
+
+### Availability for GSoC
+
+- **Will work full-time on GSoC.**
+- I will also dedicate time to exploring **LLM-based security frameworks**, improving **web API integration**, and enhancing my expertise in **AI-driven automation**.
+
+### Regular Sync-Ups
+
+- **Yes.** I am committed to maintaining **regular sync-ups** with mentors to ensure steady project progress and discuss improvements in API security and automation.
+
+### Interest in API Dash
+
+- The potential to integrate **AI-powered automation** for API testing aligns perfectly with my expertise in **web development, backend integration, and security automation**.
+- I see a great opportunity in **enhancing API security validation** using AI-driven techniques, ensuring robust **schema validation and intelligent error detection**.
+
+### Areas for Improvement
+
+- API Dash can expand **real-time collaborative testing features**, allowing teams to test and debug APIs more efficiently.
+- Enhancing **security automation** by integrating **AI-powered API monitoring** would significantly improve API Dash’s effectiveness.
+
+---
+
+## Project Proposal
+
+### **Title**
+
+AI-Powered API Testing and Tool Integration
+
+### **Abstract**
+
+API testing often requires **manual test case creation and validation**, making it inefficient. Additionally, **converting APIs into structured definitions for AI integration** is a complex task. This project aims to **automate test generation, response validation, and structured API conversion** using **LLMs and AI agents.** The system will provide **automated debugging insights** and integrate seamlessly with **pydantic-ai** and **langgraph.** A **benchmarking dataset** will also be created to evaluate various LLMs for API testing tasks.
+
+### **Weekly Timeline**
+
+| Week | Focus | Key Deliverables & Achievements |
+|---------------|--------------------------------|------------------------------------------------------------------------|
+| **Week 1-2** | Research & Architecture | Study existing API testing tools, research AI automation methods, explore web-based API testing interfaces, and define the project architecture. Expected Outcome: Clear technical roadmap for implementation. |
+| **Week 3-4** | API Specification Parsing | Develop a parser to extract API endpoints, request methods, authentication requirements, and response formats from OpenAPI specs, Postman collections, and raw API logs. Expected Outcome: Functional API parser capable of structured data extraction and visualization. |
+| **Week 5-6** | AI-Based Test Case Generation | Implement an AI model that analyzes API specifications and generates valid test cases, including edge cases and error scenarios. Expected Outcome: Automated test case generation covering standard, edge, and security cases, integrated into a web-based UI. |
+| **Week 7-8** | Response Validation & Debugging | Develop an AI-powered validation mechanism that checks API responses against expected schemas and detects inconsistencies. Implement logging and debugging tools within a web dashboard to provide insights into API failures. Expected Outcome: AI-driven validation tool with intelligent debugging support. |
+| **Week 9-10** | Structured API Conversion | Design a system that converts APIs into structured tool definitions compatible with pydantic-ai and langgraph, ensuring seamless AI agent integration. Expected Outcome: Automated conversion of API specs into structured tool definitions, with visual representation in a web-based interface. |
+| **Week 11-12**| Benchmarking & Evaluation | Create a dataset and evaluation framework to benchmark different LLMs for API testing performance. Conduct performance testing on generated test cases and validation mechanisms. Expected Outcome: A benchmarking dataset and comparative analysis of LLMs in API testing tasks, integrated into a web-based reporting system. |
+| **Final Week**| Testing & Documentation | Perform comprehensive end-to-end testing, finalize documentation, create usage guides, and submit the final project report. Expected Outcome: Fully tested, documented, and ready-to-use AI-powered API testing framework, with a web-based dashboard for interaction and reporting. |
+
+---
+
+## Conclusion
+
+This project will significantly **enhance API testing automation** by leveraging **AI-driven test generation, web-based API analysis, and structured tool conversion**. The benchmarking dataset will provide **a standard evaluation framework** for API testing LLMs, ensuring **optimal model selection for API validation**. The resulting **AI-powered API testing framework** will improve **efficiency, accuracy, security, and scalability**, making API Dash a more powerful tool for developers.
+
diff --git a/doc/proposals/2025/gsoc/application_papa_kofi_api_auth.md b/doc/proposals/2025/gsoc/application_papa_kofi_api_auth.md
new file mode 100644
index 00000000..0afa537c
--- /dev/null
+++ b/doc/proposals/2025/gsoc/application_papa_kofi_api_auth.md
@@ -0,0 +1,322 @@
+
+### About
+
+1. Full Name
+- Papa Kofi Boahen
+2. Contact info (email, phone, etc.)
+- Email: papakofiboahen@gmail.com
+- Phone: +233538966851
+3. Discord handle
+- .papakofi
+4. Home page (if any)
+- [Link to homepage](https://devportfolio-sepia-eight.vercel.app/)
+5. GitHub profile link
+- [GitHub Profile](https://github.com/StormGear)
+6. Twitter, LinkedIn, other socials
+- [X](https://x.com/kofiishere)
+- [LinkedIn](https://www.linkedin.com/in/papakofiboahen)
+- [DevPost](https://devpost.com/papakofiboahen)
+7. Time zone
+- GMT+0 Abidjan/Accra
+8. Link to a resume (PDF, publicly accessible via link and not behind any login-wall)
+- [Link to Resume](https://drive.google.com/drive/folders/1C3aLqlWrBX4TVh9E3YesPQuqb_aE5lw6?usp=sharing)
+
+### University Info
+
+1. University name
+- Academic City University - [School Website](https://acity.edu.gh/)
+2. Program you are enrolled in (Degree & Major/Minor)
+- BSc Computer Engineering Minor: Telecommunications
+3. Year
+- 2022
+4. Expected graduation date
+- June 2025
+
+### Motivation & Past Experience
+
+Short answers to the following questions (Add relevant links wherever you can):
+1. Have you worked on or contributed to a FOSS project before? Can you attach repo links or relevant PRs?
+- I have been gradually learning and introducing myself to open source software development.
+- I started out contributing to the API Dash project, which I found quite interesting because it is written in Dart.
+- Currently, I have been able to get two PRs merged. One of the PRs was a contribution to the documentation on the installation steps for macOS users. Here is a link to the merged [PR](https://github.com/foss42/apidash/pull/521)
+- The other was a contribution towards adding an Insomnia importer to the API Dash project. This contribution led to the closing of a **High priority** issue in the repo. Here is a link to the merged [PR](https://github.com/foss42/apidash/pull/525)
+##### Other ways I have contributed to the Open Source Community
+- As the Google for Developers on Campus Club Lead, I held two events introducing students to open source software. I led the first event myself, teaching students basic Git and GitHub concepts. For the second, an “Introduction to Open Source” event, I partnered with other universities within my country, and we invited professional software engineers and open source maintainers to speak on open source development.
+- I have given technical talks within the developer community. Notable amongst these are my talks at Google DevFest Accra and Google I/O Extended events. At DevFest, I gave a talk on how to automate mobile app deployments to app stores. Here is a link to the [YouTube Live](https://www.youtube.com/watch?v=DDFoWo0YO-k&t=6332s) event.
+
+2. What is your one project/achievement that you are most proud of? Why?
+- In my country alone, 12,710 tons of municipal solid waste is generated daily; however, only about 10% of this is collected and disposed of properly. We have set out to build a digital platform that enables users to order and schedule trash takeouts seamlessly. We are still in the prototyping phase and are working hard to build the product and acquire investor support. My role includes developing the mobile applications and occasionally working on the backend. I also have business and entrepreneurship roles. We have participated in entrepreneurship training and have been able to secure some funding to kickstart the project. The app is currently released on the Play Store and App Store and will be launched to end users soon. This has the potential to address a pressing challenge within my country and beyond.
+
+3. What kind of problems or challenges motivate you the most to solve them?
+- Socio-Economic challenges motivate me to address some of them within my capacity. Coming from a third-world country, there are several socio-economic challenges that need to be addressed. I strongly believe that technology is instrumental in addressing some of our most pressing needs. It is also important to equip engineers in order to enhance their ability to tackle these issues.
+4. Will you be working on GSoC full-time? In case not, what will you be studying or working on while working on the project?
+- I can and would make arrangements to work on GSoC full-time. Continuous learning is imperative for any developer. I will be enhancing my skillset during GSoC as well as learning more about distributed systems and artificial intelligence.
+5. Do you mind regularly syncing up with the project mentors?
+- I will be available to sync up regularly with the project mentors for needed guidance and to speak on how the project is progressing.
+
+6. What interests you the most about API Dash?
+- API Dash is a promising API client, and being built with Flutter, it really shows what's possible with Flutter. It is promising for both the Flutter and the wider developer community. The promise of incorporating AI in future releases is also very exciting, and the responsiveness of the API Dash maintainers is really good.
+
+7. Can you mention some areas where the project can be improved?
+- API Dash is on a good course. I have seen the **Roadmap** as well as other interesting ideas in the pipeline for GSoC. I would like to add that API Dash could have a web client and support live collaboration amongst teams.
+
+### Project Proposal Information
+
+1. Proposal Title
+- Adding Support for API Authentication Methods and API Dash Feature Improvements
+2. Abstract: A brief summary about the problem that you will be tackling & how.
+- Authentication Methods Overview
+
+1. Basic Authentication
+
+- Simple username/password transmission
+- Credentials encoded in Base64
+- Sent via the HTTP `Authorization` header
+
+2. API Key Authentication
+
+- Single token for identifying the application/user
+- Can be sent via request headers or query parameters
+
+3. Bearer Token Authentication
+
+- Uses access tokens for authorization, typically JWT-based
+- Stateless authentication mechanism
+
+4. JWT Bearer Token
+
+- Self-contained authentication token containing encoded user claims
+- Cryptographically signed, which also supports token expiration and validation
+
+5. Digest Authentication
+
+- Challenge-response authentication protocol that prevents sending plain-text credentials
+
+6. OAuth 1.0
+
+- A complex flow requiring multiple request-response cycles, providing secure delegated access
+
+7. OAuth 2.0
+
+- Supports various grant types, allowing third-party service authorization such as signing in with Google, Apple, or Facebook
+
+Along with these, I intend to work on some of the API Dash feature improvements, such as:
+- Adding support for more content types in request
+- Importing from/Exporting to OpenAPI/Swagger specification
+- JSON body syntax highlighting, beautification, validation
+
+3. Detailed Description
+
+### Implementation Phases
+1. Initial Assessment
+
+- Read technical documentation to gain insights on implementing these authentication strategies securely
+
+2. Security Hardening
+
+- Implement HTTPS
+- Use secure token transmission
+- Add additional encryption layers
+- Implement robust error handling
+
+3. Validation and Testing
+
+- Unit testing authentication flows
+- Simulating various authentication scenarios
+
+- Implementing these would require at least some of the following packages:
+  - *http:* For making HTTP requests
+  - *dart_jsonwebtoken:* JWT token handling
+  - *flutter_secure_storage:* Secure token storage
+  - *oauth1:* OAuth 1.0 implementation
+  - *crypto:* Cryptographic operations
+
+1. Basic Authentication
+- For this implementation, I would essentially create a class with a simple method
+```dart
+Future makeBasicAuthRequest(
+ String url,
+ String username,
+ String password
+ )
+```
+- This method accepts the URL for the request as well as the username and password needed for the request.
+- The username:password combination is then encoded to Base64.
+- This encoding is then attached and sent in the `Authorization` HTTP header like so: `Basic $credentials`, with `$credentials` representing the Base64-encoded value.
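+
+A minimal runnable sketch of this method, assuming the *http* package (the exact class layout is illustrative):
+
+```dart
+import 'dart:convert';
+
+import 'package:http/http.dart' as http;
+
+Future<http.Response> makeBasicAuthRequest(
+  String url,
+  String username,
+  String password,
+) {
+  // Encode "username:password" to Base64 for the Authorization header
+  final credentials = base64Encode(utf8.encode('$username:$password'));
+  return http.get(
+    Uri.parse(url),
+    headers: {'Authorization': 'Basic $credentials'},
+  );
+}
+```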
+
+2. API Key Authentication
+
+```dart
+ Future makeRequestWithHeaderApiKey(
+ String url,
+ String apiKey
+ )
+```
+- The apiKey is supplied via the `X-API-Key` HTTP header.
+
+```dart
+ Future makeRequestWithQueryApiKey(
+ String url,
+ String apiKey
+ )
+```
+- In this method, the API key is supplied via a query parameter like so:
+`'$url?api_key=$apiKey'`.
+- This offers an alternative approach for including an API key in a request.
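+
+A combined sketch of both approaches (again assuming the *http* package; method names follow the proposal):
+
+```dart
+import 'package:http/http.dart' as http;
+
+// API key supplied via the X-API-Key request header
+Future<http.Response> makeRequestWithHeaderApiKey(String url, String apiKey) {
+  return http.get(Uri.parse(url), headers: {'X-API-Key': apiKey});
+}
+
+// API key supplied via a query parameter
+Future<http.Response> makeRequestWithQueryApiKey(String url, String apiKey) {
+  return http.get(Uri.parse('$url?api_key=$apiKey'));
+}
+```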
+
+3. Bearer Token Authentication
+```dart
+ Future makeRequestWithBearerToken(
+ String url,
+ String token
+ )
+```
+- The token is supplied via the `Authorization` HTTP header like so: `Bearer $token`
+4. JWT Bearer
+- A class with methods for generating JWT and verifying JWT
+```dart
+ String generateJwt(String userId)
+```
+- Using the userId, or some alternative parameter, the [dart_jsonwebtoken](https://pub.dev/packages/dart_jsonwebtoken) package can be used to generate and sign a JWT (a generation sketch follows the verification example below)
+- After generation and signing, the JWT has to be verified. The docs for `dart_jsonwebtoken` provide a good overview of how that could be achieved:
+
+```dart
+try {
+ // Verify a token (SecretKey for HMAC & PublicKey for all the others)
+ final jwt = JWT.verify(token, SecretKey('secret passphrase'));
+
+ print('Payload: ${jwt.payload}');
+} on JWTExpiredException {
+ print('jwt expired');
+} on JWTException catch (ex) {
+ print(ex.message); // ex: invalid signature
+}
+```
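+
+And for the generation side, a minimal sketch following the same package's README (the secret and claims are placeholders):
+
+```dart
+import 'package:dart_jsonwebtoken/dart_jsonwebtoken.dart';
+
+String generateJwt(String userId) {
+  // Create a token carrying the user id as a claim
+  final jwt = JWT({'id': userId});
+  // Sign with an HMAC secret and set an expiry
+  return jwt.sign(
+    SecretKey('secret passphrase'),
+    expiresIn: const Duration(hours: 1),
+  );
+}
+```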
+
+5. Digest Authentication
+- Digest Auth provides a more secure form of authentication. We could make use of the [crypto](https://pub.dev/packages/crypto) Dart package, which provides implementations of several hashing algorithms. This authentication mechanism also prevents sending plain-text credentials.
+An example of creating the first hash (RFC 2617 defines HA1 as `MD5(username:realm:password)`) is shown below:
+```dart
+  final ha1 = md5.convert(
+    utf8.encode('$username:$realm:$password')
+  ).toString();
+```
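+
+Completing the picture, here is a minimal sketch of the full RFC 2617 response computation (omitting `qop`/`cnonce` for brevity; the helper name is illustrative):
+
+```dart
+import 'dart:convert';
+
+import 'package:crypto/crypto.dart';
+
+String _md5Hex(String input) => md5.convert(utf8.encode(input)).toString();
+
+// RFC 2617 (without qop): response = MD5(HA1:nonce:HA2)
+String digestResponse(String username, String realm, String password,
+    String method, String uri, String nonce) {
+  final ha1 = _md5Hex('$username:$realm:$password');
+  final ha2 = _md5Hex('$method:$uri'); // HA2 = MD5(method:digest-uri)
+  return _md5Hex('$ha1:$nonce:$ha2');
+}
+```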
+
+6. OAuth 1.0
+- This Dart library contains the key pieces necessary for implementing authentication with OAuth 1.0: [OAuth1](https://pub.dev/packages/oauth1). Some key steps in using it are as follows:
+```dart
+ // Get temporary credentials
+ final tempCredentials = await authorization.requestTemporaryCredentials(
+ 'https://callback-url.com'
+ );
+
+// Redirect user to authorization page
+final authorizationUrl = authorization.getResourceOwnerAuthorizationUrl(
+ tempCredentials
+);
+
+// After user authorization, get token credentials
+final tokenCredentials = await authorization.requestTokenCredentials(
+ tempCredentials,
+ verifier
+);
+```
+where `authorization` is an instance of the Authorization class, created with client credentials and a platform definition (platforms include X, Apple, Google, etc.) provided by the aforementioned library.
+It is important to note, however, that OAuth 1.0 has been deprecated in favor of the 2.0 framework. Learn more: [oauth1](https://oauth.net/core/1.0/)
+
+
+
+7. OAuth 2.0
+- This is a modern authorization framework. Here are the docs for using the framework: [OAuth 2.0](https://oauth.net/2/). A typical class implementing OAuth would be as shown below:
+```dart
+class OAuth2Service {
+ final String clientId;
+ final String clientSecret;
+ final String redirectUri;
+ final String authorizationEndpoint;
+ final String tokenEndpoint;
+
+ OAuth2Service({
+ required this.clientId,
+ required this.clientSecret,
+ required this.redirectUri,
+ required this.authorizationEndpoint,
+ required this.tokenEndpoint
+ });
+
+ // Generate authorization URL
+ String getAuthorizationUrl() {
+ return '$authorizationEndpoint?'
+ 'client_id=$clientId&'
+ 'redirect_uri=$redirectUri&'
+ 'response_type=code&'
+ 'scope=profile';
+ }
+}
+```
+- `clientId`: A unique identifier issued to your application when you register it with the OAuth provider.
+- `clientSecret`: A confidential secret known only to your application and the authorization server.
+- `redirectUri`: The URL where the authorization server redirects the user after they approve/deny the authorization request.
+- `authorizationEndpoint`: The URL at the authorization server where users are redirected to begin the OAuth flow.
+- `tokenEndpoint`: The URL at the authorization server used to exchange authorization codes for access tokens.
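+
+As a follow-up sketch, the authorization code returned to `redirectUri` would then be exchanged at `tokenEndpoint` roughly like this (assuming the *http* package; the parameter names follow RFC 6749):
+
+```dart
+import 'dart:convert';
+
+import 'package:http/http.dart' as http;
+
+// Exchange an authorization code for an access token (RFC 6749, section 4.1.3)
+Future<Map<String, dynamic>> exchangeCodeForToken(
+    OAuth2Service service, String code) async {
+  final response = await http.post(
+    Uri.parse(service.tokenEndpoint),
+    body: {
+      'grant_type': 'authorization_code',
+      'code': code,
+      'redirect_uri': service.redirectUri,
+      'client_id': service.clientId,
+      'client_secret': service.clientSecret,
+    },
+  );
+  return jsonDecode(response.body) as Map<String, dynamic>;
+}
+```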
+
+- Most of these implementations would be made using classes; unit tests would be written for these classes and their methods, and integration testing would also be considered where appropriate.
+
+
+
+4. Weekly Timeline: A rough week-wise timeline of activities that you would undertake.
+
+- A week-wise timeline is shown below
+
+| Week | Primary Focus | Key Activities | Deliverables/Outcomes |
+|------|--------------|----------------|----------------------|
+| Week 1 | Project Setup & Initial Research | - Review technical documentation for authentication strategies | - Comprehensive research report |
+| | | - Environment setup and dependency configuration | - Development environment setup |
+| Week 2 | Basic Authentication | - Implement `makeBasicAuthRequest` method | - Writing Basic Authentication class |
+| | | - Create secure base64 encoding functionality | - Unit tests for Basic Authentication |
+| | | - Add HTTPS implementation and security layers | - Basic security documentation |
+| Week 3 | API Key Authentication | - Implement header-based API key authentication | - API Key Authentication class |
+| | | - Implement query parameter-based API key authentication | - Writing API key authentication methods |
+| | | - Create secure storage mechanism for API keys | - Unit tests for API key methods |
+| Week 4 | Bearer Token Authentication | - Implement `makeRequestWithBearerToken` method | - Bearer Token Authentication class |
+| | | - Create token validation and verification | - Unit tests for token validation |
+| | | - Implement secure token storage | - Token security documentation |
+| Week 5 | JWT Implementation (Part 1) | - Set up `dart_jsonwebtoken` integration | - JWT generation functionality |
+| | | - Implement JWT generation method | - JWT verification method |
+| | | - Create token expiration handling | - Initial JWT testing framework |
+| Week 6 | JWT Implementation (Part 2) | - Implement error handling for JWT validation | - Complete JWT Authentication class |
+| | | - Add refresh token functionality | - Unit tests for JWT functionality |
+| | | - Create comprehensive JWT documentation | - JWT implementation documentation |
+| Week 7-8 | Digest Authentication | - Implement Crypto package integration | - Working Digest Authentication class |
+| | | - Create secure hash generation for credentials | - Hash verification functionality |
+| | | - Implement nonce and challenge handling | - Unit and integration tests |
+| Week 9 | OAuth 1.0 Implementation | - Set up OAuth1 package integration | - OAuth 1.0 implementation class |
+| | | - Implement temporary credentials request flow | - Authorization URL generation |
+| | | - Create token credentials handling | - OAuth 1.0 testing framework |
+| Week 10-11 | OAuth 2.0 Implementation | - Create OAuth2Service class | - Complete OAuth 2.0 implementation |
+| | | - Implement authorization URL generation | - Token endpoint integration |
+| | | - Add client ID/secret management | - OAuth 2.0 flow documentation |
+| Week 12 | Security Hardening | - Implement additional encryption layers | - Security audit report |
+| | | - Add robust error handling across all authentication methods | - Updated security documentation |
+| | | - Create secure token transmission mechanisms | - Security hardening test suite |
+| Week 13 | Integration Testing | - Create integration tests for all authentication flows | - Integration testing framework |
+| | | - Simulate various authentication scenarios | - Test coverage report |
+| | | - Fix issues discovered during testing | - Integration test documentation |
+| Week 14 | Performance Optimization | - Analyze authentication performance | - Performance optimization report |
+| | | - Implement caching mechanisms where appropriate | - Updated authentication classes |
+| | | - Optimize token refresh procedures | - Performance test results |
+| Week 15 | Documentation & Project Closure | - Create comprehensive API documentation | - Complete API documentation |
+| | | - Develop usage examples and guides | - Implementation examples |
+| | | - Finalize project and prepare for deployment | - Final project delivery report |
+
+In summary, this is an overview of the weekly timeline:
+*Weeks 1-4:* Setup, research, and implementation of simpler authentication methods (Basic Auth, API Key, Bearer Token)
+*Weeks 5-8:* Implementation of more complex authentication systems (JWT, Digest Authentication)
+*Weeks 9-11:* OAuth implementations (both 1.0 and 2.0)
+*Weeks 12-15:* Security hardening, testing, optimization, and documentation
+
+
+
diff --git a/doc/proposals/2025/gsoc/idea_Akshay_Waghmare_ai-agent-testing.md b/doc/proposals/2025/gsoc/idea_Akshay_Waghmare_ai-agent-testing.md
new file mode 100644
index 00000000..e72142f6
--- /dev/null
+++ b/doc/proposals/2025/gsoc/idea_Akshay_Waghmare_ai-agent-testing.md
@@ -0,0 +1,309 @@
+# AI Agent for API Testing and Automated Tool Integration
+
+## Personal Information
+- **Full Name:** Akshay Waghmare
+- **University Name:** Indian Institute of Information Technology, Allahabad (IIIT Allahabad)
+- **Program Enrolled In:** B.Tech in Electronics and Communication Engineering (ECE)
+- **Year:** Pre-final Year (Third Year)
+- **Expected Graduation Date:** May 2026
+
+## About Me
+I’m Akshay Waghmare, a pre-final year B.Tech student at IIIT Allahabad, majoring in Electronics and Communication Engineering. With a strong foundation in full-stack development and backend architecture, I have hands-on experience in technologies like **Next.js**, **Node.js**, **Spring Boot**, **Kafka**, **RabbitMQ**, and **Flutter**. I’ve interned at **Screenera.ai** and **Webneco Infotech**, working on building scalable, high-performance applications. My open-source contributions span organizations like **Wikimedia Foundation**, **C2SI**, and **OpenClimateFix**, and I’ve mentored aspiring developers at **OpenCode IIIT Allahabad**. I’ve also participated in several competitions, achieving **AIR 12** in the **Amazon ML Challenge**, **Goldman Sachs India Hackathon (National Finalist)**, and **Google GenAI Hackathon**. I’m passionate about AI, cloud technologies, and innovative software solutions, especially in automating tasks with AI agents and leveraging **Large Language Models (LLMs)** for smarter workflows.
+## Project Details
+- **Project Title:** AI Agent for API Testing and Automated Tool Integration
+- **Description:**
+ This project leverages Large Language Models (LLMs) to automate API testing by generating intelligent test cases, validating responses, and converting APIs into structured tool definitions for seamless integration with AI agent frameworks like **crewAI, smolagents, pydantic-ai, and langgraph**.
+
+- **Key Features:**
+ - Automated API discovery and structured parsing from OpenAPI specs, Postman collections, and raw API calls.
+ - AI-powered test case generation, including edge cases and security testing.
+ - Automated API request execution and intelligent validation using machine learning.
+ - Seamless tool integration with AI frameworks for advanced automation.
+ - Benchmark dataset & evaluation framework for selecting the best LLM backend for end users.
+
+
+## Proposed Idea: AI Agents for API Testing & Tool Definition Generator
+
+I propose an approach leveraging Large Language Models to support both API testing and framework integration. My solution combines intelligent test generation with automated tool definition creation, all powered by contextually-aware AI.
+
+The core of my approach is a unified pipeline that first parses and understands API specifications at a deep semantic level, then uses that understanding for two key purposes: generating comprehensive test suites and creating framework-specific tool definitions. This dual-purpose system will dramatically reduce the manual effort typically required for both tasks while improving quality and coverage.
+
+For the API testing component, we will focus on areas where traditional testing tools fall short - particularly intelligent edge case detection and business logic validation. By leveraging LLMs' ability to reason about APIs contextually, the system will identify potential issues that rule-based generators miss. The test generation will cover functional testing with parameter variations, edge cases including boundary values and invalid inputs, security testing for authentication and injection vulnerabilities, and even performance testing scenarios.
+
+For the framework integration component, we will then develop a flexible adapter system that generates properly typed tool definitions with appropriate validation rules for each target framework. This means developers can instantly convert their APIs into tool definitions for crewAI, langchain, pydantic-ai, langgraph, and other frameworks without manually rewriting specifications and validation logic.
+
+To address the benchmarking requirement in the project description, we will create a standardized dataset of diverse API specifications and implement a comprehensive evaluation framework. This will measure multiple dimensions including accuracy of generated tests and tools, API coverage percentage, relevance to the API's purpose, edge case detection ability, and cost efficiency across different LLM providers. This will enable users to make informed decisions about which model best fits their specific needs.
+
+## System Architecture
+
+The system architecture consists of several key components working together to form a pipeline:
+
+```mermaid
+flowchart TD
+ subgraph Client["Client Layer"]
+ Web[Web Interface]
+ CLI[Command Line Interface]
+ SDK[SDK/API Client]
+ end
+
+ subgraph Gateway["API Gateway"]
+ GW[API Gateway/Load Balancer]
+ Auth[Authentication Service]
+ end
+
+ subgraph Core["Core Services"]
+ subgraph APIAnalysis["API Analysis Service"]
+ Parser[API Specification Parser]
+ Analyzer[Endpoint Analyzer]
+ DependencyDetector[Dependency Detector]
+ end
+
+ subgraph TestGen["Test Generation Service"]
+ TestCaseGen[Test Case Generator]
+ TestDataGen[Test Data Generator]
+ TestSuiteOrg[Test Suite Organizer]
+ EdgeCaseGen[Edge Case Generator]
+ end
+
+ subgraph ToolGen["Tool Generation Service"]
+ ToolDefGen[Tool Definition Generator]
+ SchemaGen[Schema Generator]
+ FrameworkAdapter[Framework Adapter]
+ DocGen[Documentation Generator]
+ end
+ end
+
+ subgraph LLM["LLM Services"]
+ PromptMgr[Prompt Manager]
+ ModelRouter[Model Router]
+ TokenManager[Token Manager]
+ OutputParser[Output Parser]
+ CacheManager[Cache Manager]
+ end
+
+ subgraph Execution["Execution Services"]
+ subgraph Runner["Test Runner Service"]
+ Executor[Request Executor]
+ AuthManager[Auth Manager]
+ RateLimit[Rate Limiter]
+ Retry[Retry Manager]
+ end
+
+ subgraph Validator["Validation Service"]
+ SchemaValidator[Schema Validator]
+ LogicValidator[Business Logic Validator]
+ PerformanceValidator[Performance Validator]
+ SecurityValidator[Security Validator]
+ end
+
+ subgraph Reporter["Reporting Service"]
+ ResultCollector[Result Collector]
+ CoverageAnalyzer[Coverage Analyzer]
+ ReportGenerator[Report Generator]
+ Visualizer[Visualizer]
+ end
+ end
+
+ subgraph Data["Data Services"]
+ DB[(Database)]
+ Cache[(Cache)]
+ Storage[(Object Storage)]
+ Queue[(Message Queue)]
+ end
+
+ subgraph External["External Systems"]
+ TargetAPIs[Target APIs]
+ CISystem[CI/CD Systems]
+ AIFrameworks[AI Agent Frameworks]
+ Monitoring[Monitoring Systems]
+ end
+
+ %% Client to Gateway
+ Web --> GW
+ CLI --> GW
+ SDK --> GW
+
+ %% Gateway to Services
+ GW --> Auth
+ Auth --> Parser
+ Auth --> TestCaseGen
+ Auth --> ToolDefGen
+ Auth --> Executor
+
+ %% API Analysis Flow
+ Parser --> Analyzer
+ Analyzer --> DependencyDetector
+ Parser --> DB
+
+ %% Test Generation Flow
+ Analyzer --> TestCaseGen
+ TestCaseGen --> TestDataGen
+ TestDataGen --> TestSuiteOrg
+ TestCaseGen --> EdgeCaseGen
+ EdgeCaseGen --> TestSuiteOrg
+ TestSuiteOrg --> DB
+
+ %% Tool Generation Flow
+ Analyzer --> ToolDefGen
+ ToolDefGen --> SchemaGen
+ SchemaGen --> FrameworkAdapter
+ FrameworkAdapter --> DocGen
+ ToolDefGen --> DB
+
+ %% LLM Integration
+ TestCaseGen --> PromptMgr
+ EdgeCaseGen --> PromptMgr
+ ToolDefGen --> PromptMgr
+ LogicValidator --> PromptMgr
+ PromptMgr --> ModelRouter
+ ModelRouter --> TokenManager
+ TokenManager --> OutputParser
+ ModelRouter --> CacheManager
+ CacheManager --> Cache
+
+ %% Execution Flow
+ TestSuiteOrg --> Executor
+ Executor --> AuthManager
+ AuthManager --> RateLimit
+ RateLimit --> Retry
+ Executor --> TargetAPIs
+ TargetAPIs --> Executor
+ Executor --> SchemaValidator
+ SchemaValidator --> LogicValidator
+ LogicValidator --> PerformanceValidator
+ PerformanceValidator --> SecurityValidator
+ SchemaValidator --> ResultCollector
+ LogicValidator --> ResultCollector
+ PerformanceValidator --> ResultCollector
+ SecurityValidator --> ResultCollector
+
+ %% Reporting Flow
+ ResultCollector --> CoverageAnalyzer
+ CoverageAnalyzer --> ReportGenerator
+ ReportGenerator --> Visualizer
+ ReportGenerator --> Storage
+
+ %% Data Service Integration
+ DB <--> Parser
+ DB <--> TestSuiteOrg
+ DB <--> ToolDefGen
+ DB <--> ResultCollector
+ Queue <--> Executor
+ Storage <--> ReportGenerator
+
+ %% External Integrations
+ ReportGenerator --> CISystem
+ FrameworkAdapter --> AIFrameworks
+ Reporter --> Monitoring
+
+ %% Styling
+ classDef client fill:#3498db,stroke:#2980b9,color:white
+ classDef gateway fill:#f1c40f,stroke:#f39c12,color:black
+ classDef core fill:#27ae60,stroke:#229954,color:white
+ classDef llm fill:#9b59b6,stroke:#8e44ad,color:white
+ classDef execution fill:#e74c3c,stroke:#c0392b,color:white
+ classDef data fill:#16a085,stroke:#1abc9c,color:white
+ classDef external fill:#7f8c8d,stroke:#2c3e50,color:white
+
+ class Web,CLI,SDK client
+ class GW,Auth gateway
+ class Parser,Analyzer,DependencyDetector,TestCaseGen,TestDataGen,TestSuiteOrg,EdgeCaseGen,ToolDefGen,SchemaGen,FrameworkAdapter,DocGen core
+ class PromptMgr,ModelRouter,TokenManager,OutputParser,CacheManager llm
+ class Executor,AuthManager,RateLimit,Retry,SchemaValidator,LogicValidator,PerformanceValidator,SecurityValidator,ResultCollector,CoverageAnalyzer,ReportGenerator,Visualizer execution
+ class DB,Cache,Storage,Queue data
+ class TargetAPIs,CISystem,AIFrameworks,Monitoring external
+
+
+
+```
+
+1. **API Specification Parser**: This component handles multiple API specification formats (OpenAPI, GraphQL, gRPC, etc.) and normalizes them into a unified internal representation. I'll build on existing parsing libraries but extend them with custom logic to extract semantic meaning and relationships between endpoints.
+
+2. **LLM Integration Layer**: A provider-agnostic abstraction supporting multiple LLM services with intelligent routing, caching, and fallback mechanisms. Prompt templates will be version-controlled and systematically optimized through iterative testing to achieve the best results.
+
+3. **Test Generation Engine**: This core component uses LLMs to analyze API specifications and generate comprehensive test suites. For large APIs that might exceed context limits, I'll implement a chunking approach that processes endpoints in logical batches while maintaining awareness of their relationships.
+
+4. **Test Execution Runtime**: Once tests are generated, this component executes them against target APIs, handling authentication, implementing appropriate retry logic, respecting rate limits, and collecting comprehensive response data for validation.
+
+5. **Response Validation Service**: This combines traditional schema validation with LLM-powered semantic validation to catch subtle issues in responses that might comply with the schema but violate business logic or contain inconsistent data.
+
+6. **Tool Definition Generator**: This component converts API specifications into properly structured tool definitions for various AI frameworks, handling the specific requirements and patterns of each target framework.
+
+7. **Benchmark Framework**: The evaluation system that assesses LLM performance on standardized tasks with detailed metrics for accuracy, coverage, relevance, and efficiency.
+
+All components will be implemented in Python with comprehensive test coverage and documentation. The architecture will be modular, allowing for component reuse and independent scaling as needs evolve.
+
+For frontend integration, I can either develop integration points with your existing Flutter-based application or implement a CLI interface. The backend will expose a clear API that can be consumed by either approach. I'd welcome discussion on which option would better align with your current infrastructure and team workflows - the CLI would offer simplicity for CI/CD integration, while Flutter integration would provide a more seamless experience for existing users.
+
+## System Workflow and Interactions
+To illustrate how the components of my proposed system interact, I've created a sequence diagram showing the key workflows:
+```mermaid
+sequenceDiagram
+    actor User
+    participant UI as Client (API Dash UI)/CLI Interface
+    participant Orch as Orchestrator
+    participant Parser as API Parser
+    participant LLM as LLM Service
+    participant TestGen as Test Generator
+    participant Runner as Test Runner
+    participant Validator as Response Validator
+    participant Reporter as Test Reporter
+    participant ToolGen as Tool Generator
+    participant API as Target API
+
+ User->>UI: Upload API Spec / Define Test Scenario
+ UI->>Orch: Submit Request
+ Orch->>Parser: Parse API Specification
+ Parser-->>Orch: Structured API Metadata
+
+ Orch->>LLM: Generate Test Cases
+ LLM->>TestGen: Create Test Scenarios
+ TestGen-->>Orch: Generated Test Cases
+
+ Orch->>Runner: Execute Tests
+ Runner->>API: Send API Requests
+ API-->>Runner: API Responses
+
+ Runner->>Validator: Validate Responses
+ Validator->>LLM: Analyze Response Quality
+ LLM-->>Validator: Validation Results
+ Validator-->>Runner: Validation Results
+
+ Runner-->>Orch: Test Execution Results
+ Orch->>Reporter: Generate Reports
+ Reporter-->>UI: Display Results
+
+ alt Tool Definition Generation
+ User->>UI: Request Tool Definitions
+ UI->>Orch: Forward Request
+ Orch->>ToolGen: Generate Tool Definitions
+ ToolGen->>LLM: Optimize Tool Descriptions
+ LLM-->>ToolGen: Enhanced Descriptions
+ ToolGen-->>Orch: Framework-Specific Definitions
+ Orch-->>UI: Return Tool Definitions
+ UI-->>User: Download Definitions
+ end
+
+
+```
+This diagram demonstrates the four key workflows in the system:
+
+1. API Specification Analysis - The system ingests and parses API specifications, then uses LLM to understand them semantically.
+2. Test Generation - Using the parsed API and LLM intelligence, the system creates comprehensive test suites tailored to the API's functionality.
+3. Test Execution - Tests are run against the actual API, with responses validated both technically and semantically using LLM-powered understanding.
+4. Tool Definition Generation - The system leverages its understanding of the API to create framework-specific tool definitions that developers can immediately use.
+
+The LLM service is central to the entire workflow, providing the intelligence needed for deep API understanding, smart test generation, semantic validation, and appropriate tool definition creation.
+
+## Clarifying Questions
+
+I have some questions to better understand the project:
+
+1. Which AI frameworks are highest priority for tool definition generation? Is there a specific order of importance for crewAI, langchain, pydantic-ai, and langgraph?
+
+2. Do you have preferred LLM providers that should be prioritized for integration, or should the system be designed to work with any provider through a common interface?
+
+3. Are there specific types of APIs that should be given special focus in the benchmark dataset (e.g., e-commerce, financial, IoT)?
+
+4. How is the frontend planned? Will it be a standalone interface, an extension of an existing dashboard, or fully integrated into the API Dash client?
+
diff --git a/doc/proposals/2025/gsoc/idea_Mohammed_Ayaan_ai_ui_designer.md b/doc/proposals/2025/gsoc/idea_Mohammed_Ayaan_ai_ui_designer.md
new file mode 100644
index 00000000..a362f109
--- /dev/null
+++ b/doc/proposals/2025/gsoc/idea_Mohammed_Ayaan_ai_ui_designer.md
@@ -0,0 +1,68 @@
+### Initial Idea Submission
+
+- Full Name: Mohammed Ayaan
+- University name: PES University Bangalore
+- Program you are enrolled in (Degree & Major/Minor): BTech (AI/ML)
+- Year: 2nd year (2023-2027)
+- Expected graduation date: 2027
+
+- Project Title: AI UI Designer for APIs
+- Relevant issues: #617
+
+Idea description:
+
+My assumption here is that the requirement is to provide an advanced preview for API responses
+which are of type json/xml.
+
+Proposed solution -
+
+When the API responses are of type XML/JSON, we can show additional advanced preview action buttons on the
+response widget, e.g., Data table, Chart, Summary.
+
+### 12-Mar-2025 - Proof of concept / update
+
+I executed a quick proof of concept using ChatGPT and the results were amazing.
+
+=> I was able to generate an entire Flutter widget with customizations. This essentially means that our static widget screens
+can now be mere placeholders/containers which render the widgets given by the LLM codegen service.
+
+=> We need not restrict the types of charts we can support. We can in fact take guidance from the LLM on what
+kind of charts would be helpful in analyzing the data and pick maybe the best 5 types.
+
+=> However, we need to have control over the look and feel and API Dash UI standards. These can be provided
+as inputs to the prompt, e.g.: top/bottom margins should be 5%, bar graph color must be blue, or the width of
+bars must be 5 px, etc.
+Sample charts generated using JSON data.
+
+We can design a fixed layout as per API Dash UI standards (font, colors, themes) for each of the
+visualizations. This would comprise the static content, reusing the existing widgets. This will ensure a
+uniform look and feel for all users,
+e.g.: datatable_page.dart, chart_page.dart, summary_page.dart
+
+Dynamic data based on the JSON/XML input would come from the codellama service, e.g. visualizations_service.dart.
+This service will interface with the codellama server by providing the JSON/XML as input and
+return the generated HTML/Dart widget code, or maybe just the collections which need to be placed in
+the placeholders in the static screen widgets. This can be evaluated and decided as per the support provided
+by the codellama service. A rough sketch of such a service is shown below.
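+
+All names in this sketch are hypothetical, since the actual contract depends on the codellama service:
+
+```dart
+// visualizations_service.dart (hypothetical sketch)
+enum VisualizationKind { dataTable, chart, summary }
+
+class VisualizationResult {
+  final String? widgetCode; // generated Dart widget code, or
+  final List<Map<String, dynamic>>? rows; // just the data for placeholders
+  VisualizationResult({this.widgetCode, this.rows});
+}
+
+abstract class VisualizationsService {
+  /// Sends the raw JSON/XML response body to the codegen LLM and returns
+  /// either generated widget code or transformed data collections that can
+  /// be dropped into the static placeholder screens.
+  Future<VisualizationResult> generate({
+    required String responseBody, // raw JSON/XML from the API call
+    required VisualizationKind kind,
+    Map<String, String>? styleHints, // e.g. {'barColor': 'blue'}
+  });
+}
+```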
+
+To ensure the kind of output (HTML/Dart code, or transformed JSON/XML data) we need,
+we can provide the LLM with few-shot prompts as examples.
+
+On click of Data table =>
+Call visualization_service.dart to get the dynamic content from the JSON/XML as required for the data table.
+Render the widget. The widget should provide capabilities to sort and search.
+
+On click of Chart =>
+We can ask for additional inputs like which type of chart, on what columns, etc.
+Call visualization_service.dart to get the dynamic content from the JSON/XML as required for the chart widget.
+Render the widget. Multiple chart types can be supported, like bar graph, pie, bubble, etc.
+
+On click of Summary =>
+We can provide some options like top 5 data points (e.g. positive reviews/ratings), group-by
+or outliers, book summary, weather report, etc.
+Call visualization_service.dart to get the dynamic content from the JSON/XML as required for the summary widget.
+
+Additionally, I can perhaps also contribute to the HAR importer project. I do have some exposure to HAR formats.
+I will analyze and update the idea shortly.
diff --git a/doc/proposals/2025/gsoc/idea_Nideesh_Bharath_Kumar_AI_API_Eval.md b/doc/proposals/2025/gsoc/idea_Nideesh_Bharath_Kumar_AI_API_Eval.md
new file mode 100644
index 00000000..34c63dd8
--- /dev/null
+++ b/doc/proposals/2025/gsoc/idea_Nideesh_Bharath_Kumar_AI_API_Eval.md
@@ -0,0 +1,54 @@
+# AI API Eval Framework For Multimodal Generative AI
+
+## Personal Information
+- **Full Name:** Nideesh Bharath Kumar
+- **University Name:** Rutgers University–New Brunswick
+- **Program Enrolled In:** B.S. Computer Science, Artificial Intelligence Track
+- **Year:** Junior Year (Third Year)
+- **Expected Graduation Date:** May 2026
+
+## About Me
+I’m **Nideesh Bharath Kumar**, a junior (third year) at Rutgers University–New Brunswick pursuing a **B.S. in Computer Science on the Artificial Intelligence Track**. I have a strong foundation in full stack development and AI engineering: I have project and internship experience in technologies like **Dart/Flutter, LangChain, RAG, Vector Databases, AWS, Docker, Kubernetes, PostgreSQL, FastAPI, OAuth,** and other technologies that aid in developing scalable and AI-powered systems. I have interned at **Manomay Tech, IDEA, and Newark Science and Sustainability**, developing scalable systems and managing AI systems, and completed fellowships with **Google** and **CodePath**, developing my technical skills. I’ve also won awards in hackathons, achieving **Overall Best Project in the CS Base Climate Hackathon for a Flutter-based project** and **Best Use of Terraform in the HackRU Hackathon for a Computer Vision Smart Shopping Cart**. I’m passionate about building distributed, scalable systems and AI technologies, and API Dash is an amazing tool that can facilitate the process of building these solutions through easy visualization and testing of APIs; I believe my skills in **AI development** and experience with **Dart/Flutter** and **APIs** put me in a position to effectively contribute to this project.
+
+## Project Details
+**Project Title:** AI API Eval Framework For Multimodal Generative AI
+**Description:**
+This project will develop a **Dart-centered evaluation framework** designed to simplify the testing of generative AI models across **multiple modalities (text, image, code)**. It will integrate established evaluation toolkits: **llm-harness** for text, **torch-fidelity** and **CLIP** for images, and **HumanEval/MBPP** with **CodeBLEU** for code. The project will provide a unified config layer which can support standard and custom benchmark datasets and evaluation metrics, exposed through a **user-friendly interface in API Dash** that lets the user select the model type, manage datasets (local or downloadable), and choose evaluation metrics (standard toolkit or custom script). On top of this, **real-time visual analytics** will visualize the progress of the metrics, and evaluations will run with **parallelized batch processing**.
+
+**Related Issue:** [#618](https://github.com/foss42/apidash/issues/618)
+
+**Key Features:**
+1) Unified Evaluation Configuration:
+  - A config file in YAML will serve as the abstraction layer. It will be generated from the user's selection of model type, dataset, and evaluation metrics, and will route the job to either llm-harness, torch-fidelity and CLIP, or HumanEval and MBPP with CodeBLEU. Additionally, custom evaluation scripts and datasets can be attached to this config file and interpreted by the system (a hypothetical example is sketched after this list).
+  - This abstraction layer ensures that however these specifications differ between eval jobs, each job is routed to the correct resources while still providing a centralized layer for creating it. Furthermore, these config files can be stored in history so the same jobs can be re-run later.
+
+2) Intuitive User Interface
+ - When starting an evaluation, users can select the model type (text, image, or code) through a drop-down menu. The system will provide a list of standard datasets and use cases. The user can select these datasets, or attach a custom one. If the user does not have this dataset locally in the workspace, they can attach it using file explorer or download it from the web. Furthermore, the user can select standard evaluation metrics from a list or attach a custom script.
+
+3) Standard Evaluation Pipelines
+ - The standard evaluation pipelines include text, image, and code generation.
+  - For text generation, llm-harness will be used, utilizing custom datasets and tasks to measure Precision, Recall, F1 Score, BLEU, ROUGE, and Perplexity. Custom integration of datasets and evaluation scores can be done by interfacing with the llm-harness custom test config file.
+ - For image generation, torch-fidelity can be used to calculate Fréchet Inception Distance and Inception Score by comparing against a reference image database. For text to image generation, CLIP scores can be used to ensure connection between prompt and generated image. Custom integration of datasets and evaluation scores can be done through a custom interface created using Dart.
+ - For code generation, tests like HumanEval and MBPP can be used for functional correctness and CodeBLEU can be used for code quality checking. Custom integration will be done the same way as image generation, with a custom interface created using Dart for functional test databases and evaluation metrics.
+
+4) Batch Evaluations
+  - Parallel processing will be supported via async runs of the tests, with a progress bar in API Dash tracking the number of processed rows.
+
+5) Visualizations of Results
+ - Visualizations of results will be provided as the tests are running, providing live feedback of model performance, as well as a general summary of visualizations after all evals have been run.
+ - Bar Graphs: These will be displayed from a range of 0 to 100% accuracy to visualize a quick performance comparison across all tested models.
+ - Line Charts: These will be displayed to show performance trends over time of models, comparing model performance across different batches as well as between each model.
+ - Tables: These will provide detailed summary statistics about scores for each model across different benchmarks and datasets.
+ - Box Plots: These will show the distribution of scores per batch, highlighting outliers and variance, while also having side-by-side comparisons with different models.
+
+6) Offline and Online Support
+ - Offline: Models that are offline will be supported by pointing to the script the model uses to run, and datasets that are locally stored.
+ - Online: These models can be connected for eval through an API endpoint, and datasets can be downloaded with access to the link.
+
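+A hypothetical sketch of what such a generated config could look like (all field names and values below are illustrative assumptions, not a finalized schema):
+
+```yaml
+# Illustrative eval-job config; the schema is an assumption for discussion.
+model_type: text                 # text | image | code
+model:
+  endpoint: https://api.example.com/v1/generate   # online model, or a local script path for offline
+dataset:
+  source: local                  # local | download
+  path: ./datasets/qa_benchmark.jsonl
+metrics:                         # routed to llm-harness for text models
+  - f1
+  - bleu
+  - rouge
+custom_eval_script: null         # optional user-supplied script
+batch:
+  parallel_workers: 4
+```
+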
+**Architecture:**
+1) UI Interface: Built with Dart/Flutter
+2) Configuration Manager: Built with Dart, uses YAML for config file
+3) Dataset Manager: Built with Dart, REST APIs for accessing endpoints
+4) Evaluation Manager: Built with a Dart - Python layer to manage connections between evaluators and API Dash
+5) Batch Processing: Built with Dart Async requests
+6) Visualization and Results: Built with Dart/Flutter, using packages like fl_chart and syncfusion_flutter_charts
diff --git a/doc/proposals/2025/gsoc/idea_NingWei_AI UI Designer for APIs.md b/doc/proposals/2025/gsoc/idea_NingWei_AI UI Designer for APIs.md
new file mode 100644
index 00000000..b25d35cd
--- /dev/null
+++ b/doc/proposals/2025/gsoc/idea_NingWei_AI UI Designer for APIs.md
@@ -0,0 +1,129 @@
+# GSoC 2025 Proposal: AI UI Designer for APIs
+
+## About
+
+**Full Name**: Ning Wei
+**Contact Info**: Allenwei0503@gmail.com
+**Discord Handle**: @allen_wn
+**GitHub Profile**: [https://github.com/AllenWn](https://github.com/AllenWn)
+**LinkedIn**: [https://www.linkedin.com/in/ning-wei-allen0503](https://www.linkedin.com/in/ning-wei-allen0503)
+**Time Zone**: UTC+8
+**Resume**: https://drive.google.com/file/d/1Zvf1IhKju3rFfnDsBW1WmV40lz0ZMNrD/view?usp=sharing
+
+## University Info
+
+**University**: University of Illinois at Urbana-Champaign
+**Program**: B.S. in Computer Engineering
+**Year**: 2nd year undergraduate
+**Expected Graduation**: May 2027
+
+---
+
+## Motivation & Past Experience
+
+1. **Have you worked on or contributed to a FOSS project before?**
+Not yet officially, but I’ve been actively exploring open source projects like API Dash and contributing via discussion and design planning. I am currently studying the API Dash repository and developer guide to prepare for my first PR.
+
+2. **What is your one project/achievement that you are most proud of? Why?**
+I'm proud of building an AI-assisted email management app using Flutter and Go, which automatically categorized and responded to emails using ChatGPT API. It gave me end-to-end experience in integrating APIs, generating dynamic UIs, and designing developer-friendly tools.
+
+3. **What kind of problems or challenges motivate you the most to solve them?**
+I enjoy solving problems that eliminate repetitive work for developers and improve workflow productivity — especially through automation and AI integration.
+
+4. **Will you be working on GSoC full-time?**
+Yes. I will be dedicating full-time to this project during the summer.
+
+5. **Do you mind regularly syncing up with the project mentors?**
+Not at all — I look forward to regular syncs and feedback to align with the project vision.
+
+6. **What interests you the most about API Dash?**
+API Dash is focused on improving the developer experience around APIs, which is something I care deeply about. I love the vision of combining UI tools with AI assistance in a privacy-first, extensible way.
+
+7. **Can you mention some areas where the project can be improved?**
+- More intelligent code generation from API response types
+- Drag-and-drop UI workflow
+- Visual previews and theming customization
+- Integration with modern LLMs for field-level naming and layout suggestions
+
+---
+
+## Project Proposal Information
+
+### Proposal Title
+
+AI UI Designer for APIs
+
+### Relevant Issues: [#617](https://github.com/foss42/apidash/issues/617)
+
+### Abstract
+
+This project aims to develop an AI-powered assistant within API Dash that automatically generates dynamic user interfaces (UI) based on API responses (JSON/XML). The goal is to allow developers to instantly visualize, customize, and export usable Flutter UI code from raw API data. The generated UI should adapt to the structure of the API response and be interactive, with features like sorting, filtering, and layout tweaking. This tool will streamline frontend prototyping and improve developer productivity.
+
+---
+
+### Detailed Description
+
+The AI UI Designer will be a new feature integrated into the API Dash interface, triggered by a button after an API response is received. It will analyze the data and suggest corresponding UI layouts using Dart/Flutter widgets such as `DataTable`, `Card`, or `Form`.
+
+#### Step 1: Parse API Response Structure
+
+- Focus initially on JSON (XML can be added later)
+- Build a recursive parser to convert the API response into a schema-like tree
+- Extract field types, array/object structure, nesting depth
+- Identify patterns (e.g., timestamps, prices, lists)
+
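+A minimal sketch of the recursive schema inference in Dart (function and field names are assumptions for illustration):
+
+```dart
+// Infers a schema-like tree from a decoded JSON value (illustrative sketch).
+import 'dart:convert';
+
+Map<String, dynamic> inferSchema(dynamic node) {
+  if (node is Map<String, dynamic>) {
+    return {
+      'type': 'object',
+      'fields': node.map((key, value) => MapEntry(key, inferSchema(value))),
+    };
+  }
+  if (node is List) {
+    return {
+      'type': 'list',
+      'item': node.isEmpty ? {'type': 'unknown'} : inferSchema(node.first),
+    };
+  }
+  if (node is num) return {'type': 'number'};
+  if (node is bool) return {'type': 'bool'};
+  return {'type': 'string'};
+}
+
+void main() {
+  final response = jsonDecode('[{"name": "Alice", "price": 9.99}]');
+  print(inferSchema(response)); // a list of objects → candidate for a table
+}
+```
+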
+#### Step 2: Design AI Agent Logic
+
+- Use a rule-based system to map schema to UI components
+ - List of objects → Table
+ - Simple object → Card/Form
+ - Number over time → Line Chart (optional)
+- Integrate LLM backend (e.g., Ollama, GPT API) to enhance:
+ - Field labeling
+ - Layout suggestion
+ - Component naming
+
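+A minimal sketch of the rule-based mapping, building on the inferred schema above (the returned widget names are purely illustrative):
+
+```dart
+// Maps an inferred schema node to a suggested Flutter widget kind (sketch).
+String suggestWidget(Map<String, dynamic> schema) {
+  final item = schema['item'];
+  if (schema['type'] == 'list' && item is Map && item['type'] == 'object') {
+    return 'DataTable'; // list of objects → table
+  }
+  if (schema['type'] == 'object') {
+    return 'Card'; // simple object → card/form
+  }
+  return 'Text'; // scalar fallback
+}
+```
+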
+#### Step 3: Generate UI in Flutter
+
+- Dynamically generate:
+ - `DataTable`, `Card`, `TextField`, `Dropdown`, etc.
+ - Optional chart widgets (e.g., `fl_chart`)
+- Support:
+ - Layout rearrangement (form-based or drag-drop)
+ - Field visibility toggles
+ - Previewing final UI
+
+#### Step 4: Export UI Code
+
+- Export generated layout as Dart code
+- Allow download or copy-to-clipboard
+- Support JSON config export (optional for renderer-based architecture)
+
+#### Step 5: Integrate into API Dash
+
+- Add AI UI Designer button in the API response view
+- Launch UI editing pane inside app
+- Ensure local-only, privacy-friendly execution
+- Write tests, docs, and polish UX
+
+---
+
+## Weekly Timeline (Tentative)
+
+| Week | Milestone |
+|------|-----------|
+| Community Bonding | Join Discord, interact with mentors, finalize approach, get feedback |
+| Week 1–2 | Build and test JSON parser → generate basic schema |
+| Week 3–4 | Implement rule-based UI mapper; generate simple widgets |
+| Week 5–6 | Integrate initial Flutter component generator; allow basic UI previews |
+| Week 7 | Midterm Evaluation |
+| Week 8–9 | Add customization options (visibility, layout) |
+| Week 10 | Integrate AI backend (e.g., Ollama/GPT) for suggestions |
+| Week 11–12 | Add export functions (code, JSON config) |
+| Week 13 | Final polish, tests, docs |
+| Week 14 | Final Evaluation, feedback, and delivery |
+
+---
+
+Thanks again for your time and guidance. I’ve already started studying the API Dash codebase and developer guide, and I’d love your feedback on this plan — does it align with your vision?
+If selected, I’m excited to implement this project. If this idea is already taken, I’m open to switching to another API Dash project that fits my background.
diff --git a/doc/proposals/2025/gsoc/idea_april_lin_api_explorer.md b/doc/proposals/2025/gsoc/idea_april_lin_api_explorer.md
new file mode 100644
index 00000000..2fc205e2
--- /dev/null
+++ b/doc/proposals/2025/gsoc/idea_april_lin_api_explorer.md
@@ -0,0 +1,129 @@
+### Initial Idea Submission
+
+**Full Name:** April Lin
+**University Name:** University of Illinois Urbana-Champaign
+**Program (Degree & Major/Minor):** Master in Electrical and Computer Engineering
+**Year:** first year
+**Expected Graduation Date:** 2026
+
+**Project Title:** API Explorer
+**Relevant Issues:** [https://github.com/foss42/apidash/issues/619](https://github.com/foss42/apidash/issues/619)
+
+**Idea Description:**
+
+I have divided the design of the API explorer into three major steps:
+
+1. **Designing the UI**
+2. **Designing the API template model**
+3. **Using AI tools to automatically extract API information from a given website**
+
+---
+
+## 1. UI Design (User Journey)
+
+In this step, I primarily designed two interfaces for the API explorer: the first is the main API Explorer interface, and the second is a detailed interface for each API template.
+
+### API Explorer
+
+1. **Accessing the API Explorer**
+ - In the left-hand sidebar, users will find an “API Explorer” icon.
+ - Clicking this icon reveals the main API template search interface on the right side of the screen.
+
+2. **Browsing API Templates**
+ - At the top of the main area, there is a search bar that supports fuzzy matching by API name.
+ - Directly beneath the search bar are category filters (e.g., AI, Finance, Web3, Social Media).
+ - Users can click “More” to view an expanded list of all available categories.
+ - The page displays each template in a **card layout**, showing the API’s name, a short description, and (optionally) an image or icon.
+
+### API Templates
+
+1. **Selecting a Template**
+ - When a user clicks on a card (for example, **OpenAI**), they navigate to a dedicated page for that API template.
+ - This page lists all the available API endpoints or methods in a collapsible/expandable format (e.g., “API name 2,” “API name 3,” etc.).
+ - Each listed endpoint describes what it does—users can select which methods they want to explore or import into their workspace.
+
+2. **Exploring an API Method**
+ - Within this detailed view, users see request details such as **HTTP method**, **path**, **headers**, **body**, and **sample response**.
+ - If the user wants to try out an endpoint, they can import it into their API collections by clicking **import**.
+ - Each method will include all the fields parsed through the automated process. For the detailed API field design, please refer to **Step Two**.
+
+---
+
+## 2. Updated Table Design
+
+Below is the model design for the API explorer.
+
+### **Base Table: `api_templates`**
+- **Purpose:**
+ Stores the common properties for all API templates, regardless of their type.
+
+- **Key Fields:**
+ - **id**:
+ - Primary key (integer or UUID) for unique identification.
+ - **name**:
+ - The API name (e.g., “OpenAI”).
+ - **api_type**:
+ - Enumerated string indicating the API type (e.g., `restful`, `graphql`, `soap`, `grpc`, `sse`, `websocket`).
+ - **base_url**:
+ - The base URL or service address (applicable for HTTP-based APIs and used as host:port for gRPC).
+ - **image**:
+ - A text or string field that references an image (URL or path) representing the API’s logo or icon.
+ - **category**:
+ - A field (array or string) used for search and classification (e.g., "finance", "ai", "devtool").
+ - **description**:
+ - Textual description of the API’s purpose and functionality.
+
+### **RESTful & GraphQL Methods Table: `api_methods`**
+- **Purpose:**
+ Manages detailed configurations for individual API requests/methods, specifically tailored for RESTful and GraphQL APIs.
+
+- **Key Fields:**
+ - **id**:
+ - Primary key (UUID).
+ - **template_id**:
+ - Foreign key linking back to `api_templates`.
+ - **method_type**:
+ - The HTTP method (e.g., `GET`, `POST`, `PUT`, `DELETE`) or the operation type (`query`, `mutation` for GraphQL).
+ - **method_name**:
+ - A human-readable name for the method (e.g., “Get User List,” “Create Order”).
+ - **url_path**:
+ - The relative path appended to the `base_url` (for RESTful APIs).
+ - **description**:
+ - Detailed explanation of the method’s functionality.
+ - **headers**:
+ - A JSON field storing default header configurations (e.g., `Content-Type`, `Authorization`).
+ - **authentication**:
+ - A JSON field for storing default authentication details (e.g., Bearer Token, Basic Auth).
+ - **query_params**:
+ - A JSON field for any default query parameters (optional, typically for RESTful requests).
+ - **body**:
+ - A JSON field containing the default request payload, including required fields and default values.
+ - **sample_response**:
+ - A JSON field providing an example of the expected response for testing/validation.
+
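+As an illustration, a row from these tables could map onto Dart models along the following lines (class and field names simply mirror the tables above; this is a sketch, not a committed schema):
+
+```dart
+// Illustrative Dart model mirroring the proposed api_templates table.
+class ApiTemplate {
+  const ApiTemplate({
+    required this.id,
+    required this.name,
+    required this.apiType,
+    required this.baseUrl,
+    required this.description,
+    this.image,
+    this.category = const [],
+  });
+
+  final String id;          // primary key (UUID)
+  final String name;        // e.g. "OpenAI"
+  final String apiType;     // restful | graphql | soap | grpc | sse | websocket
+  final String baseUrl;     // base URL, or host:port for gRPC
+  final String? image;      // URL or path to the logo/icon
+  final List<String> category;
+  final String description;
+}
+```
+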
+---
+
+## 3. Automated Extraction (Parser Design)
+
+I think there are two ways to design the automated pipeline: the first is to use AI tools for automated parsing, and the second is to employ a rule-based approach.
+
+### **AI-Based Parser**
+- For each parser type (OpenAPI, HTML, Markdown), design a dedicated prompt agent to parse the API methods.
+- The prompt includes model fields (matching the data structures from [Step Two](#2-updated-table-design)) and the required API category, along with the API URL to be parsed.
+- The AI model is instructed to output the parsed result in **JSON format**, aligned with the schema defined in `api_templates` and `api_methods`.
+
+### **Non-AI (Rule-Based) Parser**
+- **OpenAPI**: Use existing libraries (e.g., Swagger/OpenAPI parser libraries) to read and interpret JSON or YAML specs.
+- **HTML**: Perform DOM-based parsing or use regex patterns to identify endpoints, parameter names, and descriptions.
+- **Markdown**: Utilize Markdown parsers (e.g., remark, markdown-it) to convert the text into a syntax tree and extract relevant sections.
+
+## Questions
+
+1. **Database Selection**
+ - Which type of database should be used for storing API templates and methods? Are there any specific constraints or preferences (e.g., relational vs. NoSQL, performance requirements, ease of integration) we should consider?
+
+2. **Priority of Automated Parsing**
+ - What is the preferred approach for automated parsing of OpenAPI/HTML files? Would an AI-based parsing solution be acceptable, or should we prioritize rule-based methods for reliability and simplicity?
+
+3. **UI Interaction Flow**
+   - Can I add a dedicated “API Explorer” menu item in the left navigation bar?
diff --git a/doc/proposals/2025/gsoc/idea_balasubramaniam_api_explorer.md b/doc/proposals/2025/gsoc/idea_balasubramaniam_api_explorer.md
new file mode 100644
index 00000000..ee5b39ec
--- /dev/null
+++ b/doc/proposals/2025/gsoc/idea_balasubramaniam_api_explorer.md
@@ -0,0 +1,101 @@
+# **Initial Idea Submission : API Explorer**
+
+**Full Name:** BALASUBRAMANIAM L
+**University Name:** Saveetha Engineering College
+**Program (Degree & Major/Minor):** Bachelor of Technology in Machine Learning
+**Year:** First year
+**Expected Graduation Date:** 2028
+
+**Project Title:** API Explorer
+**Relevant Issues:** [https://github.com/foss42/apidash/issues/619](https://github.com/foss42/apidash/issues/619)
+
+## **Project Overview**
+
+Our goal is to enhance API Dash by adding an API Explorer feature. This feature allows users to discover, browse, search, and import pre-configured API endpoints for testing and exploration. All API templates will be maintained in YAML, JSON, HTML, and Markdown formats within a dedicated folder in the existing Apidash GitHub repository.
+
+In the initial phase, contributors can manually add new API definition files (YAML, JSON, HTML, and MD) to the repo, run a local Dart script to process them into structured JSON format, and then commit and push the updated files. A Dart cron job will periodically check for new or modified API files and process them automatically. In the future, we plan to automate this process fully with GitHub Actions.
+
+---
+
+### **Key Concepts**
+
+- **File Addition:**
+ Contributors add new API files (YAML, JSON, HTML, or MD) to a designated folder (`/apis/`) in the Apidash repository.
+
+- **Local Processing:**
+ A local Dart script (e.g., `process_apis.dart`) runs to:
+ - Read the files.
+ - Parse and extract essential API details (title, description, endpoints, etc.).
+ - Auto-generate sample payloads when examples are missing.
+ - Convert and save the processed data as JSON files in `/api_templates/`.
+
+- **Automated Fetching & Processing with Dart Cron Job:**
+ - A Dart cron-like package will schedule the script to fetch and process **new and updated** API files **weekly or on demand**.
+ - This reduces the need for constant manual execution and ensures templates stay up to date.
+
+- **Version Control:**
+ Contributors create a PR with both the raw YAML files and the generated JSON files to GitHub.
+
+- **Offline Caching with Hive:**
+ - The Flutter app (API Explorer) will fetch JSON templates and store them using **Hive**.
+ - This ensures **fast loading and offline access**.
+
+- **Fetching Updates via GitHub Releases (ZIP files):**
+ - Instead of fetching updates via the GitHub API (which has rate limits), we can leverage **GitHub Releases**.
+ - A new release will be created weekly or when at least 10 updates are made.
+ - The Flutter app will download and extract the latest ZIP release instead of making multiple API calls.
+
+---
+
+### **Step-by-Step Workflow**
+
+1. **Adding API Files:**
+ - A contributor creates or updates an API file (e.g., `weather.yaml`) in the `/apis/` folder.
+
+2. **Running the Local Processing Script (Manually):**
+   - A Dart script (`process_apis.dart`) is executed locally (a rough skeleton is sketched after this workflow):
+     `dart run process_apis.dart`
+ - The script:
+ - Reads YAML files from `/apis/`.
+ - Identifies the file format (YAML, JSON, HTML, MD).
+ - Parses the content accordingly.
+ - Extracts essential API details (title, description, endpoints, etc.).
+ - Generates structured JSON templates in `/api_templates/`.
+
+3. **Review, Commit, and PR Submission:**
+ - Contributors review the generated JSON files.
+ - They commit both raw API definition files and generated JSON files.
+ - Submit a **Pull Request (PR)** for review.
+
+4. **Offline Storage with Hive (Flutter Frontend):**
+ - The Flutter app fetches JSON templates and stores them in Hive.
+ - This ensures users can access API templates even when offline.
+
+5. **Fetching Updates via GitHub Releases:**
+ - A new **GitHub Release** (ZIP) will be created weekly or when at least 10 updates are made.
+ - The Flutter app will **download and extract** the latest ZIP instead of making multiple API calls.
+ - This approach avoids GitHub API rate limits and ensures a smooth user experience.
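+
+A rough skeleton of what `process_apis.dart` could look like (the folder layout and dispatch logic are assumptions; format-specific parsers would be plugged in):
+
+```dart
+// process_apis.dart — illustrative skeleton only.
+import 'dart:convert';
+import 'dart:io';
+
+void main() {
+  final inputDir = Directory('apis');
+  final outputDir = Directory('api_templates')..createSync(recursive: true);
+
+  for (final file in inputDir.listSync().whereType<File>()) {
+    final raw = file.readAsStringSync();
+    final ext = file.path.split('.').last.toLowerCase();
+
+    // Dispatch on file format; YAML/HTML/MD parsers would be plugged in here.
+    final Map<String, dynamic> template = switch (ext) {
+      'json' => jsonDecode(raw) as Map<String, dynamic>,
+      _ => {'source': file.path, 'todo': 'parse $ext content'},
+    };
+
+    final outName =
+        file.uri.pathSegments.last.replaceAll(RegExp(r'\.\w+$'), '.json');
+    File('${outputDir.path}/$outName').writeAsStringSync(jsonEncode(template));
+  }
+}
+```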
+
+---
+
+## **Future Automation with GitHub Actions**
+
+In the future, we can fully automate this process:
+
+- A GitHub Action will trigger on updates to `/apis/`.
+- It will run the Dart processing script automatically.
+- The action will commit the updated JSON templates back to the repository.
+- A GitHub Release will be generated periodically to bundle processed files for easier access.
+- This ensures **continuous and consistent updates** without manual intervention.
+
+---
+
+## **Conclusion**
+
+This approach provides a simple and controlled method for processing API definitions. The use of a **Dart cron job** reduces manual effort by fetching and processing updates on a scheduled basis, while **Hive storage** ensures fast offline access in the Flutter app. Using **GitHub Releases (ZIP)** allows updates to be fetched efficiently without hitting rate limits. Once validated, we can transition to **GitHub Actions** for complete automation. This approach aligns well with our project goals and scalability needs.
+
+**I look forward to your feedback and suggestions on this approach. Thank you!**
diff --git a/doc/proposals/2025/gsoc/idea_harsh_panchal_AI_API_EVAL.md b/doc/proposals/2025/gsoc/idea_harsh_panchal_AI_API_EVAL.md
new file mode 100644
index 00000000..df60ddf8
--- /dev/null
+++ b/doc/proposals/2025/gsoc/idea_harsh_panchal_AI_API_EVAL.md
@@ -0,0 +1,53 @@
+# Initial Idea Submission
+
+**Full Name:** Harsh Panchal
+**Email:** [harsh.panchal.0910@gmail.com](mailto:harsh.panchal.0910@gmail.com)
+● [**GitHub**](https://github.com/GANGSTER0910)
+● [**Website**](https://harshpanchal0910.netlify.app/)
+● [**LinkedIn**](https://www.linkedin.com/in/harsh-panchal-902636255)
+
+**University Name:** Ahmedabad University, Ahmedabad
+**Program:** BTech in Computer Science and Engineering
+**Year:** Junior, 3rd Year
+**Expected Graduation Date:** May 2026
+**Location:** Gujarat, India.
+**Timezone:** Kolkata, INDIA, UTC+5:30
+
+## **Project Title: AI API Eval Framework**
+
+
+## **Relevant Issues: [#618](https://github.com/foss42/apidash/issues/618)**
+
+## **Idea Description**
+The goal of this project is to create an AI API Evaluation Framework that provides an end-to-end solution to compare AI models on different kinds of data, i.e., text, images, and videos. The overall strategy is to use benchmark models to compare AI outputs with benchmark predictions. Metrics like BLEU, ROUGE, FID, and SSIM can also be utilized by the users to perform an objective performance evaluation of models.
+
+For the best user experience in both offline and online modes, the platform will provide an adaptive assessment framework where users can specify their own assessment criteria, giving flexibility across various use cases. There will be a model version control feature which will enable users to compare various versions of a model and monitor performance over time. In offline mode, evaluations will be supported using LoRA models, which reduce resource consumption and give outputs without compromising accuracy. The system will use Explainability Integration with SHAP and LIME to demonstrate how inputs influence model decisions.
+
+The visualization dashboard, built using Flutter, will include real-time charts, error analysis, and result summarization, making it easy to analyze model performance. Whether offline with cached models or online with API endpoints, the framework will offer end-to-end testing.
+
+With its rank-based framework, model explainability, and configurable evaluation criteria, this effort will be a powerful resource for researchers, developers, and organizations to make data-driven decisions on AI model selection and deployment.
+
+## Unique Features
+1) Benchmark-Based Ranking:
+   - Compare and rank model results against pre-trained benchmark models.
+   - Determine how well outputs resemble perfect predictions.
+2) Advanced Evaluation Metrics:
+   - Facilitate metrics such as BLEU, ROUGE, FID, SSIM, and PSNR for extensive analysis.
+   - Allow users to define custom metrics.
+3) Model Version Control:
+   - Compare various versions of AI models.
+   - Monitor improvement in performance over time with side-by-side comparison.
+4) Explainability Integration:
+   - Employ SHAP and LIME to explain model decisions.
+   - Provide clear explanations of why some outputs rank higher.
+5) Custom Evaluation Criteria:
+   - Allow users to input custom evaluation criteria for domain-specific tasks.
+6) Offline Mode with LoRA Models:
+   - Storage and execution efficiency with low-rank adaptation models.
+   - Conduct offline evaluations with minimal hardware demands.
+7) Real-Time Visualization:
+   - Visualize evaluation results using interactive charts via Flutter.
+   - Monitor performance trends and detect weak spots visually.
+
+
diff --git a/doc/proposals/2025/gsoc/idea_udhay_adithya_mem0.md b/doc/proposals/2025/gsoc/idea_udhay_adithya_mem0.md
new file mode 100644
index 00000000..f363c5f6
--- /dev/null
+++ b/doc/proposals/2025/gsoc/idea_udhay_adithya_mem0.md
@@ -0,0 +1,284 @@
+## INITIAL IDEA PROPOSAL
+
+### **CONTACT INFORMATION**
+
+* Name: Udhay Adithya J
+* Email: [udhayxd@gmail.com](mailto:udhayxd@gmail.com)
+* [Github](https://github.com/Udhay-Adithya)
+* [Website](https://udhay-adithya.me)
+* [LinkedIn](https://www.linkedin.com/in/udhay-adithya/)
+* Location: Amravati, Andhra Pradesh, India, UTC+5:30
+* University: Vellore Institute of Technology, Andhra Pradesh
+* Major: Computer Science & Engineering
+* Degree: Bachelor of Technology
+* Year: Sophomore, 2nd Year
+* Expected graduation date: 2027
+
+
+### **PROJECT TITLE: [mem0](https://github.com/mem0ai/mem0) for Dart**
+
+### **PROJECT DESCRIPTION:**
+
+mem0 is the go-to memory layer for developing personalized AI Agents in Python. It offers comprehensive memory management, self-improving memory capabilities, cross-platform consistency, and centralized memory control. It leverages advanced LLMs and algorithms to detect, store, and retrieve memories from conversations and interactions. It identifies key information such as facts, user preferences, and other contextual information, smartly updates memories over time by resolving contradictions, and supports the development of an AI Agent that evolves with user interactions. When needed, mem0 employs a smart search system to find memories, ranking them based on relevance, importance, and recency to ensure only the most useful information is presented.
+
+However, a critical gap exists in the Flutter ecosystem for a dedicated memory layer tailored for AI agent development. Flutter, with its cross-platform capabilities and vibrant community, is increasingly becoming a platform of choice for building mobile and embedded AI applications.
+
+This project proposes to bridge this gap by porting the powerful [mem0](https://github.com/mem0ai/mem0) library from Python to Dart, thereby making its advanced memory management features accessible to Flutter developers.
+
+### **PROJECT GOALS:**
+
+The primary goal of this project is to create a fully functional Dart port of the `mem0` library.
+
+Upon successful completion of this project, we will have:
+
+* `mem0_dart` as a standalone package in [pub.dev](https://pub.dev/).
+* Seamless integration of `mem0_dart` into Flutter applications, enabling developers to build personalized AI agents on mobile, web, and desktop platforms.
+
+### **IMPLEMENTATION PROCESS**
+
+Porting `mem0` from Python to Dart is an incremental process, and features can be added in sequence; we have the flexibility to build block-by-block.
+
+I have also opened a discussion ([#2373](https://github.com/mem0ai/mem0/discussions/2373)) about this in mem0’s GitHub Discussions.
+
+
+**These are some of the packages that are required and will be used in this project:**
+
+**→ LLM and Embedding Support Packages**
+
+- [anthropic_sdk_dart](https://pub.dev/packages/anthropic_sdk_dart)
+- [aws_client](https://pub.dev/packages/aws_client)
+- [googleai_dart](https://pub.dev/packages/googleai_dart/versions)
+- [groq](https://pub.dev/packages/groq)
+- [openai_dart](https://pub.dev/packages/openai_dart)
+- [ollama_dart](https://pub.dev/packages/ollama_dart)
+- [vertex_ai](https://pub.dev/packages/vertex_ai)
+- [together_ai_sdk](https://pub.dev/packages/together_ai_sdk)
+
+
+**→ Vector Store Packages**
+
+- [chromadb](https://pub.dev/packages/chromadb)
+- [elastic_client](https://pub.dev/packages/elastic_client)
+- [pgvector](https://pub.dev/packages/pgvector)
+- [qdrant](https://pub.dev/packages/qdrant)
+- [redis](https://pub.dev/packages/redis)
+
+**→ Other packages**
+
+- [http](https://pub.dev/packages/http)/[dio](https://pub.dev/packages/dio)
+- [neo4j_http_client](https://pub.dev/packages/neo4j_http_client)
+- [huggingface_dart](https://pub.dev/packages/huggingface_dart)
+- [posthog_flutter](https://pub.dev/packages/posthog_flutter)
+
+
+**Note**: The following are some of the packages used in the original `mem0` project that are currently unavailable in the Dart ecosystem. I am actively searching for alternatives, and this list will be updated as suitable replacements are found.
+
+- **litellm** – N/A
+- **azure-search-documents** – N/A
+- **opensearch** – N/A
+- **milvus** – No direct Dart package, but it can be accessed via HTTP APIs.
+
+### **USAGE**
+
+The `mem0_dart` library will be designed to be intuitive and easy to integrate into Flutter applications. Here's a basic example illustrating its intended usage:
+
+**Initialization:**
+
+```dart
+import 'package:mem0_dart/mem0_dart.dart';
+
+void main() async {
+ // Initialize Mem0 with default configurations or custom settings
+ final memory = Memory(); // Using default configurations
+
+ // Or with custom configurations (example - Qdrant vector store)
+ final memoryWithQdrant = Memory(
+ vectorStoreConfig: VectorStoreConfig(
+ provider: 'qdrant',
+ config: {
+ 'collection_name': 'my_flutter_memories',
+ 'embedding_model_dims': 1536, // Example dimension
+ 'path': '/path/to/qdrant/db', // Example path for local Qdrant
+ },
+ ),
+ llmConfig: LlmConfig( // Example - OpenAI LLM for memory processing
+ provider: 'openai',
+ config: {
+ 'model': 'gpt-4o-mini',
+ 'apiKey': 'YOUR_OPENAI_API_KEY',
+ },
+ ),
+ embedderConfig: EmbedderConfig( // Example - OpenAI Embedding model
+ provider: 'openai',
+ config: {
+ 'model': 'text-embedding-3-small',
+ 'apiKey': 'YOUR_OPENAI_API_KEY',
+ },
+ ),
+ );
+
+ // ... rest of your Flutter app code
+}
+```
+
+**Adding Memories:**
+
+```dart
+// ... inside your Flutter widget or logic
+
+ String userMessage = "I love Flutter and Dart!";
+ String userId = "flutter_dev_123";
+
+ try {
+ final memoryResult = await memory.add(
+ messages: userMessage,
+ userId: userId,
+ );
+ print('Memory Added: ${memoryResult}');
+ } catch (e) {
+ print('Error adding memory: $e');
+ }
+```
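+
+**Searching Memories** (a proposed API mirroring mem0's Python `search`; the exact Dart signature is an assumption at this stage):
+
+```dart
+// ... retrieving memories relevant to the current conversation
+
+  try {
+    final related = await memory.search(
+      query: 'What does this user like?',
+      userId: 'flutter_dev_123',
+    );
+    print('Related memories: $related');
+  } catch (e) {
+    print('Error searching memories: $e');
+  }
+```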
+
+## **MILESTONES AND DELIVERABLES**
+
+
+I propose to divide the project into four milestones/deliverables to produce a sequential progress report through the GSoC.
+
+*They are NOT of equal size/time requirements.*
+
+#### **Milestone #1: Create a Bare-bone `package: mem0_dart`.**
+
+This milestone will lay the foundation of the `mem0_dart` library, porting core configuration classes and setting up testing frameworks.
+
+This milestone will be the first deliverable but it will take the least amount of time and effort to build.
+
+#### **Milestone #2: Vector Store Integrations**
+
+This milestone will integrate and implement multiple Vector DB providers available in Dart.
+
+Implementation efforts will concentrate more on the core methods (insert, search, delete, and get).
+
+#### **Milestone #3: LLM and Embedding Model Integration with Basic Functionality**
+
+This milestone is divided into two parts. The first part will aim at adding integrations for multiple embedding models.
+
+The second part will add integrations for LLMs; additionally, the core memory operations for addition, retrieval, and updating will be ported and tested.
+
+#### **Milestone #4: Add Graph Memory Capability**
+
+The final milestone of the project proposes to add Graph memory capabilities by implementing Neo4j integration.
+
+## **[GSOC 2025 TIMELINE](https://developers.google.com/open-source/gsoc/timeline) FOR REFERENCE**
+
+
+**May 8 - 18:00 UTC**
+* Accepted GSoC contributor projects announced
+
+**May 8 - June 1**
+* Community Bonding Period | GSoC contributors get to know mentors,
+read documentation, and get up to speed to begin working on their
+projects
+
+**June 2**
+* Coding officially begins!
+
+**July 14 - 18:00 UTC**
+* Mentors and GSoC contributors can begin submitting midterm evaluations
+
+**July 18 - 18:00 UTC**
+* Midterm evaluation deadline (standard coding period)
+
+**July 14 - August 25**
+* Work Period | GSoC contributors work on their project with guidance from Mentors
+
+**August 25 - September 1 - 18:00 UTC**
+* Final week: GSoC contributors submit their final work product and
+their final mentor evaluation (standard coding period)
+
+## **PREDICTED PROJECT TIMELINE**
+* **Community Bonding Period (May 8 - June 1)**
+
+  This is the period where I will get to know my mentors better. I will also ask questions and clarify the doubts and queries in my mind, to get a clear understanding of the project. Although Google intends this 3-week bonding period to be entirely for introducing GSoC contributors to their projects, since we are going to build a brand-new package, I propose to begin coding from the 2nd or 3rd week of this period, thus getting a head start.
+
+* **Coding Period (June 2 - July 14)**
+ * **Week 1 (June 2 - June 8)**
+
+    M#1 is delivered, comprising a bare-bones package.
+
+ Work on M#2 begins in the latter half of the week.
+ * **Week 2 (June 9 - June 15)**
+
+    Integration of the first two chosen vector databases and implementation of the basic vector operations (insert, search, delete, and get) are done, along with unit tests.
+
+ * **Week 3 (June 16 - June 22)**
+
+ Building upon the previous week, the integration of the other remaining vector databases will be done.
+
+  * **Week 4 (June 23 - June 29)**
+
+ Based on the implementation experience and testing feedback from the Vector Database integrations, refinements will be made to ensure robustness and efficiency.
+
+  * **Week 5 (June 30 - July 6)**
+
+ Start of M#3 by integrating OpenAI and Gemini's LLM and embedding models.
+
+    This week will also include documenting the package for its initial public release.
+
+ Mentor Reviews are requested.
+
+ *`The first public release of package mem0_dart:0.0.1 is made.`*
+
+  * **Week 6 (July 7 - July 13)**
+
+ Changes follow, from Mentor Review, if required.
+
+    Add support for other LLM and embedding model providers (since the same process has been done earlier, this should be fairly easy to implement).
+
+ Final Mentor Review before Mid-term Evaluation is submitted.
+
+* **Midterm Evaluation Submission (July 14 - July 18)**
+ * Projects are submitted to the mentors and the GSoC portal.
+
+* **Work Period (July 14 - August 25)**
+ * **Week 7 (July 14 - July 20)**
+
+ A significant portion of the week will be dedicated to testing all integrations thoroughly and addressing any bugs or issues identified.
+
+    Documentation is enhanced if no issues arise.
+
+    Milestone #3 is delivered.
+
+ *`Second public release of package at 0.0.2`*
+
+ * **Week 8 (July 21 - July 27)**
+
+ Classes for graph database connections using Neo4j are created.
+
+ * **Week 9 (July 28 - August 3)**
+
+ Implementation of basic graph operations to store memories as graphs.
+
+ * **Week 10 (August 4 - August 10)**
+
+ Continuation of the work done in Week 9.
+
+ Mentor Reviews are requested.
+
+ * **Week 11 (August 11 - August 17)**
+
+    The first half of the week acts as a buffer period in case any issues arise.
+
+ Documentation is enhanced in the buffer period if no issues arise.
+
+ Milestone #4 is delivered.
+
+ *`Third public release of the package at 0.0.3`*
+
+ * **Week 12 (August 18 - August 24)**
+
+ Final checks are made, and any supporting documents (such as example markdown files) are written.
+
+    The project report is written and all tracking issues are labelled appropriately.
+
+* **Final Week (August 25 - September 1)**
+ * The final project and the report are submitted to the mentors and on the GSoC portal.
diff --git a/doc/proposals/2025/gsoc/images/API_EXPLORER_WORKFLOW.png b/doc/proposals/2025/gsoc/images/API_EXPLORER_WORKFLOW.png
new file mode 100644
index 00000000..d6175e21
Binary files /dev/null and b/doc/proposals/2025/gsoc/images/API_EXPLORER_WORKFLOW.png differ
diff --git a/doc/proposals/2025/gsoc/images/API_Explorer_Main.png b/doc/proposals/2025/gsoc/images/API_Explorer_Main.png
new file mode 100644
index 00000000..c802a50f
Binary files /dev/null and b/doc/proposals/2025/gsoc/images/API_Explorer_Main.png differ
diff --git a/doc/proposals/2025/gsoc/images/API_Explorer_Template.png b/doc/proposals/2025/gsoc/images/API_Explorer_Template.png
new file mode 100644
index 00000000..4c967b73
Binary files /dev/null and b/doc/proposals/2025/gsoc/images/API_Explorer_Template.png differ
diff --git a/doc/proposals/2025/gsoc/images/chart.png b/doc/proposals/2025/gsoc/images/chart.png
new file mode 100644
index 00000000..588f5f25
Binary files /dev/null and b/doc/proposals/2025/gsoc/images/chart.png differ
diff --git a/doc/proposals/2025/gsoc/images/data_table.png b/doc/proposals/2025/gsoc/images/data_table.png
new file mode 100644
index 00000000..763cf43b
Binary files /dev/null and b/doc/proposals/2025/gsoc/images/data_table.png differ
diff --git a/doc/proposals/2025/gsoc/images/oauth1.png b/doc/proposals/2025/gsoc/images/oauth1.png
new file mode 100644
index 00000000..66d07fca
Binary files /dev/null and b/doc/proposals/2025/gsoc/images/oauth1.png differ
diff --git a/doc/proposals/2025/gsoc/templates/gsoc_application_template.md b/doc/proposals/2025/gsoc/templates/gsoc_application_template.md
new file mode 100644
index 00000000..3018cd60
--- /dev/null
+++ b/doc/proposals/2025/gsoc/templates/gsoc_application_template.md
@@ -0,0 +1,49 @@
+## Instructions
+
+- Create a fork of API Dash.
+- In the folder [doc/proposals/2025/gsoc](https://github.com/foss42/apidash/tree/main/doc/proposals/2025/gsoc) create a file named `application__.md`
+
+The file should contain the following:
+
+```
+### About
+
+1. Full Name
+2. Contact info (email, phone, etc.)
+3. Discord handle
+4. Home page (if any)
+5. Blog (if any)
+6. GitHub profile link
+7. Twitter, LinkedIn, other socials
+8. Time zone
+9. Link to a resume (PDF, publicly accessible via link and not behind any login-wall)
+
+### University Info
+
+1. University name
+2. Program you are enrolled in (Degree & Major/Minor)
+3. Year
+4. Expected graduation date
+
+### Motivation & Past Experience
+
+Short answers to the following questions (Add relevant links wherever you can):
+1. Have you worked on or contributed to a FOSS project before? Can you attach repo links or relevant PRs?
+2. What is your one project/achievement that you are most proud of? Why?
+3. What kind of problems or challenges motivate you the most to solve them?
+4. Will you be working on GSoC full-time? In case not, what will you be studying or working on while working on the project?
+5. Do you mind regularly syncing up with the project mentors?
+6. What interests you the most about API Dash?
+7. Can you mention some areas where the project can be improved?
+
+### Project Proposal Information
+
+1. Proposal Title
+2. Abstract: A brief summary about the problem that you will be tackling & how.
+3. Detailed Description
+4. Weekly Timeline: A rough week-wise timeline of activities that you would undertake.
+
+```
+
+- Feel free to add images by adding them to the `images` folder inside [doc/proposals/2025/gsoc](https://github.com/foss42/apidash/tree/main/doc/proposals/2025/gsoc) and linking them in your doc.
+- Finally, send your application as a PR for review.
diff --git a/doc/proposals/2025/gsoc/templates/initial_idea_template.md b/doc/proposals/2025/gsoc/templates/initial_idea_template.md
new file mode 100644
index 00000000..9f311fcd
--- /dev/null
+++ b/doc/proposals/2025/gsoc/templates/initial_idea_template.md
@@ -0,0 +1,26 @@
+## Instructions
+
+- Create a fork of API Dash.
+- In the folder [doc/proposals/2025/gsoc](https://github.com/foss42/apidash/tree/main/doc/proposals/2025/gsoc) create a file named `idea__.md`
+
+The file should contain the following:
+
+```
+### Initial Idea Submission
+
+Full Name:
+University name:
+Program you are enrolled in (Degree & Major/Minor):
+Year:
+Expected graduation date:
+
+Project Title:
+Relevant issues:
+
+Idea description:
+
+
+```
+
+- Feel free to add images by adding them to the `images` folder inside [doc/proposals/2025/gsoc](https://github.com/foss42/apidash/tree/main/doc/proposals/2025/gsoc) and linking them in your doc.
+- Finally, send your changes as a PR for review.
diff --git a/doc/user_guide/instructions_to_run_generated_code.md b/doc/user_guide/instructions_to_run_generated_code.md
index ab20a2e5..41085943 100644
--- a/doc/user_guide/instructions_to_run_generated_code.md
+++ b/doc/user_guide/instructions_to_run_generated_code.md
@@ -268,15 +268,100 @@ Here are the detailed instructions for running the generated API Dash code in **
## Go (net/http)
-TODO
+### 1. Install Go compiler
+
+- Windows and macOS: check out the [official source](https://go.dev/doc/install)
+- Linux: Install from your distro's package manager.
+
+Verify that Go is installed:
+
+```bash
+go version
+```
+
+### 2. Create a project
+```bash
+go mod init example.com/api
+```
+
+### 3. Run the generated code
+- Paste the generated code into `main.go`.
+- Build and run with `go run main.go`.
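+
+For reference, the generated code typically has the shape of a minimal `net/http` request like the following (the URL here is just a placeholder):
+
+```go
+package main
+
+import (
+	"fmt"
+	"io"
+	"net/http"
+)
+
+func main() {
+	// Placeholder request; your generated code will target your own URL.
+	resp, err := http.Get("https://api.apidash.dev")
+	if err != nil {
+		fmt.Println(err)
+		return
+	}
+	defer resp.Body.Close()
+
+	body, err := io.ReadAll(resp.Body)
+	if err != nil {
+		fmt.Println(err)
+		return
+	}
+	fmt.Println(resp.Status)
+	fmt.Println(string(body))
+}
+```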
## JavaScript (axios)
-TODO
+The generated API code can be run in the browser by embedding it in an HTML file, as demonstrated below:
+
+### 1. Create the HTML file with the generated code
+
+Create a new file `index.html`:
+
+```html
+<!DOCTYPE html>
+<html>
+  <head>
+    <title>Axios Example</title>
+    <script src="https://cdn.jsdelivr.net/npm/axios/dist/axios.min.js"></script>
+  </head>
+  <body>
+    <script>
+      // Paste the generated API Dash code here
+    </script>
+  </body>
+</html>
+```
+
+Make sure to paste the generated JS code from API Dash inside the `<script>` tag in the body.
+