GSoC 2025 Proposal - API Dash
Applicant: Roshini D
📌 About
- Full Name: Roshini D
- Contact Info: +91 9449761000
- Email: drroshini16@gmail.com
- Discord: rosh09068
- GitHub: roshcheeku
- LinkedIn: Roshini D
- Time Zone: GMT +5:30 (IST)
- Resume: View Resume
🎓 University Info
- University: BGS College of Engineering and Technology
- Program: Bachelor of Engineering, Computer Science and Design
- Year: Pre-final Year (3rd Year)
- Expected Graduation: May 2026
💡 Motivation & Past Experience
- FOSS Contributions: Yes, contributed to GSSoC (Oct–Nov 2024)
  - Tested API endpoints
  - Implemented authentication systems
  - Authored API documentation
- Proud Project: Stock Sentiment Analysis and Prediction (95% accuracy)
  - Combined LSTM deep learning
  - Integrated technical indicators and social media sentiment analysis
- Motivating Challenges: Real-world AI/ML problems with tangible impact, especially integrating multiple data sources and technologies
- Availability for GSoC: Will balance with Bachelor's program but can dedicate substantial time to GSoC
- Mentor Sync Preference: Yes, I welcome regular communication for project alignment
- Interest in API Dash: The mission of simplifying API evaluation resonates with my background in API testing and AI integration
🔧 Ideas to Improve API Dash
- Automated batch testing
- Version comparison functionality
- Community benchmark repository
- Framework adapters for easier integration
- Include cost and latency metrics (see the sketch after this list)
- A/B testing capabilities
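
To illustrate the cost and latency idea above, here is a minimal Python sketch; the per-token rate and the helper names (measure_call, count_tokens) are placeholders I am assuming for this proposal, not existing API Dash code.

```python
import time

# Placeholder rate; a real implementation would read each provider's published pricing.
PRICE_PER_1K_TOKENS = 0.002

def measure_call(send, count_tokens):
    """Time a single provider call and estimate its cost from the token count.

    `send` is a zero-argument callable wrapping the actual API request;
    `count_tokens` extracts the token usage from the response.
    """
    start = time.perf_counter()
    response = send()
    latency_s = time.perf_counter() - start
    tokens = count_tokens(response)
    return {
        "latency_s": round(latency_s, 3),
        "est_cost_usd": round(tokens / 1000 * PRICE_PER_1K_TOKENS, 6),
    }
```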
🧠 Project Proposal: AI API Evaluation Framework
A robust and extensible platform to evaluate and compare AI APIs from providers such as OpenAI, Google, Anthropic, and others.
📦 Core Components
- Configuration Interface: UI for input datasets, API credentials, request configs, and metrics
- Evaluation Engine: Processes requests, collects responses, and computes metrics (see the sketch after this list)
- Benchmark Repository: Stores standard benchmarks for AI tasks, with support for custom benchmarks
- Visualization Dashboard: Charts and tables for comparison results
- Batch Processing System: Enables sequential/parallel evaluations
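
A minimal sketch of how the Evaluation Engine could tie these components together is shown below; all names (EvaluationEngine, EvalResult, the adapter's complete method) are assumptions made for this proposal, not existing API Dash code.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class EvalResult:
    prompt: str
    response: str
    scores: dict[str, float] = field(default_factory=dict)

class EvaluationEngine:
    """Sends each dataset item to a provider adapter and scores the response."""

    def __init__(self, adapter, metrics: dict[str, Callable[[str, str], float]]):
        self.adapter = adapter   # wraps one AI API (OpenAI, Google, Anthropic, ...)
        self.metrics = metrics   # metric name -> scoring function(expected, actual)

    def run(self, dataset: list[dict]) -> list[EvalResult]:
        results = []
        for item in dataset:
            response = self.adapter.complete(item["prompt"])
            scores = {
                name: score(item.get("expected", ""), response)
                for name, score in self.metrics.items()
            }
            results.append(EvalResult(item["prompt"], response, scores))
        return results
```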
🔧 Technical Implementation
- Adapters for major AI APIs (adapter and caching sketch after this list)
- Flexible scoring system
- Caching mechanisms
- Exportable reports (PDF, CSV, JSON)
- Extensible architecture
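
The adapter layer and caching mechanism could follow a small wrapper pattern like the sketch below; ProviderAdapter and CachingAdapter are hypothetical names used only to illustrate the design.

```python
from abc import ABC, abstractmethod

class ProviderAdapter(ABC):
    """Common interface so the evaluation engine stays provider-agnostic."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Send a prompt to the provider and return the text of its response."""

class CachingAdapter(ProviderAdapter):
    """Wraps another adapter and reuses responses for prompts it has already seen."""

    def __init__(self, inner: ProviderAdapter):
        self.inner = inner
        self._cache: dict[str, str] = {}

    def complete(self, prompt: str) -> str:
        if prompt not in self._cache:
            self._cache[prompt] = self.inner.complete(prompt)
        return self._cache[prompt]
```

Provider-specific adapters (OpenAI, Google, Anthropic) would subclass ProviderAdapter and handle their own authentication and request formats, keeping the rest of the framework unchanged.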
👩‍💻 User Experience Focus
- Intuitive end-to-end workflow
- Clear, comparative visualizations
- Granular evaluation metric breakdowns
- Easy sharing and exporting
🗓️ Weekly Timeline
Week 1–2: Research and Planning
- Study existing frameworks and benchmarks
- Define architecture and specs
- Project setup and wireframes
Week 3–4: Core Framework Development
- Implement base evaluation engine
- API adapter interfaces
- Data models and testing infra
Week 5–6: API Provider Integration
- Adapters for OpenAI, Google, Anthropic
- Auth handling and error/retry mechanisms (retry sketch after this list)
- Test API connectivity and parsing
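
For the error/retry mechanism I am planning exponential backoff with jitter, roughly as sketched below; the delay values and attempt count are placeholders, and the broad exception handling would be narrowed to each provider's transient errors.

```python
import random
import time

def call_with_retries(send, max_attempts: int = 4, base_delay: float = 1.0):
    """Invoke `send()` and retry transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return send()
        except Exception:
            # In practice this would only catch rate-limit / timeout errors.
            if attempt == max_attempts:
                raise
            # Backoff of 1s, 2s, 4s, ... plus random jitter to avoid retry bursts.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.5))
```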
Week 7–8: Benchmark Implementation
- Text and sentiment benchmarks
- Image-based benchmarks
- Interface for custom benchmarks (example definition below)
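
A custom benchmark could be declared as a small dataset plus one or more metric functions, roughly as below; the Benchmark dataclass and the exact_match metric are illustrative assumptions, not a fixed design.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Benchmark:
    name: str
    dataset: list[dict]                            # items with "prompt" and "expected"
    metrics: dict[str, Callable[[str, str], float]]

def exact_match(expected: str, actual: str) -> float:
    return 1.0 if expected.strip().lower() == actual.strip().lower() else 0.0

sentiment_smoke_test = Benchmark(
    name="sentiment-smoke-test",
    dataset=[
        {"prompt": "Label the sentiment: 'I love this API client.'", "expected": "positive"},
        {"prompt": "Label the sentiment: 'The request keeps timing out.'", "expected": "negative"},
    ],
    metrics={"exact_match": exact_match},
)
```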
Week 9–10: UI Development
- Config interface
- Dashboard visualizations
- Batch UI and settings
Week 11–12: Visualization and Reporting
- Comparison graphs
- Drill-down result views
- Export and share features (export sketch below)
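
Export could start with plain CSV and JSON writers, as in this sketch; the row layout is an assumption, and PDF export would be layered on top later.

```python
import csv
import json

def export_results(rows: list[dict], path: str) -> None:
    """Write evaluation result rows to JSON or CSV based on the file extension."""
    if not rows:
        return
    if path.endswith(".json"):
        with open(path, "w") as f:
            json.dump(rows, f, indent=2)
    else:
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
            writer.writeheader()
            writer.writerows(rows)
```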
Week 13–14: Testing and Optimization
- Test across APIs and benchmarks
- Performance tuning and caching
- Bug fixing
Week 15: Final Polish
- Write user & developer docs
- UI polish and final optimizations
- Final presentation and submission