
GSoC 2025 Proposal - API Dash

Applicant: Roshini D


📌 About


🎓 University Info

  • University: BGS College of Engineering and Technology
  • Program: Bachelor of Engineering, Computer Science and Design
  • Year: Pre-final Year (3rd Year)
  • Expected Graduation: May 2026

💡 Motivation & Past Experience

  • FOSS Contributions:
    Yes, I contributed to GSSoC (Oct–Nov 2024):

    • Tested API endpoints
    • Implemented authentication systems
    • Authored API documentation
  • Proud Project:
    Stock Sentiment Analysis and Prediction (95% accuracy)

    • Built an LSTM-based deep learning model
    • Integrated technical indicators and social media sentiment analysis
  • Motivating Challenges:
    Real-world AI/ML problems with tangible impact, especially integrating multiple data sources and technologies

  • Availability for GSoC:
    I will balance GSoC with my Bachelor's program, but can dedicate substantial time to the project

  • Mentor Sync Preference:
    Yes, I welcome regular communication for project alignment

  • Interest in API Dash:
    The mission of simplifying API evaluation resonates with my background in API testing and AI integration


🔧 Ideas to Improve API Dash

  • Automated batch testing
  • Version comparison functionality
  • Community benchmark repository
  • Framework adapters for easier integration
  • Cost and latency metrics
  • A/B testing capabilities

🧠 Project Proposal: AI API Evaluation Framework

A robust and extensible platform to evaluate and compare AI APIs from providers such as OpenAI, Google, and Anthropic.


📦 Core Components

  • Configuration Interface
    UI for input datasets, API credentials, request configs, and metrics

  • Evaluation Engine
    Processes requests, collects responses, and computes metrics (see the sketch after this list)

  • Benchmark Repository
    Stores standard benchmarks for AI tasks, with support for custom benchmarks

  • Visualization Dashboard
    Charts and tables for comparison results

  • Batch Processing System
    Enables sequential/parallel evaluations

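To make the flow between these components concrete, here is a minimal sketch of how the Evaluation Engine could run a benchmark through a provider adapter and aggregate the metrics the dashboard displays. All names (`EvalResult`, `run_benchmark`, the adapter's `complete()` method) are hypothetical illustrations for this proposal; Python is used only for brevity, while the real implementation would live in API Dash's Flutter/Dart codebase.

```python
import time
from dataclasses import dataclass


@dataclass
class EvalResult:
    """One benchmark case with the metrics the dashboard needs."""
    prompt: str
    expected: str
    response: str
    latency_s: float
    correct: bool


def run_benchmark(adapter, cases):
    """Send each case through a provider adapter and score the response.

    `adapter` is any object with a `complete(prompt) -> str` method;
    `cases` is a list of (prompt, expected_answer) tuples.
    """
    results = []
    for prompt, expected in cases:
        start = time.monotonic()
        response = adapter.complete(prompt)
        latency = time.monotonic() - start
        results.append(EvalResult(
            prompt=prompt,
            expected=expected,
            response=response,
            latency_s=latency,
            # Naive containment check; real scoring is pluggable.
            correct=expected.strip().lower() in response.lower(),
        ))
    return results


def summarize(results):
    """Aggregate per-case results into comparison-ready numbers."""
    n = len(results)
    return {
        "accuracy": sum(r.correct for r in results) / n,
        "avg_latency_s": sum(r.latency_s for r in results) / n,
    }
```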

🔧 Technical Implementation

  • Adapters for major AI APIs (interface sketched after this list)
  • Flexible scoring system
  • Caching mechanisms
  • Exportable reports (PDF, CSV, JSON)
  • Extensible architecture

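As one illustration of the adapter idea above, the sketch below defines the common interface plus a hypothetical OpenAI adapter that calls the public chat-completions REST endpoint. The class names, the default model string, and the response-normalization choices are assumptions for this proposal, not a finalized design.

```python
from abc import ABC, abstractmethod

import requests


class ProviderAdapter(ABC):
    """Common surface every provider adapter exposes to the engine."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's text response for a single prompt."""


class OpenAIAdapter(ProviderAdapter):
    """Hypothetical adapter for OpenAI's chat completions API."""

    def __init__(self, api_key: str, model: str = "gpt-4o-mini"):
        self.api_key = api_key
        self.model = model

    def complete(self, prompt: str) -> str:
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={
                "model": self.model,
                "messages": [{"role": "user", "content": prompt}],
            },
            timeout=60,
        )
        resp.raise_for_status()
        # Normalize the provider-specific response shape to plain text.
        return resp.json()["choices"][0]["message"]["content"]
```
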
👩‍💻 User Experience Focus

  • Intuitive end-to-end workflow
  • Clear, comparative visualizations
  • Granular evaluation metric breakdowns
  • Easy sharing and exporting

🗓️ Weekly Timeline

Week 1–2: Research and Planning

  • Study existing frameworks and benchmarks
  • Define architecture and specs
  • Project setup and wireframes

Week 3–4: Core Framework Development

  • Implement base evaluation engine
  • API adapter interfaces
  • Data models and testing infrastructure

Week 5–6: API Provider Integration

  • Adapters for OpenAI, Google, Anthropic
  • Auth handling, error/retry mechanisms (see the retry sketch after this list)
  • Test API connectivity and parsing

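For the error/retry mechanisms, a simple exponential-backoff wrapper is likely enough for a first pass. The sketch below retries timeouts, rate limits (429), and server errors (5xx); the attempt count and delays are placeholder values.

```python
import time

import requests


def with_retries(send, max_attempts=4, base_delay_s=1.0):
    """Call `send()` (a zero-arg function performing one HTTP request),
    retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            resp = send()
            if resp.status_code == 429 or resp.status_code >= 500:
                raise requests.HTTPError(f"retryable status {resp.status_code}")
            return resp
        except (requests.Timeout, requests.ConnectionError, requests.HTTPError):
            if attempt == max_attempts:
                raise
            # Back off 1s, 2s, 4s, ... between attempts.
            time.sleep(base_delay_s * 2 ** (attempt - 1))
```
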
Week 7–8: Benchmark Implementation

  • Text and sentiment benchmarks
  • Image-based benchmarks
  • Interface for custom benchmarks (see the sketch after this list)

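The custom-benchmark interface can stay small: a named set of cases plus a scoring function. The shape below is an assumed design for this proposal, shown with a tiny sentiment example.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Benchmark:
    """A named task: input cases plus a function that scores one response."""
    name: str
    cases: list[tuple[str, str]]        # (prompt, expected_answer) pairs
    score: Callable[[str, str], float]  # (expected, response) -> 0.0..1.0


# Example: a two-case sentiment benchmark with exact-label scoring.
sentiment = Benchmark(
    name="sentiment-tiny",
    cases=[
        ("Classify as positive or negative: 'I love this!'", "positive"),
        ("Classify as positive or negative: 'This is awful.'", "negative"),
    ],
    score=lambda expected, response: float(expected in response.lower()),
)
```
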
Week 9–10: UI Development

  • Config interface
  • Dashboard visualizations
  • Batch UI and settings

Week 11–12: Visualization and Reporting

  • Comparison graphs
  • Drill-down result views
  • Export and share features (CSV export sketched after this list)

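For export, CSV is the cheapest format to support first since it needs only the standard library; PDF would pull in an extra dependency. A minimal sketch, reusing the hypothetical `EvalResult` records from the engine sketch earlier:

```python
import csv


def export_csv(results, path):
    """Write per-case evaluation results to a shareable CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["prompt", "expected", "response", "latency_s", "correct"])
        for r in results:
            writer.writerow([r.prompt, r.expected, r.response,
                             f"{r.latency_s:.3f}", r.correct])
```
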
Week 13–14: Testing and Optimization

  • Test across APIs and benchmarks
  • Performance tuning and caching
  • Bug fixing

Week 15: Final Polish

  • Write user & developer docs
  • UI polish and final optimizations
  • Final presentation and submission