# AI Token Tracker

A full-stack web application that helps AI developers, students, and companies track, optimize, and visualize token usage across different AI models. Monitor your API consumption, estimate costs, and analyze performance through an intuitive dashboard.
## Features

- 🔐 User Authentication - Secure JWT-based registration and login system
- 🤖 Multi-Model Support - Compatible with GPT-3.5, GPT-4, Claude-2, Llama 2, and more
- 📊 Real-time Analytics - Track token usage, costs, and response times
- 💰 Cost Estimation - Automatic cost calculation based on model pricing
- 📈 Interactive Dashboard - Beautiful charts and visualizations
- 💾 Data Export - Download usage reports as CSV files
- 🎨 Modern UI - Glass morphism design with neural network backgrounds
## Tech Stack

### Frontend

- React.js - User interface library
- Vite - Fast build tool and development server
- Tailwind CSS - Utility-first CSS framework
- Recharts - Interactive charting library
- Axios - HTTP client for API calls
- React Router DOM - Client-side routing
- Lucide React - Modern icons
### Backend

- Node.js - JavaScript runtime environment
- Express.js - Web application framework
- MongoDB - NoSQL database
- Mongoose - MongoDB object modeling
- JWT - JSON Web Tokens for authentication
- bcryptjs - Password hashing
- CORS - Cross-origin resource sharing
### AI Integration

- Hugging Face API - AI model inference
- Custom Token Counter - Token calculation utilities
## Prerequisites

- Node.js (version 18 or higher)
- MongoDB (local or MongoDB Atlas)
- Hugging Face API account (optional)
## Installation

1. Clone the repository

   ```bash
   git clone <your-repository-url>
   cd ai-token-tracker
   ```

2. Backend setup

   ```bash
   cd backend
   npm install
   ```

3. Frontend setup

   ```bash
   cd ../client
   npm install
   ```
## Configuration

1. Backend environment variables (`backend/.env`)

   ```env
   PORT=5000
   MONGODB_URI=mongodb://localhost:27017/ai_token_tracker
   JWT_SECRET=your_super_secret_jwt_key
   JWT_EXPIRE=30d
   HUGGING_FACE_API_KEY=your_hugging_face_api_key
   HF_API_URL=https://api-inference.huggingface.co/models
   CLIENT_URL=http://localhost:3000
   ```
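A minimal sketch of how `backend/server.js` might load and validate these variables (assuming the `dotenv` package; the repo's actual loader may differ):

```javascript
// Hypothetical config loader for the variables above -- a sketch, not the
// repo's actual server.js. Uncomment the dotenv line once it is installed.
// require('dotenv').config();

function loadConfig(env) {
  // Fail fast when secrets are missing rather than at the first request.
  const required = ['MONGODB_URI', 'JWT_SECRET'];
  for (const key of required) {
    if (!env[key]) throw new Error(`Missing required env var: ${key}`);
  }
  return {
    port: Number(env.PORT) || 5000,
    mongodbUri: env.MONGODB_URI,
    jwtSecret: env.JWT_SECRET,
    jwtExpire: env.JWT_EXPIRE || '30d',
    clientUrl: env.CLIENT_URL || 'http://localhost:3000',
  };
}

// Example: const config = loadConfig(process.env);
```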
2. Frontend configuration (`client/vite.config.js`)

   ```js
   import { defineConfig } from 'vite'
   import react from '@vitejs/plugin-react'

   export default defineConfig({
     plugins: [react()],
     server: {
       port: 3000,
       proxy: {
         '/api': {
           target: 'http://localhost:5000',
           changeOrigin: true,
         },
       },
     },
   })
   ```
## Running the Application

1. Start the backend server

   ```bash
   cd backend
   npm run dev
   ```

   The server runs on http://localhost:5000

2. Start the frontend development server

   ```bash
   cd client
   npm run dev
   ```

   The application runs on http://localhost:3000

3. Access the application

   - Open your browser and go to http://localhost:3000
   - Register a new account or log in
   - Start tracking your AI token usage!
## Project Structure

```
ai-token-tracker/
├── client/                    # React frontend
│   ├── src/
│   │   ├── components/        # React components
│   │   │   ├── Dashboard.jsx
│   │   │   ├── Login.jsx
│   │   │   ├── Register.jsx
│   │   │   └── Layout.jsx
│   │   ├── contexts/          # React contexts
│   │   │   └── AuthContext.jsx
│   │   ├── App.jsx            # Main App component
│   │   └── main.jsx           # Application entry point
│   ├── public/                # Static files
│   └── package.json           # Frontend dependencies
├── backend/                   # Node.js backend
│   ├── controllers/           # Route controllers
│   │   ├── authController.js
│   │   ├── tokenLogController.js
│   │   └── aiController.js
│   ├── models/                # Database models
│   │   ├── User.js
│   │   └── TokenLog.js
│   ├── routes/                # API routes
│   │   ├── auth.js
│   │   ├── logs.js
│   │   └── ai.js
│   ├── middleware/            # Custom middleware
│   │   ├── auth.js
│   │   └── validation.js
│   ├── services/              # Business logic
│   │   └── aiService.js
│   ├── utils/                 # Utility functions
│   │   └── tokenCounter.js
│   ├── server.js              # Server entry point
│   └── package.json           # Backend dependencies
└── README.md                  # Project documentation
```
## API Endpoints

Authentication:

- `POST /api/auth/register` - Create a new user account
- `POST /api/auth/login` - User login
- `GET /api/auth/me` - Get current user profile

Token Logs:

- `GET /api/logs` - Get user's token logs
- `GET /api/logs/stats` - Get usage statistics
- `POST /api/logs` - Create a new token log
- `GET /api/logs/export` - Export logs as CSV

AI Processing:

- `POST /api/ai/process` - Process an AI prompt
- `GET /api/ai/models` - Get available AI models
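For illustration, a small client-side helper (hypothetical, not part of the repo) that maps these endpoints to request descriptors you could pass to Axios:

```javascript
// Hypothetical helper mapping endpoint names to Axios-style request configs.
// Route paths match the API list above; the helper itself is illustrative.
const ROUTES = {
  register: { method: 'POST', url: '/api/auth/register' },
  login:    { method: 'POST', url: '/api/auth/login' },
  me:       { method: 'GET',  url: '/api/auth/me' },
  logs:     { method: 'GET',  url: '/api/logs' },
  stats:    { method: 'GET',  url: '/api/logs/stats' },
  process:  { method: 'POST', url: '/api/ai/process' },
};

function buildRequest(name, { token, body } = {}) {
  const route = ROUTES[name];
  if (!route) throw new Error(`Unknown endpoint: ${name}`);
  return {
    ...route,
    // Protected routes expect the JWT in an Authorization header.
    headers: token ? { Authorization: `Bearer ${token}` } : {},
    data: body,
  };
}

// Example usage: axios(buildRequest('me', { token }));
```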
- Prompt Tokens: Count of input tokens sent to AI model
- Completion Tokens: Count of output tokens received from AI model
- Total Tokens: Sum of prompt and completion tokens
- Cost Calculation: Automatic cost estimation based on model pricing
- GPT-3.5 Turbo ($0.002 per 1K tokens)
- GPT-4 ($0.06 per 1K tokens)
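The calculation can be sketched like this, using the per-1K-token rates listed above (the repo's actual `tokenCounter` may differ in detail):

```javascript
// Cost estimation sketch using the per-1K-token rates from the list above.
const PRICING_PER_1K = {
  'gpt-3.5-turbo': 0.002,
  'gpt-4': 0.06,
};

function estimateCost(model, promptTokens, completionTokens) {
  const rate = PRICING_PER_1K[model];
  if (rate === undefined) throw new Error(`Unknown model: ${model}`);
  // Total tokens = prompt tokens + completion tokens, billed per 1K.
  const totalTokens = promptTokens + completionTokens;
  return { totalTokens, estimatedCost: (totalTokens / 1000) * rate };
}

// Example: estimateCost('gpt-4', 500, 500)
//          -> { totalTokens: 1000, estimatedCost: 0.06 }
```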
- Real-time Stats: Live updating usage statistics
- Interactive Charts: Token usage trends and cost analysis
- Model Distribution: Pie charts showing model usage patterns
- Response Time Tracking: Performance monitoring across requests
## Customization

### Adding New AI Models

- Update the models array in `Dashboard.jsx`
- Add pricing in `utils/tokenCounter.js`
- Implement API integration in `services/aiService.js`
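As a sketch, the three touch points for a hypothetical `mistral-7b` model might look like this (file names come from the project structure; the exact shapes inside each file are assumptions):

```javascript
// 1. Dashboard.jsx -- extend the model picker (hypothetical array shape)
const models = ['gpt-3.5-turbo', 'gpt-4', 'claude-2', 'llama-2', 'mistral-7b'];

// 2. utils/tokenCounter.js -- add a per-1K-token rate for the new model
const MODEL_PRICING = { 'mistral-7b': 0.0002 };

// 3. services/aiService.js -- resolve the inference URL for the new model.
// Hugging Face inference endpoints are keyed by model id.
function resolveEndpoint(model, baseUrl) {
  return `${baseUrl}/${model}`;
}
```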
### Styling

- Modify Tailwind classes in components
- Update color schemes in `tailwind.config.js`
- Add new charts using Recharts components
## Database Schema

**User Model:**

- name, email, password, company, subscription type

**TokenLog Model:**

- user reference, prompt, response, model used
- token counts, estimated cost, response time
- timestamps and status fields
## Deployment

Frontend (Vercel):

```bash
cd client
npm run build
# Deploy the dist folder to Vercel
```

Backend (Render):

```bash
cd backend
# Set environment variables in the Render dashboard
# Deploy from the GitHub repository
```

Database (MongoDB Atlas):

- Create a free cluster on MongoDB Atlas
- Update `MONGODB_URI` in environment variables
- Whitelist deployment IP addresses
## Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Hugging Face for providing AI model APIs
- Tailwind CSS for the amazing utility-first CSS framework
- Recharts for beautiful and interactive charts
- React Community for excellent documentation and support
If you have any questions or need help with setup:
- Check the existing GitHub issues
- Create a new issue with detailed description
- Provide steps to reproduce any bugs
⭐ Don't forget to star this repository if you find it helpful!

Built with ❤️ using the MERN stack