A real-time visualization tool for comparing token streaming speeds across different LLM providers and models. TokenFlow helps developers and researchers compare the token generation performance of various language models at a glance.
- 🚀 Real-time token streaming visualization
- 🤔 And that's it!
```bash
# Clone the repository
git clone https://github.com/daveschumaker/tokenflow.git
cd tokenflow
```
You can use any simple HTTP server to serve the project. Here are a few options:
```bash
cd src
python -m http.server 8080
```
```bash
# Install http-server globally (one-time setup)
npm install -g http-server

# Then run
cd src
http-server -p 8080
```
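If neither server is available, the same behavior can be reproduced with Python's standard library alone. This is a minimal sketch, not part of TokenFlow itself; `make_server` is a hypothetical helper name, and the `directory` parameter assumes Python 3.7+:

```python
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_server(directory: str, port: int = 8080) -> HTTPServer:
    # Serve static files from `directory`, equivalent to running
    # `python -m http.server` from inside that directory.
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    return HTTPServer(("127.0.0.1", port), handler)

# Blocks until interrupted; serves the src/ directory on port 8080:
# make_server("src").serve_forever()
```

Using `partial` to bind the `directory` argument avoids having to `cd src` before starting the server.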
Then open your browser and navigate to either:

- `http://127.0.0.1:8080`
- `http://localhost:8080`
- Serve `index.html` using one of the methods above
- Open your browser and navigate to `http://127.0.0.1:8080`
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
If you encounter any issues or have questions, please file an issue on the GitHub repository.