A Streamlit example is provided in `example_streamlit_app.py` to demonstrate how to use the library in a web app.
- `use_ollama(self, model)`: Configures the class to use Ollama for generating responses.
- `async generate(self, prompt)`: Asynchronously generates a response based on the provided prompt.
**Note:** The `num_ctx` parameter (the model's context window size) defaults to 20000, which may not suit every use case. Adjust this value to match your model's limits and your prompt lengths.
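The methods above can be sketched roughly as follows. This is a minimal illustration of the interface, not the repository's actual code: the class name `ResponseGenerator`, the `options` attribute, and the stubbed `generate` body are all assumptions, and a real implementation would call the Ollama API instead of returning a canned string.

```python
import asyncio

class ResponseGenerator:
    """Hypothetical sketch of the generator class described above."""

    def __init__(self):
        self.backend = None
        self.model = None
        # Assumed default matching the note above; adjust per use case.
        self.options = {"num_ctx": 20000}

    def use_ollama(self, model):
        # Select Ollama as the backend and record which model to use.
        self.backend = "ollama"
        self.model = model

    async def generate(self, prompt):
        # A real implementation would call the Ollama API here;
        # this stub only echoes the prompt for illustration.
        if self.backend != "ollama":
            raise RuntimeError("call use_ollama() first")
        return f"[{self.model}] response to: {prompt}"

gen = ResponseGenerator()
gen.use_ollama("llama3")
gen.options["num_ctx"] = 8192  # lower the context window for smaller models
reply = asyncio.run(gen.generate("Hello"))
print(reply)
```

Because `generate` is a coroutine, callers must await it (or wrap it with `asyncio.run`, as shown).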
## Contributing
Contributions are welcome! Please open an issue or submit a pull request for any improvements or bug fixes.