
llm_client

A Python package for interacting with large language models (LLMs) through Ollama, supporting both a remote Ollama API and local Ollama instances.

Installation

Install directly from GitHub:

pip install git+https://github.com/lasseedfast/_llm.git

Or clone and install for development:

git clone https://github.com/lasseedfast/_llm.git
cd _llm
pip install -e .

Dependencies

This package requires:

  • env_manager: pip install git+https://github.com/lasseedfast/env_manager.git
  • colorprinter: pip install git+https://github.com/lasseedfast/colorprinter.git
  • ollama: For local model inference
  • tiktoken: For token counting
  • requests: For API communication

Environment Variables

The package requires several environment variables to be set:

  • LLM_API_URL: URL of the Ollama API
  • LLM_API_USER: Username for API authentication
  • LLM_API_PWD_LASSE: Password for API authentication
  • LLM_MODEL: Standard model name
  • LLM_MODEL_SMALL: Small model name
  • LLM_MODEL_VISION: Vision model name
  • LLM_MODEL_LARGE: Large context model name
  • LLM_MODEL_REASONING: Reasoning model name
  • LLM_MODEL_TOOLS: Tools model name

These can be set in a .env file in your project directory or in the ArangoDB environment document in the div database.
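As a convenience, you could run a small pre-flight check before initializing the client to catch missing variables early. The variable names below come from the list above; the check itself is a sketch and not part of the package.

```python
import os

# Environment variables the package expects (names from the README).
REQUIRED_VARS = [
    "LLM_API_URL",
    "LLM_API_USER",
    "LLM_API_PWD_LASSE",
    "LLM_MODEL",
]

def missing_env_vars(required=REQUIRED_VARS):
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.getenv(name)]

if __name__ == "__main__":
    missing = missing_env_vars()
    if missing:
        print("Missing environment variables:", ", ".join(missing))
    else:
        print("All required environment variables are set.")
```

If you keep the variables in a .env file, load it (for example with env_manager, listed under Dependencies) before running the check.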

Basic Usage

from llm_client import LLM

# Initialize the LLM
llm = LLM()

# Generate a response
result = llm.generate(
    query="I want to add 2 and 2",
)
print(result.content)

Advanced Usage

Working with Images

from llm_client import LLM

llm = LLM()
response = llm.generate(
    query="What's in this image?",
    images=["path/to/image.jpg"],
    model="vision"
)

Streaming Responses

from llm_client import LLM

llm = LLM()
for chunk_type, chunk in llm.generate(
    query="Write a paragraph about AI",
    stream=True
):
    print(f"{chunk_type}: {chunk}")

Using Async API

import asyncio
from llm_client import LLM

async def main():
    llm = LLM()
    response = await llm.async_generate(
        query="What is machine learning?",
        model="standard"
    )
    print(response)

asyncio.run(main())

License

MIT