_llm

A Python package for interacting with LLMs through Ollama, supporting both a remote Ollama API and a local Ollama instance.

Installation

Install directly from GitHub:

pip install git+https://github.com/lasseedfast/_llm.git

Or clone and install for development:

git clone https://github.com/lasseedfast/_llm.git
cd _llm
pip install -e .

Dependencies

This package requires:

  • env_manager: pip install git+https://github.com/lasseedfast/env_manager.git
  • colorprinter: pip install git+https://github.com/lasseedfast/colorprinter.git
  • ollama: For local model inference
  • tiktoken: For token counting
  • requests: For API communication

Environment Variables

The package requires several environment variables to be set:

  • LLM_API_URL: URL of the Ollama API
  • LLM_API_USER: Username for API authentication
  • LLM_API_PWD_LASSE: Password for API authentication
  • LLM_MODEL: Standard model name
  • LLM_MODEL_SMALL: Small model name
  • LLM_MODEL_VISION: Vision model name
  • LLM_MODEL_LARGE: Large context model name
  • LLM_MODEL_REASONING: Reasoning model name
  • LLM_MODEL_TOOLS: Tools model name

These can be set in a .env file in your project directory or in the ArangoDB environment document in the div database.
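
For local development these can go in a .env file. A minimal sketch is shown below; the URL and model names are placeholders, so substitute whatever models your Ollama instance actually serves:

# Example values only; adjust to your own setup
LLM_API_URL=http://localhost:11434
LLM_API_USER=your-username
LLM_API_PWD_LASSE=your-password
LLM_MODEL=llama3.1
LLM_MODEL_SMALL=llama3.2:1b
LLM_MODEL_VISION=llava
LLM_MODEL_LARGE=llama3.1:70b
LLM_MODEL_REASONING=deepseek-r1
LLM_MODEL_TOOLS=llama3.1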

Basic Usage

from _llm import LLM

# Initialize the LLM
llm = LLM()

# Generate a response
result = llm.generate(
    query="I want to add 2 and 2",
)
print(result.content)
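
The model argument selects which of the configured models to use. Judging by the LLM_MODEL_* variables above and the examples that follow, the aliases appear to mirror the variable names ("standard", "vision", and so on). A hedged sketch using an assumed "small" alias:

# Assumption: aliases map to the LLM_MODEL_* variables,
# e.g. "small" -> LLM_MODEL_SMALL; only "standard" and "vision"
# are confirmed by the examples in this README.
summary = llm.generate(
    query="Summarise machine learning in one sentence.",
    model="small",
)
print(summary.content)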

Advanced Usage

Working with Images

from _llm import LLM

llm = LLM()
# Pass one or more image paths and select the vision model
response = llm.generate(
    query="What's in this image?",
    images=["path/to/image.jpg"],
    model="vision"
)
print(response.content)

Streaming Responses

from _llm import LLM

llm = LLM()
# With stream=True, generate() yields (chunk_type, chunk) tuples as they arrive
for chunk_type, chunk in llm.generate(
    query="Write a paragraph about AI",
    stream=True
):
    print(f"{chunk_type}: {chunk}")

Using Async API

import asyncio
from _llm import LLM

async def main():
    llm = LLM()
    # async_generate is the awaitable counterpart of generate()
    response = await llm.async_generate(
        query="What is machine learning?",
        model="standard"
    )
    print(response)

asyncio.run(main())

License

MIT