How to Run Inference

Lumo-8B-Instruct is currently published on the Hugging Face Hub, from where any user can download it and run it locally or serve it for inference on a server. Lumo is not yet published as a chatbot for mass use; however, it can be run locally.

Instructions for running it locally are shared below.

You can try Lumo-8B-Instruct live at https://try-lumo8b.lumolabs.ai

STEP 1: Install Ollama

Windows Installation

  1. Download: Go to the Ollama website and download the Windows installer (OllamaSetup.exe).

  2. Install: Double-click the downloaded file and follow the installation prompts.

  3. Verify: Open Command Prompt and run:

ollama --version

macOS Installation

  1. Download: Visit the Ollama website and download the macOS installer.

  2. Install: Open the downloaded file and follow the instructions.

  3. Verify: Open Terminal and run:

ollama --version

Linux Installation

  1. Open Terminal.

  2. Run Command:

    curl -fsSL https://ollama.com/install.sh | sh

  3. Verify: Run:

    ollama --version

STEP 2: Pull and run the model with Ollama

Run the following command in your terminal:

ollama run lumolabs/Lumo-8B-Instruct

or, to pull the model directly from the Hugging Face Hub:

ollama run hf.co/lumolabs-ai/Lumo-8B-Instruct

The first time you run the command, Ollama downloads the model weights, which may take a while depending on your network speed. Subsequent runs start immediately, since the model is cached locally.
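
Once the download finishes, you can optionally confirm that the model is available locally, or send a single prompt without opening an interactive session. The commands below are a minimal sketch using standard Ollama CLI features; the prompt text is just an illustrative example.

# Confirm the model was downloaded and is available locally:
ollama list

# Send a one-off prompt non-interactively (example question):
ollama run lumolabs/Lumo-8B-Instruct "What can you help me with?"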


STEP 3: Start Conversing

Feel free to ask Lumo anything about the ecosystem, even code!
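
If you prefer to talk to Lumo from scripts or applications instead of the interactive terminal, Ollama also serves a local REST API, by default at http://localhost:11434. The request below is a minimal sketch assuming that default port and the model tag from STEP 2; the prompt is just an example.

# Query Ollama's local REST API; "stream": false returns one complete JSON response.
curl http://localhost:11434/api/generate -d '{
  "model": "lumolabs/Lumo-8B-Instruct",
  "prompt": "What can you tell me about Lumo?",
  "stream": false
}'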
