How to Run Inference

Lumo-8B-Instruct is currently published to the HuggingFace Hub, where any user can download it and run inference locally or on a server. Lumo is not yet published as a chatbot for mass use; however, it can be run locally.

The steps to run it locally are shared below.

You can try out Lumo-8B-Instruct live at https://try-lumo8b.lumolabs.ai

STEP 1: Install Ollama

Windows Installation

  1. Download: Go to the Ollama website and download the Windows installer (OllamaSetup.exe).

  2. Install: Double-click the downloaded file and follow the installation prompts.

  3. Verify: Open Command Prompt and run:

macOS Installation

  1. Download: Visit the Ollama website and download the macOS installer.

  2. Install: Open the downloaded file and follow the instructions.

  3. Verify: Open Terminal and run:

Linux Installation

  1. Open Terminal.

  2. Run Command:

  3. Verify: Run:


STEP 2: Run the model with Ollama

Run the following command in your terminal:

or

The first run may take some time while the model downloads, depending on your network speed. Subsequent runs start immediately from the local cache.


STEP 3: Start Conversing

Feel free to ask Lumo anything about the ecosystem, even code!
