How to Run Inference
Lumo-8B-Instruct is currently published to the HuggingFace Hub, from where any user can download it to run locally or serve it for inference on a server. Lumo is not yet published as a chatbot for mass use, but it can be run locally.
The steps to run it locally are shared below.

STEP 1: Install Ollama
Windows Installation
Download: Go to the Ollama website and download the Windows installer (OllamaSetup.exe).
Install: Double-click the downloaded file and follow the installation prompts.
Verify: Open Command Prompt and run:
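```
ollama --version
```
If the installation succeeded, this prints the installed version.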
macOS Installation
Download: Visit the Ollama website and download the macOS installer.
Install: Open the downloaded file and follow the instructions.
Verify: Open Terminal and run:
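```
ollama --version
```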
Linux Installation
Open Terminal.
Run Command:
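```
curl -fsSL https://ollama.com/install.sh | sh
```
This is the official Ollama install script for Linux.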
Verify: Run:
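```
ollama --version
```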
STEP 2: Start the model on Ollama
Run this command in your terminal:
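(The exact model tag is not preserved on this page. Since the weights are published on the HuggingFace Hub, Ollama's hf.co syntax should work; the lumolabs-ai organization and GGUF repository name below are assumptions, so adjust them to match the actual repository.)

```
# pulls the GGUF weights from HuggingFace and starts an interactive chat
ollama run hf.co/lumolabs-ai/Lumo-8B-Instruct-GGUF
```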
or
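if you prefer a specific quantization, append its tag (the Q4_K_M tag here is illustrative; check the repository for the tags actually published):

```
# same model, explicitly selecting a quantization level
ollama run hf.co/lumolabs-ai/Lumo-8B-Instruct-GGUF:Q4_K_M
```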

The first run may take some time depending on your network speed, since the model weights have to be downloaded. Subsequent runs start from the local copy and skip the download.
STEP 3: Start Conversing
Feel free to ask Lumo anything about the ecosystem, even code!
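Beyond the interactive prompt, Ollama also serves a local HTTP API on port 11434, which is useful if you want to call the model from scripts or from a server. A minimal sketch, reusing the assumed model tag from STEP 2:

```
# one-shot chat request against the local Ollama server
curl http://localhost:11434/api/chat -d '{
  "model": "hf.co/lumolabs-ai/Lumo-8B-Instruct-GGUF",
  "messages": [
    { "role": "user", "content": "What can you help me with?" }
  ],
  "stream": false
}'
```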

