
Copyright © 2025 Lumo. All Rights Reserved. This software is open-source and licensed under the GNU Affero General Public License (AGPL) v3.0. You are free to redistribute and modify it under the terms of this license.

How to Inference


Last updated 4 months ago

Lumo-8B-Instruct is currently published to the HuggingFace Hub, from where any user can download it to run locally or serve it for inference on a server. Lumo is not yet published as a chatbot for mass use; however, it can be run locally.

The information on how to run it locally is shared below.

You can try out Lumo-8B-Instruct live at https://try-lumo8b.lumolabs.ai

STEP 1: Install Ollama

Windows Installation

  1. Download: Go to the Ollama website (https://ollama.com) and download the Windows installer (OllamaSetup.exe).

  2. Install: Double-click the downloaded file and follow the installation prompts.

  3. Verify: Open Command Prompt and run:

ollama --version

macOS Installation

  1. Download: Visit the Ollama website (https://ollama.com) and download the macOS installer.

  2. Install: Open the downloaded file and follow the instructions.

  3. Verify: Open Terminal and run:

ollama --version

Linux Installation

  1. Open Terminal.

  2. Run Command:

    curl -fsSL https://ollama.com/install.sh | sh
  3. Verify: Run:

    ollama --version

STEP 2: Initiate the model on Ollama

Run the following command in your terminal:

ollama run lumolabs/Lumo-8B-Instruct

or

ollama run hf.co/lumolabs-ai/Lumo-8B-Instruct

The first time you run the command, the model weights are downloaded, which may take a while depending on your network speed. Subsequent runs load the model from the local cache and start immediately.
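Once the model has been pulled, Ollama serves it through a local HTTP server (http://localhost:11434 by default). A minimal Python sketch, assuming the default port, to check that the server is reachable before you start conversing:

```python
import urllib.request
import urllib.error

def ollama_is_up(base_url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at base_url.

    A running server responds to a plain GET / with the text
    "Ollama is running"; any connection error means it is down.
    """
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Example usage:
# if ollama_is_up():
#     print("Ready to chat with Lumo")
```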


STEP 3: Start Conversing

Feel free to ask Lumo anything about the ecosystem, even code!
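Besides the interactive terminal session, you can also converse with the model programmatically through Ollama's local REST API. A minimal sketch in Python, assuming the default endpoint and the hf.co/lumolabs-ai/Lumo-8B-Instruct model tag from STEP 2:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local endpoint
MODEL = "hf.co/lumolabs-ai/Lumo-8B-Instruct"        # tag pulled in STEP 2

def build_request(prompt: str, model: str = MODEL) -> urllib.request.Request:
    """Build a non-streaming /api/generate request for the local server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def ask_lumo(prompt: str) -> str:
    """Send a prompt to the locally served model and return its reply text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["response"]

# Example usage (requires the Ollama server from STEP 2 to be running):
# print(ask_lumo("How do I send SOL to another wallet?"))
```

Setting `"stream": False` returns the full reply in one JSON object; leave streaming on for token-by-token output in interactive UIs.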
