A distributed system that leverages the open-source Ollama project to run large language model (LLM) inference across multiple nodes over a peer-to-peer (P2P) network, enabling collaborative inference workloads.
CrowdLlama lets users share their computational resources and access distributed AI capabilities through a peer-to-peer network.
Uses a Distributed Hash Table (DHT) for peer discovery and network coordination (see the discovery sketch below).
Worker nodes advertise their GPU capabilities and supported models for distributed inference.
Peer-to-peer networking lets nodes collaborate directly on inference workloads.
Leverages the open-source Ollama project to run LLM inference on each participating node (see the worker inference sketch below).
Simple metadata protocol for querying worker information and capabilities (sketched below).
Planned consumer components for distributed task execution and management.
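The discovery flow can be pictured with a minimal Go sketch built on go-libp2p and its Kademlia DHT. This is an assumption for illustration: the rendezvous namespace is invented, package paths vary between go-libp2p versions, and a real node would also connect to bootstrap peers before advertising or searching.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/libp2p/go-libp2p"
	dht "github.com/libp2p/go-libp2p-kad-dht"
	"github.com/libp2p/go-libp2p/p2p/discovery/routing"
	"github.com/libp2p/go-libp2p/p2p/discovery/util"
)

// Hypothetical namespace under which workers announce themselves.
const workerNamespace = "crowdllama/worker/v1"

func main() {
	ctx := context.Background()

	// Create a libp2p host with default transports and a random identity.
	h, err := libp2p.New()
	if err != nil {
		log.Fatal(err)
	}
	defer h.Close()

	// Join the Kademlia DHT used for peer discovery and coordination.
	kadDHT, err := dht.New(ctx, h)
	if err != nil {
		log.Fatal(err)
	}
	if err := kadDHT.Bootstrap(ctx); err != nil {
		log.Fatal(err)
	}

	// Advertise this node under the worker namespace so consumers can find it.
	disc := routing.NewRoutingDiscovery(kadDHT)
	util.Advertise(ctx, disc, workerNamespace)

	// A consumer looks up workers through the same namespace.
	peers, err := disc.FindPeers(ctx, workerNamespace)
	if err != nil {
		log.Fatal(err)
	}
	for p := range peers {
		fmt.Println("discovered worker:", p.ID)
	}
}
```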
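Capability advertisement and the metadata protocol could look like the following sketch: a worker registers a stream handler that answers each query with a JSON document describing its GPU and supported models. The protocol ID, struct fields, and example values are assumptions, not the actual CrowdLlama wire format.

```go
package main

import (
	"encoding/json"
	"log"

	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/network"
	"github.com/libp2p/go-libp2p/core/protocol"
)

// Hypothetical protocol ID for metadata queries.
const metadataProtocol = protocol.ID("/crowdllama/metadata/1.0.0")

// WorkerMetadata is a hypothetical capability record a worker might return.
type WorkerMetadata struct {
	GPUModel        string   `json:"gpu_model"`
	VRAMGB          int      `json:"vram_gb"`
	SupportedModels []string `json:"supported_models"`
}

// registerMetadataHandler answers each incoming metadata query with a
// single JSON document describing this worker's capabilities.
func registerMetadataHandler(h host.Host, meta WorkerMetadata) {
	h.SetStreamHandler(metadataProtocol, func(s network.Stream) {
		defer s.Close()
		_ = json.NewEncoder(s).Encode(meta)
	})
}

func main() {
	h, err := libp2p.New()
	if err != nil {
		log.Fatal(err)
	}
	defer h.Close()

	registerMetadataHandler(h, WorkerMetadata{
		GPUModel:        "RTX 4090", // example values only
		VRAMGB:          24,
		SupportedModels: []string{"llama3", "mistral"},
	})
	select {} // keep the worker running
}
```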
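On the worker side, inference itself can be delegated to a local Ollama daemon, which by default exposes an HTTP API on port 11434. The sketch below posts a prompt to Ollama's `/api/generate` endpoint with streaming disabled and reads back the completed response; how CrowdLlama forwards prompts and results over the P2P network is not shown here.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// generateRequest and generateResponse mirror only the fields of Ollama's
// /api/generate endpoint that this sketch needs.
type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

type generateResponse struct {
	Response string `json:"response"`
}

// runInference forwards a prompt to the local Ollama instance and returns
// the generated text.
func runInference(model, prompt string) (string, error) {
	body, err := json.Marshal(generateRequest{Model: model, Prompt: prompt, Stream: false})
	if err != nil {
		return "", err
	}
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.Response, nil
}

func main() {
	answer, err := runInference("llama3", "Why is the sky blue?")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(answer)
}
```

A worker would call a function like `runInference` from its request handler and write the result back to the requesting peer over the P2P stream.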
Start contributing your computational resources or access distributed AI capabilities through the CrowdLlama network.