# SLMs (Small Language Models)
Status: 🌱
## Motivation
Run Small Language Models (SLMs) locally to prioritize privacy, ethical usage, and accessibility. SLMs are lightweight alternatives to large language models, suitable for running on personal hardware while maintaining reasonable performance.
## Service
- Repository: git.eher.com.br/EHER/slm
- Public URL: https://slm.eher.com.br (user authentication required)
- Stack: Open WebUI + OpenClaw running locally with SLMs
- Deployment: Self-hosted on internal network with firewall-restricted access
- Security: Isolated network segment with user authentication; access can be requested
- Purpose: Validate and compare local SLM behavior while maintaining strict privacy and security controls
## Ethical and Privacy Considerations
- Local Execution: All models run on personal hardware, ensuring data never leaves your environment.
- Open-Source Models: Prioritize models with permissive licenses (e.g., Apache 2.0, MIT).
- Transparency: Document model origins, training data, and limitations.
- Resource Efficiency: SLMs require fewer resources, reducing environmental impact.
## Recommended SLMs
- TinyLlama (~1.1B parameters, Apache 2.0): lightweight model suitable for basic tasks.
- Phi-2 (~2.7B parameters, MIT): small model with strong performance for its size.
- Mistral-7B (~7.3B parameters, Apache 2.0): larger, but still manageable on consumer hardware.
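A small sketch of how this catalog could be encoded for selection by available memory. The memory figures assume roughly 4-bit quantization and are rough approximations, not measured values; the parameter counts are the models' published sizes:

```python
# Hypothetical SLM catalog: parameter counts are public figures, while
# approx_ram_gb assumes ~4-bit quantization and is a rough estimate.
CATALOG = {
    "TinyLlama":  {"params_b": 1.1, "approx_ram_gb": 1.5, "license": "Apache-2.0"},
    "Phi-2":      {"params_b": 2.7, "approx_ram_gb": 2.5, "license": "MIT"},
    "Mistral-7B": {"params_b": 7.3, "approx_ram_gb": 5.0, "license": "Apache-2.0"},
}

def pick_models(available_ram_gb: float) -> list[str]:
    """Return catalog models expected to fit in the given RAM, largest first."""
    fits = [name for name, m in CATALOG.items()
            if m["approx_ram_gb"] <= available_ram_gb]
    return sorted(fits, key=lambda n: CATALOG[n]["params_b"], reverse=True)

print(pick_models(4.0))  # → ['Phi-2', 'TinyLlama']
```

Encoding the license alongside each entry keeps the permissive-license policy checkable rather than informal.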
## Starter Points
- Document local SLM catalog and selection criteria.
- Track latency, quality, and hardware/resource tradeoffs.
- Ensure compliance with ethical guidelines and privacy best practices.