A Node.js CLI that uses Ollama and LM Studio models (LLaVA, Gemma, Llama, etc.) to intelligently rename files based on their contents (a sketch of the core idea follows this listing).
visionOS examples: Spatial Computing Accelerators for Apple Vision Pro
Efficient visual programming for AI language models
LLMX: the easiest third-party local LLM UI for the web!
MESH-API (previously MESH-AI): an off-grid AI & API router with over 30 API extensions for Meshtastic & MeshCore. Seamlessly connects LM Studio, Ollama, AI providers, 3rd-party APIs, and Home Assistant to your LoRa mesh. Supports custom commands, Twilio SMS, Discord channel routing, GPS emergency alerts via SMS, email, or Discord, and much more.
The DeepSeek API wrapper for Delphi leverages DeepSeek's advanced models to deliver powerful capabilities for seamless, dynamic conversational interactions, including a model optimized for reasoning, and now also supports running local models through an LM Studio server.
The GenAI API wrapper for Delphi seamlessly integrates OpenAI's latest models (gpt-5 series), delivering robust support for agent chats/responses, text generation, vision, audio analysis, JSON configuration, web search, asynchronous operations, and video (SORA-2, SORA-2-pro), plus image generation with gpt-image-1.
M-Courtyard: Local AI Model Fine-tuning Assistant for Apple Silicon. Zero-code, zero-cloud, privacy-first desktop app powered by Tauri + React + mlx-lm.
Serverless, single-HTML-page access to an OpenAI API-compatible local LLM (the streaming sketch after this listing shows the underlying request pattern).
Convert Word docs to Markdown privately - 100% offline, no uploads. Perfect for processing sensitive documents with Ollama, LM Studio, GPT4All, and other local AI tools. Just double-click the standalone/word-to-markdown.html file to use it.
Soupy is a Discord bot that uses Flux and LM Studio. It chats with your users, generates images for them, and has other fun features.
Flash weight streaming for MLX: run massive models larger than your RAM on Apple Silicon.
Your offline AI coding assistant in the terminal, using Ollama and LM Studio.
Ollama Client – Chat with Local LLMs Inside Your Browser. A lightweight, privacy-first Chrome extension for chatting with local LLMs via Ollama, LM Studio, and llama.cpp. Supports streaming, stop/regenerate, RAG, and easy model switching, all without cloud APIs or data leaks.
A local browser-automation agent based on Microsoft's Fara-7B model, optimized for LM Studio inference.
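
Two request patterns recur across these projects. First, the rename-by-content idea behind the CLI at the top of the list: read a slice of the file, ask a locally served model for a descriptive name, and rename. The sketch below is a hypothetical illustration, not that CLI's actual code; the endpoint (http://localhost:1234/v1, LM Studio's default OpenAI-compatible server) and the placeholder model name are assumptions you should adjust for your setup.

```typescript
// Hypothetical sketch of rename-by-content; run as an ES module on Node 18+
// (global fetch), e.g. `node rename.mjs notes.txt`. Not the CLI's real code.
import { readFile, rename } from "node:fs/promises";
import { extname, dirname, join } from "node:path";

async function suggestName(filePath: string): Promise<string> {
  // The first couple of kilobytes are usually enough context for a name.
  const text = (await readFile(filePath, "utf8")).slice(0, 2000);
  const res = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // placeholder; LM Studio serves whichever model is loaded
      messages: [
        { role: "system", content: "Reply with only a short, kebab-case filename (no extension) describing this document." },
        { role: "user", content: text },
      ],
    }),
  });
  const data = await res.json();
  // A real tool would sanitize the reply; this sketch trusts the model.
  return data.choices[0].message.content.trim();
}

const file = process.argv[2];
const suggested = await suggestName(file);
await rename(file, join(dirname(file), suggested + extname(file)));
console.log(`Renamed to ${suggested}${extname(file)}`);
```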
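
Second, the single-page and extension clients above rest on the same OpenAI-compatible request pattern, including token streaming. A minimal browser-side sketch, again assuming LM Studio's default local endpoint, a placeholder model name, and that the local server permits cross-origin requests:

```typescript
// Minimal browser-side streaming from an OpenAI-compatible local server.
// Endpoint, model name, and CORS availability are assumptions.
async function streamChat(prompt: string, onToken: (t: string) => void): Promise<void> {
  const res = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model",
      stream: true, // server replies with incremental server-sent events
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    // SSE frames are separated by blank lines; each data line carries a JSON delta.
    const frames = buffer.split("\n\n");
    buffer = frames.pop() ?? "";
    for (const frame of frames) {
      const line = frame.trim();
      if (!line.startsWith("data:")) continue;
      const payload = line.slice(5).trim();
      if (payload === "[DONE]") return;
      const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
      if (delta) onToken(delta);
    }
  }
}

streamChat("Hello from the browser!", (t) => document.body.append(t));
```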