Run AI on Your Computer: A Look at LM Studio

This desktop application for hosting and running large language models locally still has some rough edges, but it's immediately practical.

Specialized desktop tools for agentic AI make it easier for users, even those without deep technical skills, to interact with large language models. Instead of writing Python programs and managing models by hand, users get an IDE-like interface that provides logged, inspectable interactions with one or more LLMs.

Amazon and Google have launched similar offerings, Kiro and Antigravity respectively, both focused on AI-assisted code development. Both products can run models locally or use cloud-hosted versions.

LM Studio by Element Labs offers a platform primarily for running, serving, and interacting with LLMs locally. Its design caters to general conversational use cases rather than specialized code development. While its feature set is still evolving, it's functional enough for immediate experimentation.

Configuring Your Models

The first time you launch LM Studio, your initial task is to set up one or more models. A dedicated sidebar button opens a curated search panel where you can locate models by name or author, and even filter by whether a model fits within your device's available memory. Each model entry includes details such as its parameter count, general application type, and whether it supports tool integration. For this review, I downloaded three models.

All model downloads and subsequent management are handled inside the application, eliminating the manual file organization often required with tools like ComfyUI.

LM Studio's model selection interface. While the list is curated by the developers, users can also manually add models by placing them in the application's designated model directory.

Interacting with LLMs

To begin a conversation with an LLM, first select and load your desired model into memory using the selector at the top of the window. You can also adjust various runtime controls, such as whether to attempt loading the entire model into memory, how many CPU threads to use for predictions, and how many model layers to offload to the GPU. The default settings are adequate for most users.

All model conversations are neatly organized in separate tabs, complete with expandable sections revealing the model's internal reasoning or tool calls (more on those below). A real-time token counter displays current usage and remaining capacity, giving a sense of the potential "cost" of the ongoing dialogue. To work with local documents, such as asking the model to "analyze this document for clarity," you can simply drag and drop files into the chat. The model can also be granted access to your local file system via an integration, though that access should be granted with extreme caution, and ideally only on a system without critical data.

A conversation within LM Studio. Chats are exportable in various formats and feature collapsible sections detailing the model's internal processes. The right sidebar displays available integrations, all currently inactive.

Integrations

LM Studio extends agent capabilities by incorporating MCP (Model Context Protocol) server applications. By default, it includes only one integration: a JavaScript code sandbox powered by Deno, which lets the model execute JavaScript or TypeScript code. At least one more built-in integration, such as web search, would be welcome. Nevertheless, I successfully added a Brave search integration with minimal effort.

A significant drawback of LM Studio's integration system is its entirely manual nature. There is currently no automated way to add new integrations, nor a browsable directory of available plugins. Users must manually edit an mcp.json file to define the desired integrations and then provide the corresponding code. While functional, the process is cumbersome and makes this aspect of LM Studio feel underdeveloped. It's a critical area in need of improvement.
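
As a rough illustration, an mcp.json entry for a Brave search server might look something like the following. The mcpServers layout mirrors the convention used by other MCP hosts; the npx package name and API-key placeholder here are assumptions, so check the server's own documentation for exact values:

```json
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "YOUR_API_KEY_HERE"
      }
    }
  }
}
```

Once saved, the new server should appear in the integrations sidebar alongside the built-in Deno sandbox.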

Despite these limitations, the mechanism for integrating MCP servers is thoughtfully designed. You can disable, enable, add, or modify integrations without closing and restarting the application. You can also whitelist integrations for specific conversations or for the program as a whole, sparing you from granting the agent access permissions over and over. (Due to privacy concerns, I chose not to enable this option.)

Enabling Agentic Features via APIs

LM Studio can also function as a model-serving platform, available either through its desktop application or as a headless service. In both configurations, it provides a REST API, allowing you to interact with models and engage in chat, receiving results either synchronously or as a progressive stream. A recently introduced Anthropic-compatible endpoint enables you to integrate Claude Code with LM Studio. This capability makes it possible to incorporate self-hosted models into workflows with code-centric products like Kiro or Antigravity.
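
To sketch what this looks like in practice, here is a minimal Python example that calls the OpenAI-compatible chat completions endpoint LM Studio serves locally. The port shown is LM Studio's default; the model name is a placeholder for whatever model you have loaded:

```python
import requests

# LM Studio's local server defaults to port 1234 and exposes an
# OpenAI-compatible chat completions endpoint.
BASE_URL = "http://localhost:1234/v1"

payload = {
    "model": "your-loaded-model",  # placeholder: use the model you loaded
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what an MCP server does."},
    ],
    "temperature": 0.7,
    "stream": False,  # the endpoint also supports streamed responses
}

response = requests.post(f"{BASE_URL}/chat/completions", json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```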

Another powerful capability is tool use via an API endpoint. You can write scripts that call the LM Studio API and supply your own tools, enabling back-and-forth interactions between the model and a tool. This serves as a foundation for building custom agentic behaviors.
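
The flow follows the familiar OpenAI-style function-calling pattern: describe your tools in the request, let the model respond with tool calls, execute them, and send the results back for a final answer. Below is a minimal sketch under those assumptions; the get_local_time tool is hypothetical, and the loaded model must support tool use:

```python
import json
from datetime import datetime

import requests

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default local server

# Describe a hypothetical tool using the OpenAI function-calling schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_local_time",
        "description": "Return the current local time as an ISO-8601 string.",
        "parameters": {"type": "object", "properties": {}},
    },
}]

messages = [{"role": "user", "content": "What time is it right now?"}]

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    json={"model": "your-loaded-model", "messages": messages, "tools": tools},
    timeout=120,
).json()

msg = resp["choices"][0]["message"]
messages.append(msg)

# Execute any tool calls the model requested and append the results,
# then ask the model to finish its answer with the new context.
for call in msg.get("tool_calls") or []:
    if call["function"]["name"] == "get_local_time":
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps({"time": datetime.now().isoformat()}),
        })

final = requests.post(
    f"{BASE_URL}/chat/completions",
    json={"model": "your-loaded-model", "messages": messages, "tools": tools},
    timeout=120,
).json()
print(final["choices"][0]["message"]["content"])
```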

LM Studio's internal server configurations. The application offers versatile options for serving models through various industry-standard APIs, and its user interface provides controls for performance optimization and security adjustments.

Summary

LM Studio's intuitive design and user-friendly features establish a solid foundation, but several important capabilities are notably absent. Future updates should prioritize filling those gaps.

Presently, tool integration remains a largely manual and piecemeal process, lacking a built-in mechanism for browsing and downloading from a curated tools directory. The array of included tools is also quite limited; for instance, a basic web browsing and fetching tool is not provided by default.

A further notable concern is that LM Studio is not fully open source, despite some of its components, such as its command-line tools, being available publicly. While the licensing terms for LM Studio currently permit free usage, there’s no guarantee this will always be the case. Nonetheless, even in its current nascent stage, LM Studio proves valuable for individuals possessing the requisite hardware and expertise to operate models locally.
