LM Playground

3.8
311 reviews
10K+
Downloads
Content rating
Everyone

About this app

LM Playground is designed as a universal platform for experimenting with various Large Language Models (LLMs) on your device. It lets users download different LLMs, load them into the application, and converse with them. This setup allows for hands-on exploration of the capabilities of, and distinctions between, different LLMs, making it an invaluable tool for enthusiasts, researchers, or anyone curious about advancements in language-model technology.

Currently supported models:
• Meta Llama 3.2 1B and 3B
• Qwen 2.5 0.5B and 1.5B
• Google Gemma2 2B and 9B
• Microsoft Phi3.5
• Mistral 7B
• Llama 3.1 8B

This project is built on the llama.cpp project, with OpenCL optimization for better performance. The application uses GGUF-format models with Q4_K_M quantization; downloaded models are saved to the Downloads folder.
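As a rough illustration (not from the app's documentation), the on-disk size of a Q4_K_M GGUF file can be estimated from the parameter count. The ~4.85 bits-per-weight figure below is an approximate average for llama.cpp's mixed Q4_K_M quantization, so treat the results as ballpark numbers:

```python
# Rough on-disk size estimate for Q4_K_M GGUF model files.
# ~4.85 bits/weight is an approximation for llama.cpp's Q4_K_M
# mixed quantization, not an exact specification.

Q4_K_M_BITS_PER_WEIGHT = 4.85

def gguf_size_gb(params_billions: float,
                 bits_per_weight: float = Q4_K_M_BITS_PER_WEIGHT) -> float:
    """Approximate model file size in gigabytes (10^9 bytes)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A few of the models the app lists (parameter counts in billions):
for name, params in [("Llama 3.2 1B", 1.0), ("Gemma 2 2B", 2.0), ("Mistral 7B", 7.0)]:
    print(f"{name}: ~{gguf_size_gb(params):.1f} GB")  # 1B ~0.6, 2B ~1.2, 7B ~4.2
```

This estimates the file you download, not the total RAM needed at runtime, which is larger once the KV cache and runtime buffers are added.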
Updated on
Jun 18, 2025

Data safety

Safety starts with understanding how developers collect and share your data. Data privacy and security practices may vary based on your use, region, and age. The developer provided this information and may update it over time.
No data shared with third parties
No data collected

Ratings and reviews

3.8
300 reviews
Matt “Dev Ops” Trahan
November 9, 2025
For everyone saying that you can't download a model: it's probably because you don't have enough RAM. On ARM devices the LLM has to live in RAM while it's active, so if your device isn't powerful enough, it won't support the model. Just get a PC, guys: a workstation, not a gaming rig, with lots of RAM and plenty of storage. However many billion parameters a model has, you can assume that's roughly how many gigabytes you need; for a 7B model you're going to need 7-9 gigabytes, depending on quantization. Thanks for attending my TED talk.
9 people found this review helpful
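The reviewer's rule of thumb (roughly one gigabyte per billion parameters, more at higher-precision quantizations) can be sketched as a quick estimate. The bits-per-weight values and the overhead term below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope RAM estimate for running a quantized LLM.
# Bits-per-weight values are approximate averages; "overhead_gb" stands in
# for KV cache, activations, and runtime buffers and is a guess, not a spec.

BITS_PER_WEIGHT = {"Q4_K_M": 4.85, "Q8_0": 8.5, "F16": 16.0}

def ram_estimate_gb(params_billions: float, quant: str,
                    overhead_gb: float = 1.5) -> float:
    """Approximate RAM needed to load and run the model, in GB."""
    weights_gb = params_billions * BITS_PER_WEIGHT[quant] / 8
    return weights_gb + overhead_gb

print(f"7B @ Q4_K_M: ~{ram_estimate_gb(7, 'Q4_K_M'):.1f} GB")  # ~5.7
print(f"7B @ Q8_0:   ~{ram_estimate_gb(7, 'Q8_0'):.1f} GB")    # ~8.9
```

Under these assumptions, a 7B model at 8-bit quantization lands near the upper end of the reviewer's 7-9 GB range, while Q4_K_M fits in considerably less.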
Sam M.
December 6, 2024
Quite good. It's not fast on my device, but that's because it's a 5-year-old phone. Still, the capability to run small LLMs from ollama on your phone is cool. Being able to load any model from ollama would be nicer, but maybe some adjustments are needed to run well on a phone. Oh, and no weird permissions, just internet. That's nice!!
11 people found this review helpful
wade heying
November 19, 2024
It works pretty decently. However, after continuing a conversation for a while, it seems to run out of memory or something on my Note 20 Ultra: it begins to spew nonsense, getting stuck in what seems to be a loop. It would also be great to have the option to load other models.
3 people found this review helpful

What’s new

Added Copy, Upvote, and Downvote buttons to every AI reply. Tap Downvote to choose one or more feedback reasons in a panel and send them.