
Bringing Local LLMs to Android Studio with Ollama

By Gemma Lara Savill
Published on February 17, 2026


Ollama has recently introduced support for new models, and I've been eager to try Qwen 3 Coder Next. In this post, I'll go over how I set up Ollama locally and integrated it directly into Android Studio to power my AI-assisted development.

Setting Up Ollama Locally

First, you'll need to have Ollama installed. Once it's running (look for the llama icon in your menu bar), you can manage your models directly from the terminal.
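
If you'd rather confirm this from the terminal than look for the icon, the version command is a quick sanity check (it should also warn you if it can't reach the background server, though the exact message varies between Ollama versions):

ollama --version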

To download a new model, use the pull command:

ollama pull qwen3-coder-next

You can see your installed models with:

ollama list

You can also run any model in a terminal session for a quick chat, or to test its coding capabilities without even opening an IDE:

ollama run qwen3-coder-next

This is one of the best parts of Ollama: once a model is pulled, it's available globally on your system. You don't need a specific IDE to use it; you can pipe text into it, use it for quick scripts, or just keep a dedicated terminal window open for your "local" coding assistant.
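
For example, both of these work without any IDE involved (MainActivity.kt is just a placeholder file name here):

ollama run qwen3-coder-next "Explain what a Kotlin suspend function is in two sentences"

ollama run qwen3-coder-next "Summarize this file: $(cat MainActivity.kt)"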

For more commands, you can always check the help:

ollama help
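
A few of the subcommands listed there come in especially handy for this workflow:

ollama show qwen3-coder-next   # inspect a model's details (parameters, context length, template)
ollama ps                      # see which models are currently loaded in memory
ollama rm <model-name>         # remove a model you no longer need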

You can find more models to try on the Ollama website.

Pro Tip: Keep Your Models Updated

If you have several models and want to keep them all fresh, you can use this handy one-liner to update everything at once:

ollama list | tail -n +2 | awk '{print $1}' | xargs -I {} ollama pull {}

I suggest creating an alias for this in your .zshrc or .bashrc. You can learn how to do this in my "Create your own terminal shortcuts on your Mac" post. I use ou for "Ollama Update", which saves me from having to remember the full command every time!

alias ou="ollama list | tail -n +2 | awk '{print \$1}' | xargs -I {} ollama pull {}"
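
If you find the one-liner a bit cryptic, the same idea also works as a small shell function in your .zshrc or .bashrc (ollama_update is just a placeholder name for this sketch):

# Update every locally installed model, one at a time
ollama_update() {
  ollama list | tail -n +2 | awk '{print $1}' | while read -r model; do
    echo "Updating ${model}..."
    ollama pull "${model}"
  done
}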

Integrating Ollama into Android Studio

Android Studio Panda now supports local model providers. This is a game changer for privacy and offline work.

To set it up:

  1. Go to Android Studio > Settings > Tools > AI > Model Providers
  2. Click on the + sign and choose "Local Provider"
  3. Enter a name of your choice (for example, Ollama)
  4. Click Apply or Refresh, and you should see your local models listed under "Available models"

Note: you won't see any models until they have finished downloading.

You should see Ollama listed with a home icon, indicating it's a local provider. Once enabled, you can select your downloaded models from the dropdown menu in the Gemini chat window.
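
If Ollama doesn't appear, or the model list stays empty even after the downloads have finished, it's worth checking that the local server is reachable at all. This isn't specific to Android Studio: Ollama listens on port 11434 by default, and its /api/tags endpoint returns the models you have pulled:

curl http://localhost:11434/api/tags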

For more details, you can check the official documentation on using a local model in Android Studio.

The Verdict: Perks and Cons

Why would you want to run a local model instead of just using Gemini?

The Perks:

  • Choice: Use the specific model that works best for your current task (e.g., Qwen for coding).
  • Offline Access: No internet? No problem.
  • Privacy: Your code stays on your machine.
  • Cost: It's completely free to run.

The Cons:

  • Performance: Generally, local models are not yet as powerful as Gemini Pro.
  • Speed: Depending on your hardware, response times can be slower compared to cloud-based APIs.

While Gemini CLI remains my primary tool for complex tasks, having the option to switch to a local Ollama model right inside the IDE provides a level of flexibility that was previously missing. It's an exciting time to be an Android developer!

