How To Run Ollama In Android (Without Root)

Originally published at dev.to · 2 min read

Yes, you can run Ollama directly on your Android device without needing root access, thanks to the Termux environment and its package manager. This turns your smartphone into a portable powerhouse for running local large language models (LLMs).


Prerequisites

Before you start, you'll need two things:

  1. A modern Android device. Performance will depend heavily on your phone's RAM and processor. A device with at least 8 GB of RAM is recommended for a smoother experience (you can check your memory and storage with the commands after this list).
  2. The Termux application. It's crucial to install it from F-Droid, as the version on the Google Play Store is outdated and no longer maintained.
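
To see how much RAM and free storage you're working with, you can run a couple of standard commands inside Termux. A minimal sketch (note that free comes from the procps package, which you may need to install first with pkg install procps):

free -h    # total and available RAM
df -h ~    # free storage in Termux's home directory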

Installation and Setup Guide

Follow these simple steps to get Ollama up and running.

1. Install and Update Termux

First, download and install Termux from the F-Droid app store. Once installed, open the app and update its core packages to ensure everything is current.

pkg update && pkg upgrade

You may be prompted several times during the upgrade, usually about package configuration files; pressing Enter to accept the default option is generally safe.
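
If you'd rather skip the prompts entirely, pkg is a wrapper around apt, so the usual -y flag to assume "yes" should work here too:

pkg update && pkg upgrade -y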

2. Install Ollama

With Termux up to date, installing Ollama is as simple as running a single command. The Ollama package is now available in the official Termux repositories.

pkg install ollama

This command will download and install the Ollama server and command-line interface (CLI) on your device.
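
To confirm the installation worked, you can ask the CLI for its version:

ollama --version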

3. Run the Ollama Server

Ollama operates as a background server that manages the models. You need to start this server before you can run and chat with an LLM. It's best practice to run this in its own dedicated Termux session.

Open Termux and start the server:

ollama serve

You'll see log output indicating the server is running. Keep this session open. Running the termux-wake-lock command beforehand acquires a wake lock, which makes Android less likely to kill the process while the screen is off.
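
If you'd prefer not to keep a session visibly occupied by the server, one option is to grab the wake lock and push the server into the background, sending its logs to a file. A rough sketch (the log file path here is an arbitrary choice):

termux-wake-lock                      # ask Android not to suspend Termux
ollama serve > ~/ollama.log 2>&1 &    # start the server in the background
tail -f ~/ollama.log                  # watch the logs; Ctrl+C stops watching, not the server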

4. Download and Run a Model

Now, open a new Termux session by swiping from the left edge of the screen and tapping "NEW SESSION". In this new terminal, you can interact with the Ollama CLI.

To download and start chatting with a model (for example, Mistral), use the run command:

ollama run mistral

The first time you run this command, it will download the specified model, which might take some time and storage space. Subsequent runs will be much faster. Once downloaded, you can start chatting with the model directly in your terminal!
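
Since ollama serve exposes Ollama's standard HTTP API (on localhost:11434 by default), you can also query the model programmatically from this second session instead of using the interactive chat. A small example against the documented /api/generate endpoint:

curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Why is the sky blue?",
  "stream": false
}'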


Important Tips

  • Model Size Matters: Mobile devices have limited resources. For better performance, start with smaller models like Phi-3 Mini, TinyLlama, or Gemma:2b; running larger models like Llama 3 8B may be slow or crash if your phone doesn't have enough RAM. See the example commands after this list.
  • Storage Space: LLMs are large files, often several gigabytes. Ensure you have enough free storage on your device before downloading models.
  • Keep it Running: Android's battery optimization can be aggressive. Use the termux-wake-lock command in the server session to prevent Termux from being shut down.
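
To act on those tips, the Ollama CLI's run, list, and rm subcommands cover trying a smaller model and reclaiming storage (the tinyllama tag below is the one commonly used in the Ollama model library; double-check the exact tag if the pull fails):

ollama run tinyllama    # a small model that's a good first test on a phone
ollama list             # show downloaded models and their sizes
ollama rm mistral       # delete a model you no longer need to free storage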

Thanks for the detailed guide, H4Ck3R—super helpful for those wanting to run LLMs on Android. Have you tested which models work best on mid-range devices, balancing performance and usability?

TinyLlama, though it totally depends on your phone's specifications.
