ReactorAI Installation

Set up ReactorAI and connect to your Ollama models or Gemini on any platform

1. Install Ollama Server (Optional)

To use Ollama models, ReactorAI relies on the Ollama server to run them. You can install the server locally or connect to a remote one.
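If the server runs on another machine, both the ollama CLI and HTTP clients honor the OLLAMA_HOST environment variable. A minimal sketch, where 192.168.1.50 is a placeholder for your server's address:

# On the remote machine: expose the server on all network interfaces
OLLAMA_HOST=0.0.0.0 ollama serve

# On your machine: point the ollama CLI at the remote server
OLLAMA_HOST=http://192.168.1.50:11434 ollama list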

1a. How to Install Ollama and Download Models

📥 Installing Ollama

🖥️ macOS & Linux:

curl -fsSL https://ollama.com/install.sh | sh

🪟 Windows:

Download the installer from ollama.com/download/windows and run it.
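Whichever platform you're on, you can confirm the install from a terminal:

# Print the installed version to verify the CLI is on your PATH
ollama --version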

🤖 Downloading Popular Models

Essential Models for ReactorAI:

# 🦙 General purpose model (recommended for beginners)
ollama pull llama3.2

# 💻 Best for coding tasks
ollama pull codellama

# ⚡ Fast, lightweight model
ollama pull phi3

# 🌍 Great for multilingual tasks
ollama pull mistral

# 🧠 Advanced reasoning (larger model)
ollama pull llama3.1:70b
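Once a model is pulled, a quick sanity check is a one-off prompt rather than a full interactive session (llama3.2 here is just the model downloaded above):

# Run a single prompt and exit
ollama run llama3.2 "Reply with one word: ready"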

🔧 Essential Commands

# 📋 List installed models
ollama list

# 🚀 Start ollama service
ollama serve

# 💬 Test a model interactively
ollama run llama3.2

# 🗑️ Remove a model
ollama rm modelname

# 📥 Pull specific model version
ollama pull llama3.2:3b
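ReactorAI connects over the same HTTP interface the CLI uses. To see what a raw request looks like, you can call Ollama's documented /api/generate endpoint directly (the prompt is arbitrary):

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'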

💡 Tips for ReactorAI Users

  • Start Small: Begin with llama3.2 or phi3 - they're fast and work great
  • Check System Resources: Larger models (70b+) need 32GB+ RAM
  • Keep Ollama Running: ReactorAI connects to the ollama service on http://localhost:11434 (see the quick check after this list)
  • Model Switching: Download multiple models and switch between them in ReactorAI
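A quick way to confirm ReactorAI will be able to reach the service:

# The root path responds with "Ollama is running" when the service is up
curl http://localhost:11434

# List the models the server will offer to ReactorAI
curl http://localhost:11434/api/tags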

1b. Setup Gemini API (Alternative)

If you prefer to use Google's Gemini models instead of or alongside Ollama, you'll need a Gemini API key, which you can create in Google AI Studio (https://aistudio.google.com).
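To verify the key works before configuring ReactorAI, you can call Gemini's REST API directly. A minimal sketch, assuming GEMINI_API_KEY holds your key and using gemini-2.0-flash as an example model name:

# Store your key in an environment variable (placeholder value)
export GEMINI_API_KEY="your-api-key"

# Send a single test prompt to the generateContent endpoint
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"contents": [{"parts": [{"text": "Say hello"}]}]}'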

2. Install ReactorAI on macOS

3. Install ReactorAI on iOS

4. Install ReactorAI on Windows