How to Set Up a 100% Local AI Assistant with Telegram + Ollama (OpenClaw Guide)
Tired of paying for ChatGPT Plus or worrying about sensitive data in cloud AI services? OpenClaw (formerly Moltbot) lets you run a private AI assistant entirely on your own machine using Ollama models, with Telegram integration for mobile access. This step-by-step guide shows exactly how to deploy it - no monthly fees, no data leaving your network.
Why Businesses Are Switching to Local AI Assistants
Every month, businesses waste thousands on cloud AI services while risking data leaks through API calls. The moment sensitive customer information hits a third-party server, you lose control - and compliance teams lose sleep. OpenClaw solves this by keeping everything on your hardware.
The recent surge in Mac Mini sales (especially on Facebook Marketplace) has been linked to OpenClaw's popularity. Entrepreneurs are realizing they can run a fully private 20B-parameter AI assistant for less than the annual cost of ChatGPT Enterprise, with no usage limits or privacy concerns.
Key benefit: OpenClaw remembers conversations through local session memory, creating continuity cloud services can't match without storing your data indefinitely on their servers.
OpenClaw vs Cloud AI: Key Differences
Unlike ChatGPT or Gemini, OpenClaw isn't a service - it's a framework that connects to local LLMs through Ollama. This architectural difference creates three major advantages:
- Zero API costs: Once installed, you only pay for electricity
- Complete data isolation: Conversations never leave your network
- Custom agent networks: Create specialized sub-agents for different departments
At 4:32 in the video, you'll see how OpenClaw can simultaneously handle a coding request while checking if a website is online - something that would require multiple API calls with cloud services.
Hardware Requirements for Local LLMs
Your hardware determines which Ollama models you can run effectively. Here's what we recommend:
| Model Size | Minimum Hardware | Performance |
|---|---|---|
| 7B parameters | Mac Mini M2 (16GB RAM) | ~12 tokens/sec |
| 20B parameters | ASUS GX10 (32GB RAM) | ~8 tokens/sec |
| 120B parameters | High-end server (64GB+ RAM) | ~3 tokens/sec |
The sweet spot for most businesses is the 20B parameter GPT-OSS model - large enough for complex tasks but manageable on affordable hardware.
Step 1: Installing OpenClaw
The installation process is surprisingly simple thanks to OpenClaw's one-line installer:
```bash
curl -sSL https://opencloud.ai/install.sh | bash
```

At 7:15 in the tutorial, you'll see the complete installation, including the security warning (OpenClaw is still beta software). The whole process takes about 3 minutes on a decent connection.
Pro tip: Choose "Quick Start" during onboarding unless you need specific model providers. You can always add OpenAI or Anthropic keys later.
Step 2: Configuring Ollama Models
The magic happens in the opencloud.json file where you define your Ollama connection:
"models": { "provider": "ollama", "base_url": "http://localhost:11434", "model": "gpt-oss-20b", "type": "reasoning", "context_window": 2600 } This configuration tells OpenClaw to use a local Ollama server running the GPT-OSS-20B model. The 2600 token context window allows for moderately complex conversations without losing coherence.
At 12:40 in the video, watch how changing to the 120B parameter model affects response quality and speed.
Step 3: Creating Your Telegram Bot
Telegram integration requires creating a bot through @BotFather:
- Search for @BotFather in Telegram
- Send the /newbot command
- Follow the prompts to name your bot (the username must end in "bot")
- Copy the API token provided
This token is your secure handshake between OpenClaw and Telegram. Never share it or commit it to public repositories.
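One low-effort way to follow that advice is to keep the token in an environment variable and read it at runtime. Here's a minimal sketch (the TELEGRAM_BOT_TOKEN variable name is our convention, not an OpenClaw requirement); getMe is the Bot API's standard call for verifying a token:

```python
import os

def get_me_url(token: str) -> str:
    """Build the Telegram Bot API URL for getMe, which verifies a bot token."""
    return f"https://api.telegram.org/bot{token}/getMe"

# Read the token from the environment instead of hardcoding it in source control.
token = os.environ.get("TELEGRAM_BOT_TOKEN", "123456:TEST-PLACEHOLDER")
url = get_me_url(token)
print(url)
```

A GET request to that URL returns your bot's identity if the token is valid, which makes it a quick smoke test before wiring the token into OpenClaw.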
Step 4: Pairing Telegram with OpenClaw
With your bot created, pair it with OpenClaw using the pairing code from the dashboard. This process:
- Links your Telegram account to your local OpenClaw instance
- Enables end-to-end encrypted communication
- Sets up session memory for conversation continuity
At 18:30 in the video, you'll see the moment when the Telegram bot first responds using the local Ollama model - a game-changer for mobile productivity.
Step 5: Advanced Customization
OpenClaw's real power comes from its skill system. You can add capabilities like:
- Browser connectivity: Control Chrome/Edge via Playwright
- Real-time search: Brave/Perplexity API integration
- GitHub integration: Automate code management
Each skill installs via npm during setup. The video shows how to add GitHub skills at 15:10, including the configuration needed for private repositories.
Business use case: Create a "Support Agent" sub-agent with browser skills to troubleshoot customer issues by accessing knowledge bases and filling forms automatically.
Watch the Full Tutorial
See the complete installation and configuration process in action, including the moment when the Telegram bot first responds using the local Ollama model (18:30 timestamp). The video also demonstrates switching between 20B and 120B parameter models to compare performance.
Key Takeaways
OpenClaw represents a fundamental shift in how businesses can leverage AI - moving from costly, opaque cloud services to transparent, controllable local implementations. The Telegram integration brings this power to mobile devices without compromising security.
In summary: For less than $1,000 in hardware, you can deploy a private AI assistant that handles sensitive business tasks, remembers conversations locally, and integrates with your team's messaging platforms - with zero ongoing API costs.
Frequently Asked Questions
What is OpenClaw and how does it differ from ChatGPT?
OpenClaw (formerly Moltbot) is an open-source AI assistant framework that runs locally on your machine, unlike cloud services like ChatGPT. The key difference is privacy - your data never leaves your device.
OpenClaw connects to local LLMs through Ollama and can integrate with messaging apps like Telegram, giving you cloud-like convenience without the privacy risks or API costs. It also allows creating specialized sub-agents that cloud services can't match.
- No monthly subscriptions - runs on your hardware
- Full conversation history control with local session memory
- Customizable agent networks for different use cases
What hardware do I need to run OpenClaw locally?
You can run smaller models (7B-20B parameters) on a modern Mac Mini or PC with at least 16GB RAM; the 20B model is more comfortable on a machine like an ASUS GX10 with 32GB. For the largest 120B parameter models, you'll need server-grade hardware with 64GB+ RAM.
ARM-based systems are supported, making newer Macs ideal. The key requirement is having enough memory to load the Ollama model you choose - each billion parameters requires about 1.5GB RAM for decent performance.
- 7B models: Mac Mini M2 (16GB) or equivalent
- 20B models: Workstation with 32GB RAM
- 120B+ models: Server-grade hardware recommended
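The rule of thumb above - roughly 1.5 GB of RAM per billion parameters - can be turned into a quick estimator. Note this is the guide's heuristic, not a vendor spec: quantized weights need less, full-precision weights more.

```python
GB_PER_BILLION_PARAMS = 1.5  # heuristic from this guide, not a hard requirement

def estimated_ram_gb(billions_of_params: float) -> float:
    """Rough RAM needed to load a model, per the 1.5 GB-per-billion rule of thumb."""
    return billions_of_params * GB_PER_BILLION_PARAMS

for size in (7, 20, 120):
    print(f"{size}B model -> ~{estimated_ram_gb(size)} GB RAM")
```

Running it for the three tiers in the table shows why 120B models land in server territory: the estimate alone exceeds what desktop RAM configurations typically offer.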
Does OpenClaw work with WhatsApp?
Yes, OpenClaw supports WhatsApp integration through a similar bot setup process. However, Telegram is often preferred because its bot API is more developer-friendly and doesn't require business verification for certain features.
The setup process is nearly identical - you'll create a WhatsApp Business API account instead of a Telegram bot. The main difference is WhatsApp's stricter policies around automated messaging.
- WhatsApp requires business verification for some features
- Telegram offers more flexibility for development
- Both maintain end-to-end encryption when configured properly
How do I update or switch Ollama models?
Updating models requires modifying the opencloud.json configuration file. You'll need to stop the OpenClaw gateway, edit the model parameters in the file, then restart the gateway.
The Ollama service must have the new model pulled locally before OpenClaw can use it: run `ollama pull <model-name>` in your terminal to download the new weights.
- Always back up your configuration before changes
- Model changes require gateway restart
- Verify model compatibility with your hardware
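The update steps above can be scripted. The sketch below swaps the model name inside a config file whose layout follows this guide's example ("models" object with a "model" key); the file path and key names are assumptions to adapt to your install, and you still need to restart the gateway afterwards.

```python
import json
from pathlib import Path

def set_ollama_model(config_path: str, new_model: str) -> None:
    """Point the gateway config at a different Ollama model (gateway restart still required)."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    config["models"]["model"] = new_model  # key layout follows this guide's example config
    path.write_text(json.dumps(config, indent=2))

# Demo: write a throwaway config, then switch it to the 120B model.
Path("demo.json").write_text(json.dumps({"models": {"provider": "ollama", "model": "gpt-oss-20b"}}))
set_ollama_model("demo.json", "gpt-oss-120b")
print(json.loads(Path("demo.json").read_text())["models"]["model"])
```

Pair this with a backup copy of the file first, per the checklist above.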
Is OpenClaw suitable for business use?
Absolutely. OpenClaw excels for business use because it keeps sensitive data on-premises. Common business applications include internal knowledge assistants, automated system monitoring, and secure document processing.
The ability to create specialized sub-agents makes it adaptable to various departments and workflows. For example, your support team could have an agent trained on help docs, while sales uses one integrated with your CRM.
- HIPAA-compliant healthcare applications
- Legal document review with full confidentiality
- Financial analysis without data leaving your network
How secure is OpenClaw?
While OpenClaw runs locally, the Telegram integration creates an external access point. Always keep bot tokens secret and use the session memory hook to control conversation history.
The project is still in beta, so avoid using it with highly sensitive data without additional security layers. Regular updates to both OpenClaw and Ollama are essential to patch vulnerabilities.
- Use VPN for remote access to your OpenClaw instance
- Regularly rotate Telegram bot tokens
- Monitor Ollama model updates for security patches
Can OpenClaw access real-time information from the web?
Yes, but it requires additional configuration. You'll need to set up search engine API keys (like Brave or Perplexity) in the skills section during installation.
Without these, OpenClaw is limited to the knowledge within its local LLM. The browser connectivity skill also allows for automated web interactions through Playwright for tasks like form filling or data scraping.
- Brave Search API provides ad-free results
- Perplexity API offers summarized answers
- Playwright enables browser automation
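As a sketch of what a search skill does under the hood, the snippet below builds a request for Brave's web search API. The endpoint and X-Subscription-Token header follow Brave's public API docs, but verify them against the current reference; the BRAVE_API_KEY environment variable is our convention for keeping the key out of source.

```python
import os
from urllib.parse import urlencode

BRAVE_ENDPOINT = "https://api.search.brave.com/res/v1/web/search"

def build_brave_request(query: str) -> tuple:
    """Return the URL and headers for a Brave web-search call (not sent here)."""
    url = f"{BRAVE_ENDPOINT}?{urlencode({'q': query})}"
    headers = {
        "Accept": "application/json",
        # Brave expects the API key in this header; read it from the environment.
        "X-Subscription-Token": os.environ.get("BRAVE_API_KEY", "PLACEHOLDER"),
    }
    return url, headers

url, headers = build_brave_request("is example.com online")
print(url)
```

A GET to that URL with those headers returns JSON search results the assistant can summarize - which is essentially what the skill automates for you.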
How can GrowwStacks help with OpenClaw deployment?
GrowwStacks specializes in deploying secure, business-ready OpenClaw implementations. We handle the complete setup - from hardware provisioning to model optimization and multi-platform integrations.
Our team can create custom skills for your specific workflows and train the system on your internal knowledge base. We also provide ongoing maintenance and security updates to keep your private AI assistant running smoothly.
- Free consultation to assess your needs
- Hardware recommendations tailored to your use case
- Custom agent development for department-specific tasks
Ready to Deploy Your Private AI Assistant?
Every day without a local AI solution means more sensitive data leaking to cloud providers and unnecessary API costs. Our team can have your OpenClaw implementation running on optimized hardware in under 48 hours.