OpenGork
erenjugs/OpenGork
OpenGork positions itself as the 'uncensored' rebellious cousin of OpenClaw, leveraging xAI's Grok model in 'Heretic Mode' to bypass content filtering. It's a privacy-focused wrapper that lets users run unrestricted AI agents either locally via Ollama or through xAI's API.
Why choose OpenGork over OpenClaw?
Quick recommendation layer first, deeper analysis second. Use this before diving into metrics and architecture details.
- Better fit than OpenClaw for shared workspaces, teams, or operations-heavy usage.
- Emphasizes isolation and containment where OpenClaw often prioritizes raw flexibility.
- Still less proven than OpenClaw in maturity, docs depth, or production mileage.
- Heavier operational setup than simpler solo or hobby-grade local agents.
- Still needs careful sandboxing and guardrails before trusted production use.
Good fit:
- Security-sensitive self-hosters
- Builders who want local-first AI workflows
Not a fit:
- You only want battle-tested projects with a long public track record
- You just need a personal assistant, not a team workflow layer
- You cannot tolerate elevated execution risk without extra hardening
Limited evidence available. Use the primary sources before making a production decision.
AI decision layer last reviewed Apr 20, 2026. Helpful, but inference-heavy enough that you should double-check primary sources.
Source window: GitHub metadata, README, recent commits, latest release, Reddit, Brave search
Community Pulse
Security Radar
How it's evaluated
Sandboxing — Isolation from the host OS. 10 = Fully virtualized (Docker/Wasm); 1 = Direct local execution.
API Security — Safety of external connections. 10 = End-to-end encrypted/Scoped; 1 = Plaintext/Broad access.
Network Isolation — Traffic control. 10 = Air-gapped/Offline-first; 1 = Unrestricted internet access.
Telemetry Safety — Privacy level. 10 = Zero telemetry/Zero tracking; 1 = Extensive logging/reporting.
Shell Protection — Command safety. 10 = No unsupervised shell; 1 = Raw, unmonitored shell access.
Security radar summary for OpenGork.
- OpenGork: Sandboxing 2 of 10, API Security 4 of 10, Network Isolation 3 of 10, Telemetry Safety 5 of 10, Shell Protection 1 of 10.
Evaluation Scale: 10 = Maximum Safety / 1 = High Risk
Star Growth (2026)
Star history summary.
- OpenGork: 75 recorded data points, growing from 3 stars on 2026-01-01 to 110 on 2026-04-20.
ClawVerse News
Latest articles and global buzz
Trending Mentions
- OpenGork Free Public Access to Full GORK (Yes, LLMs from X) Model (r/janitorai_refuges, Apr 4)
- OpenGork Update: new models like Deepseek V3.1, V3.2, GLM4.7, GLM5 and minimax-m2.5 (r/janitorai_refuges, Apr 5)
- OpenGork Who needs words… (r/aphextwin, Apr 10)
- OpenGork [H] EC, Space Marines, Ironjawz, Disciples of Tzeentch [W] PayPal [Loc] Nashville TN (r/miniswap, Apr 5)
Technical Showdowns
OpenGork is an unauthorized fork/alternative of OpenClaw that specifically targets users seeking uncensored AI interactions. The project's tagline—'UNCENSORED AI AGENT' and 'Heretic Mode'—makes clear its positioning against content moderation. It offers two deployment paths: a fully local option using Ollama with Grok models (100% uncensored, no API limits), and a cloud option via xAI's API (mostly uncensored with some safety filters).
The architecture is relatively simple, implemented primarily in Shell scripts that wrap around either Ollama for local inference or the xAI API for cloud access. This lightweight approach means minimal overhead but also limited built-in security controls. The project explicitly markets itself to 'privacy-focused users' and those wanting 'maximum freedom' from content restrictions.
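The two-backend wrapper described above can be sketched in a few lines of POSIX shell. This is an illustrative sketch, not code from the OpenGork repository: the `OPENGORK_MODE` variable and `select_backend` function are hypothetical names, though the endpoints are the real defaults for Ollama's local HTTP API and xAI's OpenAI-compatible API.

```shell
#!/usr/bin/env sh
# Hypothetical sketch of how a thin shell wrapper might route requests.
# "local" -> Ollama's HTTP API on its default port (fully offline)
# "cloud" -> xAI's OpenAI-compatible chat completions endpoint
select_backend() {
  case "${OPENGORK_MODE:-local}" in
    local) echo "http://localhost:11434/api/generate" ;;
    cloud) echo "https://api.x.ai/v1/chat/completions" ;;
    *)     echo "unknown mode: $OPENGORK_MODE" >&2; return 1 ;;
  esac
}

# Example invocation (requires a running Ollama daemon or an xAI API key):
#   curl -s "$(select_backend)" -d '{"model":"grok","prompt":"hello"}'
```

The design point this illustrates is why the wrapper stays lightweight: backend selection is a single branch, and everything else is delegated to the inference server, which is also why there is little room for built-in security controls.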
Unlike the main OpenClaw project, which has enterprise-grade plugin systems and security considerations, OpenGork focuses purely on bypassing censorship filters. Recent commits show documentation updates and development tooling additions (Makefile, EditorConfig, TypeScript config), suggesting active maintenance but no official releases yet. The project carries inherent risks given its 'uncensored' positioning and lack of robust sandboxing.