oh-my-openagent: an enhanced agent toolkit

Conclusion

  • Reddit comments
  • The coding results were not much better than opencode's stock agent:
    context pollution, and coding speed dropped
  • Experiment

    Rolled back to stock opencode and ran it at the same intensity once more; stock oc was painfully dumb, so back to omo!!
    Its reasoning actually concluded it should delete service_discovery.py and service_registry.py
    --------------------------------------------------------
    Its task decomposition is clearly deficient: effectively more primitive, closer to a bare model
  • Back to omo

    omo's biggest strength: task decomposition into a development plan is excellent, and it keeps pushing a task through to completion.
    Change	Description
    Delete services.yaml	A standalone file is no longer needed
    Add services section	Merged into development.yaml; contains enabled/host/port/entrypoint
    Delete modules section	Redundant with services; already merged
    Update service_discovery.py	Reads from a single file; code trimmed from 107 to 56 lines
    Update service_registry.py	Path changed to development.yaml
    Update scripts/start.py	Removed load_services_config(); reads uniformly from the single file
    Update app_config.py	Default config name changed from default to development
    --------------------------------------------------------
    Smart task decomposition: effectively adds planning and design, closer to the user's intent

Observations on intelligent compute scheduling

  • When Prometheus handles a task and produces a development plan, its reasoning uses only the model selected in the user conversation
  • But the development tasks driven by that plan, i.e. the subagents, are auto-routed to compute by the config-table rules, with no way to intervene
  • Normal omo usage: feed a task to Prometheus to form a plan, then /start-work triggers Sisyphus to orchestrate execution and launch the various subagents
  • In my view, you can give Prometheus any strong model and point the entire config table at local compute for execution
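
The scheduling described above can be sketched as a simple category-to-model lookup. The table contents and the `resolve_model` helper below are illustrative only, not the plugin's real config schema; the model names reuse the local providers mentioned later in these notes.

```python
# Hypothetical routing table in the spirit of oh-my-openagent's config:
# a subagent is spawned with a category, and the table decides the model.
ROUTING = {
    "ultrabrain": "v100-llama/Qwen3.5-27B",       # hard logic, architecture
    "deep": "v100-llama/Qwen3.5-27B",             # autonomous research + execution
    "visual-engineering": "4060-lms/Qwen3.5-9B",  # frontend, UI/UX
    "quick": "4060-lms/Qwen3.5-9B",               # single-file changes, typos
}
DEFAULT_MODEL = "4060-lms/Qwen3.5-9B"             # fallback for unknown categories

def resolve_model(category: str) -> str:
    """Map a delegated task's category to a concrete model."""
    return ROUTING.get(category, DEFAULT_MODEL)
```

With a table like this, Prometheus stays on whatever strong model the user selected in the conversation, while every table entry points at local compute.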

Installing the plugin

  • https://github.com/code-yeongyu/oh-my-openagent/blob/dev/docs/guide/installation.md 
  • Install: just paste this into the opencode chat box

    Install and configure oh-my-opencode by following the instructions here:
    https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/refs/heads/dev/docs/guide/installation.md
  • Alternatively, pin a specific version

    bunx oh-my-opencode@3.17.4 install --no-tui --claude=no --gemini=no --copilot=no
    bunx oh-my-opencode@3.17.4 install --no-tui --claude=no --gemini=no --copilot=no --openai=no --opencode-go=no --opencode-zen=no --zai-coding-plan=no --kimi-for-coding=no --vercel-ai-gateway=no --skip-auth
    
    # Check the config file; make sure it contains "oh-my-openagent"
    cat ~/.config/opencode/opencode.json
    # Run the doctor tool to confirm there are no leftover compatibility issues
    bunx oh-my-opencode doctor
  • The installer drives interactive questions in opencode (local compute comes from opencode's provider config, so in theory a fully local setup is possible; still needs verification)

    1. Claude plan? None; I use a local llama.cpp deployment exposing an OpenAI-compatible API and API key, already configured in opencode
    2. ChatGPT plan? None
    3. Gemini plan? None
    4. Copilot plan? None
    5. OpencodeZen? None
    6. Z.ai? None
    7. Opencode Go? None
    8. Kimi? None
    Perfect! So you're running entirely on local Llama.cpp - very self-sufficient setup! Let me proceed with the installation.
    
    Step 1: Check if OpenCode is installed
    OpenCode 1.4.7 is installed
    
    Step 2: Run the Installer
    The bunx command is not available, so I need to check if Bun or Node.js/npm are installed as alternatives for running the oh-my-opencode installer.
    Node.js and npm are available! Let me try using npx instead
    The npx command seems to be hanging (possibly waiting for input or having network issues). Let me try a different approach - installing bun first
    Excellent! Bun is installed. Now let me source the bashrc and run the installer
    Let me update the PATH directly
    Perfect! Bun 1.3.12 is ready. Now let's install oh-my-opencode
    Typo in my command! Let me fix it
    Installation successful!
    
    Step 3: Verify Setup
    Perfect! The oh-my-openagent@latest plugin is registered. Now let's run the doctor check
    Good! The doctor shows 3 non-critical warnings (optional tools not installed). These won't affect basic functionality.
    Let me check your current configuration
    I see! The config is set to opencode/gpt-5-nano as a fallback since you have no external subscriptions. However, you already have your own Llama.cpp provider configured (v100-llama). Let me check what models you have available
    Perfect! You have two providers configured:
    
    v100-llama/Qwen3.5-27B - Your main model (768K context!)
    4060-lms/Qwen3.5-9B - Smaller/faster model
    Let me update the oh-my-openagent configuration to use your local models instead of the opencode/gpt-5-nano fallback
    It uses the local models, intelligently mapping them onto the model-structure config that openagent expects
    Excellent! Configuration updated to use your local Qwen models:
    Agent/Category	Model	Reasoning
    Sisyphus, Prometheus, Oracle, Atlas, Metis, Momus	v100-llama/Qwen3.5-27B	Main reasoning tasks need the larger model
    Librarian, Explore, Quick tasks	4060-lms/Qwen3.5-9B	Search/retrieval - speed over intelligence
    
    Step 4: Authentication
    Since you're using your own Llama.cpp servers that are already configured in opencode.json, no additional authentication is needed! The plugin will use your existing provider setup.
    
    Congratulations! 🎉
    You have successfully set up oh-my-openagent (the renamed oh-my-opencode)!
    
    Your Setup Summary
    ✅ OpenCode: 1.4.7
    ✅ Plugin: oh-my-openagent@latest (v3.17.4)
    ✅ Providers: v100-llama/Qwen3.5-27B, 4060-lms/Qwen3.5-9B
    ✅ Auth: Already configured (local Llama.cpp)
  • Changes after installation

    Install complete > restart opencode
    1. opencode's built-in modes are all gone
    	- plan
    	- build
    2. oh-my-openagent's modes all appear
    	- Sisyphus - Ultraworker
    	- Hephaestus - Deep Agent
    	- Prometheus - Plan Build
    	- Atlas - Plan Executor
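
The `cat`-based config check above can also be scripted. A minimal sketch, assuming the plugin list lives under the `plugin` key of opencode.json (as the uninstall notes later suggest):

```python
import json
from pathlib import Path

def plugin_installed(config_path: str) -> bool:
    """Return True if an oh-my-openagent (or legacy oh-my-opencode)
    entry appears in the config's plugin array."""
    config = json.loads(Path(config_path).read_text())
    return any("oh-my-openagent" in entry or "oh-my-opencode" in entry
               for entry in config.get("plugin", []))
```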

Upgrade

bunx oh-my-opencode install

Usage

  • Translated from the official docs

    ------------- The four roles ---------------
    Sisyphus - Ultraworker		Sisyphus, the commander		He plans, delegates to specialists, and drives tasks to completion with aggressive parallel execution. He does not stop halfway.
    Hephaestus - Deep Agent		Hephaestus, the senior laborer	Give him a goal and he explores the codebase, studies its patterns, and executes end to end, no hand-holding. A true craftsman.
    Prometheus - Plan Build		Prometheus, the planner		Before any code is involved, it asks questions, scopes the work, and produces a detailed plan.
    Atlas - Plan Executor		 
    
    ------------- Commander behavior --------------
    When Sisyphus delegates a task to a subagent, it does not pick a model. It picks a category, and the category maps automatically to the right model:
    visual-engineering		Frontend, UI/UX, design
    deep					Autonomous research + execution
    quick					Single-file changes, typos
    ultrabrain				Hard logic, architecture decisions
    
    -------------- Hash-anchored edits ------------
    The harness problem is real. Most agent failures aren't the model. It's the edit tool.
    Inspired by oh-my-pi, we implemented Hashline. Every line the agent reads comes back tagged with a content hash:
    	11#VK| function hello() {
    	22#XJ|   return "world";
    	33#MB| }
    The agent edits by referencing these tags. If the file has changed since the last read, the hashes no longer match and the edit is rejected instead of corrupting the file. No whitespace guessing. No stale-line errors.
    Grok Code Fast 1: success rate went from 6.7% to 68.3%, purely from changing the edit tool.
  • Post-install guidance - How to Use

    Feeling lazy? Just include ultrawork (or ulw) in your prompt. That's it. The agent figures out the rest.
    
    Need precision? Press Tab to enter Prometheus (Planner) mode, create a work plan through an interview process, then run /start-work to execute it with full orchestration.
    
    Start using it: Just type opencode in your terminal!
  • Post-install guidance - About oh-my-openagent

    This plugin provides:
    
    - Multiple specialized agents (Sisyphus for coding, Prometheus for planning, Oracle for debugging, etc.)
    - Intelligent model routing with fallback chains
    - UltraWork mode - autonomous multi-agent collaboration
    - Background tasks and deep exploration capabilities
    	
    You can read more at: https://github.com/code-yeongyu/oh-my-openagent
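
The Hashline mechanism described in the docs above can be sketched roughly as follows. The two-character tag derivation here is an assumption for illustration; the plugin's real tags (like `#VK`) presumably use their own scheme.

```python
import hashlib

def tag_lines(text: str) -> list[str]:
    """Return each line tagged with its number and a short content hash,
    mimicking the `NN#XX| ...` format shown above (hash scheme assumed)."""
    tagged = []
    for no, line in enumerate(text.splitlines(), start=1):
        tag = hashlib.sha256(line.encode()).hexdigest()[:2].upper()
        tagged.append(f"{no}#{tag}| {line}")
    return tagged

def edit_allowed(current_text: str, line_no: int, expected_tag: str) -> bool:
    """Reject an edit if the referenced line changed since it was read."""
    lines = current_text.splitlines()
    if not (1 <= line_no <= len(lines)):
        return False
    tag = hashlib.sha256(lines[line_no - 1].encode()).hexdigest()[:2].upper()
    return tag == expected_tag
```

An edit tool built on this refuses to apply a patch whose tags are stale, forcing the agent to re-read the file before editing.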

Going local

  • Local compute

    During installation it auto-adapted and used only the local compute already configured in opencode
  • Disabling telemetry

    https://github.com/code-yeongyu/oh-my-openagent/blob/dev/README.zh-cn.md
    Anonymous telemetry is on by default, to help improve installation and runtime reliability. It uses PostHog with a hashed install identifier, never raw hostnames, and can be disabled via OMO_SEND_ANONYMOUS_TELEMETRY=0 or OMO_DISABLE_POSTHOG=1.
    
    Just set the environment variables:
    vi ~/.bashrc
    # oh-my-openagent: disable telemetry
    export OMO_SEND_ANONYMOUS_TELEMETRY=0
    export OMO_DISABLE_POSTHOG=1
    source ~/.bashrc
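
The opt-out can be sanity-checked with a small helper. The gating logic below is a sketch of the documented behavior, not the plugin's actual code:

```python
import os

def telemetry_enabled(environ=None) -> bool:
    """Telemetry is on by default; either documented variable opts out."""
    env = os.environ if environ is None else environ
    if env.get("OMO_SEND_ANONYMOUS_TELEMETRY") == "0":
        return False
    if env.get("OMO_DISABLE_POSTHOG") == "1":
        return False
    return True
```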

Uninstall

  • Remove the config entry

    Edit ~/.config/opencode/opencode.json (or opencode.jsonc) and remove either "oh-my-openagent" or the legacy "oh-my-opencode" entry from the plugin array
  • Remove the config files

    # Remove plugin config files recognized during the compatibility window
    rm -f ~/.config/opencode/oh-my-openagent.jsonc ~/.config/opencode/oh-my-openagent.json \
          ~/.config/opencode/oh-my-opencode.jsonc ~/.config/opencode/oh-my-opencode.json
    
    # Remove project config (if exists)
    rm -f .opencode/oh-my-openagent.jsonc .opencode/oh-my-openagent.json \
          .opencode/oh-my-opencode.jsonc .opencode/oh-my-opencode.json
  • Verify the uninstall

    opencode --version
    # Plugin should no longer be loaded

Monitoring an unidentified request

Host: 194.124.216.80:443
Downloaded: 4.56 KB
Uploaded: 4.88 KB
Download speed: 0.00 B/s
Upload speed: 0.00 B/s
Chain: Proxy / Auto / Netherlands-NL-4, traffic multiplier 0.1
Rule: Match
Process:
Connected: 22 minutes ago
Source address: 192.168.1.201:60526
Destination address: 34.120.208.123
Destination port: 443
Type: Socks5 (tcp)