Mission
Dispelling the unknown through learning, inspired by Karl Friston's Free Energy Principle.
The essence of intelligence is the continual reduction of uncertainty about the world. Menily builds the data reflex layer for embodied AI that lets machines "predict the world".
What We Build
Menily is an embodiment adapter layer: it connects task-level foundation models (VLA / VLM / world models) to whole-body humanoid control policies, and runs on a globally distributed data collection network ("data factories") that produces task-level demonstration data for those models.
- Task-level demonstration data for VLA (vision-language-action) models
- Open-source data processing toolkit
- Globally distributed data collection network beyond traditional research labs
Vision
Five years from now, Menily will disappear from view, because by then Menily will have become part of the world itself: flowing silently, like moonlight, behind every machine that predicts, every developer who trains, every embodied intelligence that learns. Our success, ultimately, is to be forgotten.
Concrete form: by 2030, our target is to be the default task-level demonstration data specification for the majority of commercial humanoid robotics deployments, and to operate the largest distributed human-motion data collection network outside Silicon Valley.
Team
Headquartered in Shenzhen, with distributed data collection operations across Southeast Asia (Malaysia, the Philippines) and a Bay Area presence serving US customers, primarily VLA labs and humanoid robotics teams.
Founder
Masashi is a serial entrepreneur and UPenn alum who previously built and exited a financial data infrastructure company. From financial data to embodied AI data, it is the same playbook: schemas, distribution pipelines, and making heterogeneous data sources interoperable. He has run it once before; now he is running it again.
Open Source
Three repositories at github.com/MenilyIntelligence:
- schema — Task-level VLA demonstration data specification
- toolkit — Human motion / VR / MoCap / POV video → task-level data
- research — Open research notes on embodied AI data infrastructure
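As a sketch of what a uniform source interface in menily/toolkit might look like (the `MotionSource` protocol, the `to_task_steps` helper, and the frame fields here are illustrative assumptions, not the actual API):

```python
from typing import Iterator, Protocol


class MotionSource(Protocol):
    """Hypothetical uniform interface over VR, MoCap, and POV-video inputs."""

    def frames(self) -> Iterator[dict]:
        """Yield raw per-frame records in the source's native convention."""
        ...


def to_task_steps(source: MotionSource, fps: int = 30) -> list[dict]:
    """Normalize raw frames into timestamped task-level demonstration steps."""
    return [{"t": i / fps, "observation": frame}
            for i, frame in enumerate(source.frames())]


class ListSource:
    """Trivial in-memory source used to exercise the adapter contract."""

    def __init__(self, records: list[dict]):
        self.records = records

    def frames(self) -> Iterator[dict]:
        return iter(self.records)


steps = to_task_steps(ListSource([{"pose": [0.0, 0.0, 0.0]},
                                  {"pose": [0.0, 0.0, 1.0]}]), fps=10)
```

The point of the design: any concrete VR or MoCap reader that implements frames() plugs into the same downstream pipeline without a per-source special case.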
Writings
The founder and the Menily team publish regularly on technical platforms. An aggregated and updated list lives at menily.ai/research/. A selection:
- Design notes on a task-level VLA demonstration data schema: the Menily/schema v1 spec and a walkthrough of its six fields — CSDN, April 2026
- A survey of Chinese embodied AI data startups: comparing the paths of 8 players — CSDN, April 2026
- Designing a unified interface for heterogeneous robot data sources: architecture notes on menily/toolkit — Juejin (掘金), April 2026
- Menily releases menily/schema v1: a task-level semantic layer for embodied intelligence training data — Juejin (掘金), April 2026
- From 智元 to 灵初 to Menily: how the embodied AI data race in China is shaping up — Toutiao (头条号), April 2026
- A 2026 deep dive on embodied AI data: the paths of eight Chinese players, including 智元, 灵初, and Menily — Baijiahao (百家号), April 2026
- Task-Level Demonstration Data for Vision-Language-Action Models: A Survey — preprint draft, April 2026
Ecosystem
Menily's schema and tooling are designed for interoperability with the major public embodied AI data infrastructure:
- Open X-Embodiment / RLDS (Google DeepMind) — bidirectional conversion via Task.to_rlds() and from_rlds()
- NVIDIA SOMA / SOMA-X — body.morphology and body.dof_map namespaces align with SOMA canonical topology
- BONES-SEED (Bones Studio) — supported as an upstream motion-level data source
- HuggingFace Datasets — direct export via Task.to_hf_dataset()
- Retargeting research: AdaMorph, OmniRetarget (ICRA 2026), SPARK, and KDMR integrated as pluggable backends in menily/toolkit
- Hardware: Unitree G1/H1, Fourier GR-1, bimanual platforms, and generic manipulators supported via dof_map
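To make the RLDS round trip concrete, here is a minimal, hypothetical sketch of a Task record carrying the conversion methods named above; the field names, episode layout, and method bodies are assumptions for illustration, not the actual menily/schema definition:

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class Task:
    """Minimal stand-in for a menily/schema task-level record (illustrative)."""
    task_id: str
    instruction: str                                           # natural-language task
    dof_map: dict[str, int] = field(default_factory=dict)      # joint name -> index
    steps: list[dict[str, Any]] = field(default_factory=list)  # per-timestep data

    def to_rlds(self) -> dict[str, Any]:
        """Export as an RLDS-style episode: a metadata dict plus a steps sequence."""
        return {
            "episode_metadata": {
                "task_id": self.task_id,
                "language_instruction": self.instruction,
                "dof_map": self.dof_map,
            },
            "steps": self.steps,
        }

    @classmethod
    def from_rlds(cls, episode: dict[str, Any]) -> "Task":
        """Rebuild a Task from an episode produced by to_rlds()."""
        meta = episode["episode_metadata"]
        return cls(task_id=meta["task_id"],
                   instruction=meta["language_instruction"],
                   dof_map=dict(meta["dof_map"]),
                   steps=list(episode["steps"]))


# Round-trip check: Task -> RLDS episode -> Task
t = Task("pick-001", "pick up the red cup",
         dof_map={"left_shoulder_pitch": 0},
         steps=[{"observation": {}, "action": [0.0]}])
t2 = Task.from_rlds(t.to_rlds())
```

Bidirectional conversion of this shape is what lets the same demonstration data be consumed by RLDS-based training pipelines and re-imported without loss.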
Concrete milestones
- 2026 Q2 — First PyPI release of menily-toolkit.core. menily/schema v1 finalization.
- 2026 Q3 — toolkit.pov and toolkit.vr PyPI release. First public reference dataset on HuggingFace.
- 2026 Q4 — toolkit.mocap PyPI release. Target: 100+ hours of task-level demonstration data delivered to customers. schema v2 design review.
- 2027 — Target: be the default task-level data layer for at least 3 of the top 10 US-based VLA / humanoid robotics teams.
- 2028 — Distributed collection network scaled to 5,000+ annotator-hours per month across Southeast Asia.
Keywords
embodied AI · humanoid robots · VLA models · VLM models · world models · data collection · data factories · robotics · demonstration data · embodiment adapter layer