API

Vibe Discovery Is Coming

Om MCP brings Diligence, Data Access, Artifacts, Jobs, and Hub workflows into Codex, Claude Code, and any MCP-compatible harness.

OMTX Team · April 8, 2026 · 12 min read

Vibe discovery is coming.

Om MCP is the fastest way to pull the core Om workflow into the tooling researchers and builders already live in. Instead of bouncing between tabs, copying API payloads by hand, or writing one-off wrappers for every endpoint, you can connect the hosted Om MCP to Codex, Claude Code, or any MCP-compatible harness and work from one interface.

That means one place to do target research, one place to pull datasets, one place to launch Hub jobs, and one place to retrieve artifacts and inspect job state. The MCP surface is not a toy demo. It is a practical integration layer over the real Om API.

What Om MCP is for

Om MCP is designed for operators who want to move quickly without losing structure. The current tool surface brings together:

  • Diligence: search, gather, crawl, synthesize reports, and run deep diligence on targets and mechanisms.
  • Data Access: inspect the dataset catalog, look up available proteins, and retrieve binder and non-binder dataset exports for a protein.
  • Hub: launch supported model workflows and monitor them without leaving your coding environment.
  • Jobs: track status, wait on completion, export structured results, and retrieve signed artifact URLs.
  • Artifacts: upload bytes, create upload URLs, finalize uploads, and keep the workflow moving when a model or notebook needs external files.
  • System: inspect pricing, credits, and account state from the same interface you use to run the work.

Why use the MCP instead of hitting the API directly?

You can still hit the API directly, and for production systems you often should. The MCP layer is about compression: it gives a coding agent or an operator a tool-native path to ask the right question, call the right endpoint, and keep the result inside the current session instead of breaking flow every time the work crosses an API boundary.

That matters in a few places:

  • Target triage: ask for a target landscape, then immediately launch deeper diligence if the first result is promising.
  • Program review: pull evidence, inspect citations, and then retrieve the exact datasets or model outputs you want to test next.
  • Operational workflows: launch jobs, wait on them, and fetch artifacts without building one-off polling code first.
  • Agentic research: keep Om inside Codex or Claude Code so the agent can reason across research, datasets, and execution tools in one loop.
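The "one-off polling code" mentioned above is the kind of loop most teams end up writing by hand before the jobs surface existed. A minimal sketch of that loop in Python, where `fetch_status` is a stand-in for whatever job-status call your API client exposes (the names here are illustrative, not the Om API):

```python
import time

def wait_for_job(fetch_status, poll_interval=5.0, timeout=600.0, sleep=time.sleep):
    """Poll a job until it reaches a terminal state or the timeout elapses.

    fetch_status is any callable returning the current job state as a string;
    swap in a real API client call. The terminal state names are placeholders.
    """
    elapsed = 0.0
    while True:
        status = fetch_status()
        if status in ("completed", "failed"):
            return status
        if elapsed >= timeout:
            raise TimeoutError("job did not finish in time")
        sleep(poll_interval)
        elapsed += poll_interval
```

With the MCP attached, the agent handles this wait-and-retrieve loop for you instead of you maintaining a wrapper like this per project.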

Common use cases

1. Find out whether a target is worth more of your time

Use Diligence when you want a fast research pass, then deepen only when the result is worth paying attention to. A typical flow looks like this:

  • Search for a target or mechanism.
  • Gather a broader landscape.
  • Run a synthesis or deep diligence workflow when you need an evidence-backed brief.

2. Move from research into real Om data

Once you know which proteins matter, the same MCP session can enumerate datasets and retrieve binder and non-binder dataset exports for a protein. That is useful for ML workflows, prioritization, benchmarking, and internal research reviews where you want to stop talking abstractly and start working with actual data.
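Once an export lands on disk, the first step in most ML workflows is partitioning it by label. A hedged sketch, assuming the export is CSV-like with a binary `label` column; the actual Om export schema may differ, so treat the field names as placeholders:

```python
import csv
import io

def split_binders(csv_text, label_field="label", positive="1"):
    """Partition rows of a dataset export into binders and non-binders.

    Assumes a CSV export with a binary label column; the field name and the
    encoding of the positive class are illustrative, not the Om schema.
    """
    binders, non_binders = [], []
    for row in csv.DictReader(io.StringIO(csv_text)):
        (binders if row[label_field] == positive else non_binders).append(row)
    return binders, non_binders
```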

3. Launch model workflows without reinventing the wheel

Hub tools inside Om MCP let you launch supported models and then stay on top of the execution path through the jobs surface. You can inspect status, wait on completion, retrieve results, and pass artifact URLs into downstream steps without bouncing across separate dashboards or scripts.

4. Let your agent use Om as part of the workflow, not as a separate system

This is the real unlock. Once Om is inside the harness, the agent can ask for research, decide whether to pull data, launch the next job, and keep the outputs inside the same working context. That is a much better loop than manually stitching together URLs and request bodies while the rest of your session goes stale.

Setup is simple

Start by getting an API key from omtx.ai/settings/api-keys, then export it locally:

export OM_API_KEY="omtx_..."

Then point your MCP client at the hosted Om MCP:

https://agents.omtx.ai/mcp

That is it. Customers do not need the Om repo, a local Python process, or a hand-managed MCP server just to get started.

Codex setup

The simplest path in Codex is the CLI:

codex mcp add omtx --url https://agents.omtx.ai/mcp --bearer-token-env-var OM_API_KEY

Or add it manually to ~/.codex/config.toml:

[mcp_servers.omtx]
url = "https://agents.omtx.ai/mcp"
bearer_token_env_var = "OM_API_KEY"

Restart Codex after saving the config. Once the server is attached, you can ask it to search a target, retrieve datasets, launch workflows, or inspect jobs directly through the MCP tool surface.

Claude Code setup

You can add it from the CLI:

claude mcp add omtx https://agents.omtx.ai/mcp --transport http --header "x-api-key: $OM_API_KEY"

Or add it to your project’s .mcp.json:

{
  "mcpServers": {
    "omtx": {
      "type": "http",
      "url": "https://agents.omtx.ai/mcp",
      "headers": {
        "x-api-key": "${OM_API_KEY}"
      }
    }
  }
}

Restart Claude Code, then verify the MCP is attached before you start using Om tools in-session.

Any MCP-compatible harness

If your harness can connect to a remote HTTP MCP server and send your Om API key in headers, the same hosted Om MCP works there too. The important point is that the Om tool surface is not tied to one IDE. It is tied to the MCP contract.
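As a concrete sketch, most harnesses that speak remote HTTP MCP accept a configuration shaped roughly like this (the field names vary by client, so treat this as illustrative rather than a spec; the URL and header match the Codex and Claude Code setups above):

```json
{
  "name": "omtx",
  "transport": "http",
  "url": "https://agents.omtx.ai/mcp",
  "headers": {
    "x-api-key": "${OM_API_KEY}"
  }
}
```

Check your harness's MCP documentation for its exact key names and environment-variable interpolation rules.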

How teams are likely to use it in practice

  • Biology / translational teams: evaluate targets, mechanisms, and pathway context before committing time to a program review.
  • ML / data teams: pull binder and non-binder dataset exports directly into training or ranking workflows.
  • Discovery teams: launch Hub models, monitor the jobs, and pull artifacts back into the session for the next decision.
  • Agent builders: wire Om into Codex, Claude Code, or another harness so research and execution happen in the same loop.

What to ask your MCP once it is wired up

A few concrete starting points:

  • Search KRAS and summarize therapeutic strategy plus trial context.
  • Run deep diligence on KEAP1 resistance mechanisms and show the strongest citations.
  • List available proteins in the dataset catalog and show me which ones match EGFR-family work.
  • Give me molecular binders for this protein UUID and tell me what downstream workflow you would run next.
  • Launch a Hub job, wait on completion, and give me the signed artifact URLs.

What this unlocks

The immediate win is speed. The bigger win is continuity. Om MCP makes it much easier to move from “I have a question” to “I have evidence, data, and a live workflow” without switching mental context every few minutes.

That is why this matters. The future workflow is not just better endpoints. It is better interfaces for using those endpoints. Om MCP is the first serious step toward that.

If you want the setup guide, start at Om MCP Quickstart. If you want the best examples of what to do with it, go to Om MCP Use Cases.
