Source: Higgsfield Mcp 2026 04 28 (Higgsfield MCP product page — https://higgsfield.ai/mcp)

Higgsfield ships an MCP server that drops its full image-and-video generation stack into Claude (or any MCP-compatible agent) as a custom connector. It’s the conversational, web-first counterpart to the Python SDK / REST API — no API keys, no polling code, no request_id bookkeeping. You add one connector, sign in, and Claude can generate, browse, compare, and iterate on cinematic stills and clips inside the chat.

Key Takeaways

  • Connector URL: https://mcp.higgsfield.ai/mcp. Add it as a custom connector in Claude’s settings (Settings → Connectors → Add custom connector → name “Higgsfield” → paste URL → Connect → sign in with Higgsfield account).
  • No API keys. Auth flows through your existing Higgsfield account on connect — different from the SDK path, which requires HF_KEY or HF_API_KEY+HF_API_SECRET env vars.
  • Supported clients: Claude (web, desktop, Claude Code), OpenClaw, Hermes, NemoClaw — the four the page enumerates by name — and, by extension, any MCP-compatible client.^[inferred]
  • Image generation: 16+ models from a single connection. Models named on the page: Soul, Nano Banana Pro, Flux, Seedream. Up to 4K output, configurable aspect ratio.
  • Video generation: 17+ models. Models named on the page: Seedance, Kling, Veo, Minimax Hailuo. Controls for duration, aspect ratio, genre, and start/end frames.
  • Asset library is queryable. Claude can search prior generations, display specific results, and use any previous output as a reference for a new generation — Higgsfield’s history becomes part of the agent’s working memory.
  • Product Ads & Videos preset. Drop a URL or upload a photo and the connector returns campaign-ready images and videos. One prompt fans out to multiple formats.
  • 9 curated video presets: UGC, unboxing, product review, hyper motion, TV spot, and more. The agent passes the chosen mode into generation automatically.
  • Soul Characters for cast consistency. Train a character from reference photos in one turn, then generate scenes/lookbooks/videos around that character across the rest of the conversation.
  • Multi-model comparison. “Generate this scene on 4 different models and show me the results” — same prompt, different models, side by side. Pick the winner and iterate.
  • Credits, not API pricing. The MCP rides on your existing Higgsfield plan credits — no separate billing structure.^[inferred from the page’s promise that it works with your existing Higgsfield account; the page itself doesn’t break down credit costs per model.]

Setup (3 steps)

  1. Open Claude → Settings → Connectors.
  2. Add a custom connector, name it “Higgsfield”, and paste the URL https://mcp.higgsfield.ai/mcp.
  3. Connect and sign in with your existing Higgsfield account.

That’s it — no .mcp.json, no environment variables, no polling loop. From the next message onward Claude can call into image generation, video generation, the asset library, and the marketing-studio presets.
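The web UI path above is the one the page documents. If you run Claude Code and prefer file-based config, the equivalent entry is a sketch like the following — assuming Claude Code’s standard `mcpServers` schema with an HTTP transport; the server name `higgsfield` is arbitrary:

```json
{
  "mcpServers": {
    "higgsfield": {
      "type": "http",
      "url": "https://mcp.higgsfield.ai/mcp"
    }
  }
}
```

Authentication still happens interactively on first connect, so this entry carries no keys or secrets.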

What changes vs the API/SDK path

| Surface | Auth | Programmability | Best for |
| --- | --- | --- | --- |
| MCP connector (this article) | Sign in once, no keys | Conversational — Claude picks model + params | Day-to-day creative work, product ads, character-driven campaigns, single-prompt workflows |
| Python SDK | `HF_KEY` or key+secret env vars | Code — submit / poll / callback / managed request patterns | Production pipelines, scheduled jobs, webhook integration, anything stitched into other systems |
| Direct REST | API key + secret | Raw HTTPS to platform.higgsfield.ai | Languages without an SDK, debugging, edge runtimes |

Same models, same credit pool, different ergonomics. The MCP is for the agent surface; the SDK is for engineering surfaces. They don’t conflict — connect both if you do both.
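For contrast, the boilerplate the MCP eliminates looks roughly like this. A minimal sketch of the submit/poll pattern the SDK row describes — `submit_job` and `get_status` are hypothetical stand-ins, not Higgsfield’s actual API, and the in-memory backend exists only to make the sketch runnable:

```python
import time

# Fake in-memory backend so the sketch runs without a real API.
# Real code would hit platform.higgsfield.ai with HF_KEY credentials.
_JOBS = {}

def submit_job(prompt: str, model: str) -> str:
    """Submit a generation; get back a request_id to poll (hypothetical name)."""
    request_id = f"req-{len(_JOBS) + 1}"
    _JOBS[request_id] = {"polls_left": 2, "result": f"{model} render of: {prompt}"}
    return request_id

def get_status(request_id: str) -> dict:
    """Return {'status': 'queued'|'completed', 'result': ...} (hypothetical name)."""
    job = _JOBS[request_id]
    if job["polls_left"] > 0:
        job["polls_left"] -= 1
        return {"status": "queued", "result": None}
    return {"status": "completed", "result": job["result"]}

def generate(prompt: str, model: str = "soul", interval: float = 0.01) -> str:
    """The request_id bookkeeping the MCP connector hides: submit, then poll."""
    request_id = submit_job(prompt, model)
    while True:
        status = get_status(request_id)
        if status["status"] == "completed":
            return status["result"]
        time.sleep(interval)  # back off between polls

print(generate("neon-lit Tokyo alley at night"))
```

In the MCP path, all of this collapses into one conversational turn: Claude submits, waits, and surfaces the result itself.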

Use Cases (per the page)

  • E-Commerce & Product — lifestyle product shots, background swaps, promotional videos.^[inferred] The page lists “E-Commerce & Product” as a use-case card; the elaboration is consistent with the Product Ads preset description but isn’t quoted verbatim.
  • Social Media & Content Creation — scroll-optimized images and short-form videos sized for the major channels.^[inferred]
  • Marketing Agencies — multi-format campaign variations and client-ready assets.^[inferred]
  • Filmmaking — storyboarding, concept art, previsualization, consistent character casting (Soul Characters).^[inferred]
  • Infographics & Visual Data — custom illustrations and supporting imagery.^[inferred]

Three workflow shapes

The page calls out three paths of increasing depth:

  1. Generate one asset in seconds. Describe what you need; Claude picks the model, sets parameters, delivers. Example prompt the page shows: “Generate a cinematic wide shot of a neon-lit Tokyo alley at night.”
  2. Build a visual system from a conversation. Train a Soul Character, generate scenes across locations and styles, produce videos, manage everything from your generation history. Example prompt: “Train a Soul Character from these photos, then generate a 10-image lookbook.”
  3. Compare models side by side. Same prompt across Flux, Soul, Cinema Studio, and Seedream. Pick the winner and iterate. Example prompt: “Generate this scene on 4 different models and show me the results.”

Implementation

  • Tool/Service: Higgsfield MCP server (https://mcp.higgsfield.ai/mcp) — product page at https://higgsfield.ai/mcp.
  • Setup: Three steps above. No API keys; OAuth-style sign-in on connect.^[inferred — the page describes “Connect and sign in” but doesn’t name the auth protocol.]
  • Cost: Existing Higgsfield credits. The page does not publish per-model credit costs; check the Higgsfield Cloud dashboard.
  • Integration notes:
    • Treat the MCP as the conversational surface and reach for the SDK when you need scheduled, headless, or webhook-driven generation.
    • Soul Characters live in the same account — character trained via the MCP shows up in the SDK and vice versa.^[inferred from the shared-account model.]
    • The “Product Ads” preset accepts a URL or an uploaded photo. For DTC clients, this is the lowest-friction entry — drop the product page URL into the conversation.
    • Multi-model side-by-side is a credit multiplier: 4 models × 1 prompt = 4 generations billed.
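That multiplier is worth making explicit before you run a comparison. A back-of-envelope helper — the per-model costs below are placeholders, since the page does not publish real credit prices:

```python
# Hypothetical per-model credit costs -- the page does not publish these;
# check your Higgsfield dashboard for real numbers.
CREDITS_PER_IMAGE = {"soul": 5, "flux": 4, "seedream": 4, "nano-banana-pro": 6}

def comparison_cost(models: list[str], prompts: int = 1) -> int:
    """A side-by-side comparison bills every model independently:
    total = prompts x sum of each model's per-generation cost."""
    return prompts * sum(CREDITS_PER_IMAGE[m] for m in models)

# One prompt across 4 models is 4 generations billed, not 1.
print(comparison_cost(["soul", "flux", "seedream", "nano-banana-pro"]))  # 19
```

The shape of the math is the point, not the numbers: comparisons scale linearly in both models and prompts.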

Positioning vs other AI video tooling

  • vs the REST API + SDK / webhooks surfaces — same backend, different consumer. MCP for humans-in-conversation; SDK for systems.
  • vs HeyGen Avatar V — HeyGen is talking-head-first (avatars, lip-sync, presenters); Higgsfield MCP is generative-image/video-first (scenes, products, characters across stills + clips). Different jobs to be done, often complementary in a campaign.
  • vs HeyGen Studio Automation with Claude Code — that pattern uses Claude Code to script HeyGen jobs. Higgsfield MCP collapses the scripting layer: you talk to Claude and Claude calls Higgsfield directly.
  • vs Remotion / Hyperframes — those are deterministic composition layers (React + HTML). Higgsfield generates the raw clips and stills the composition layer arranges. Common pipeline: Higgsfield MCP → save best generations to disk → Remotion or Hyperframes for sequencing, captions, brand frame.
  • vs video-use — video-use is a Claude Code editing skill (transcripts, cuts, animation sub-agents). It edits footage you already have. Higgsfield MCP creates the footage to feed it.

Open Questions

  • Per-model credit costs. The page promises “your existing plan credits” but does not list cost per Soul / Flux / Seedream / Seedance / Kling / Veo / Minimax Hailuo generation.
  • Rate limits inside MCP. Same pool as REST? Different? Not stated.
  • Tool names exposed by the MCP. The page describes capabilities (image generation, video generation, asset browsing, product ads, presets) but does not enumerate the exact tool names the MCP advertises to clients.
  • Auth protocol. “Connect and sign in” is shown as a step but the page doesn’t specify OAuth vs custom. Worth verifying when wiring into Hermes or another non-Claude client.
  • Resolution/duration ceilings per model. “Up to 4K” for images and “control duration” for video — explicit per-model max not published here.
  • Hermes/OpenClaw/NemoClaw integration shape. All three are named as supported clients on the page; whether they all use the same connector URL or have client-specific setup is not detailed.

Try It

  1. Open Claude → Settings → Connectors → Add custom connector. Name it “Higgsfield”, paste https://mcp.higgsfield.ai/mcp, click Add → Connect → sign in. Total time: ~30 seconds.
  2. Smoke-test with a single image. Try the page’s example prompt verbatim: “Generate a cinematic wide shot of a neon-lit Tokyo alley at night.” This is the cheapest way to confirm the connector is alive and credits flow.
  3. Run the multi-model comparison. “Generate this scene on 4 different models and show me the results.” Worth the credit burn once — you’ll calibrate which model is your default for which job.
  4. Train a Soul Character on a real client. Pick a recurring face/product and run the lookbook prompt: “Train a Soul Character from these photos, then generate a 10-image lookbook.” Soul Characters carry across generations, so the cost is one-time training + cheap reuses.
  5. For a Smile Springs–style dental client: drop the practice’s homepage URL into the Product Ads preset and ask for a 9-format campaign sweep (UGC, unboxing-of-new-patient-welcome-kit, hyper motion, TV spot). One prompt → multi-channel pack — much closer to a marketer’s day than the SDK path.
  6. If you outgrow the conversational surface, graduate the workflow to the SDK + webhooks for headless / scheduled / pipeline integration. The MCP and SDK share the same account and credit pool, so the migration is no-drama.