
feat: MCP server UX improvements, batch, and spec-based flow creation #12205

Merged
ogabrielluiz merged 63 commits into release-1.9.0 from feat/mcp-server-client
Apr 2, 2026


Conversation

@ogabrielluiz
Contributor

@ogabrielluiz ogabrielluiz commented Mar 16, 2026

Summary

MCP server UX improvements, batch operations, spec-based flow creation, streaming, flow builder tools, and flow management for the Langflow assistant.

Tools (28)

| Group | Tool | Description |
| --- | --- | --- |
| Auth | login | Authenticate with credentials |
| Flow CRUD | create_flow, delete_flow, duplicate_flow, rename_flow | Create, delete, copy, rename flows |
| Flow Spec | create_flow_from_spec, update_flow_from_spec | Declarative flow creation/update from text specs |
| Flow Ops | list_flows, get_flow_info, export_flow | List, inspect, export (with secret redaction) |
| Starters | list_starter_projects, use_starter_project | Browse and use pre-built templates |
| Components | add_component, remove_component, configure_component | Add, remove, configure components |
| Component Info | list_components, get_component_info, components | Inspect instances or search/describe types (merged) |
| Discovery | search_component_types, describe_component_type | Find and describe component types |
| Connections | connect_components, disconnect_components | Wire and unwire component ports |
| Execution | run_flow, build_flow, validate_flow | Run, build, validate with structured errors |
| Debugging | get_build_results, get_component_output | Per-component build data and intermediate outputs |
| Iteration | freeze_component, unfreeze_component, layout_flow_tool | Skip re-execution, re-layout |
| Batch | batch | Multi-action sequences with $N.field references |
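
To make the $N.field convention concrete, here is a minimal sketch of how a batch runner might substitute references to earlier action results before dispatching the next action. The function name, the 0-based indexing, and the flat `$N.field` grammar are assumptions for illustration; the PR's actual batch implementation is not shown here.

```python
import re

def resolve_refs(args: dict, results: list[dict]) -> dict:
    """Replace string values of the form "$N.field" with the matching
    field from the N-th earlier action's result (0-based, assumed)."""
    ref = re.compile(r"^\$(\d+)\.(\w+)$")
    resolved = {}
    for key, value in args.items():
        if isinstance(value, str) and (m := ref.match(value)):
            idx, field = int(m.group(1)), m.group(2)
            resolved[key] = results[idx][field]
        else:
            resolved[key] = value
    return resolved

# Example: action 0 created a flow; the next action references its id.
results = [{"flow_id": "abc-123"}]
args = {"flow_id": "$0.flow_id", "input_value": "hello"}
print(resolve_refs(args, results))  # {'flow_id': 'abc-123', 'input_value': 'hello'}
```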

Flow Builder Tools (lfx)

Reusable building blocks for any consumer (assistant, MCP, CLI):

  • builder.py -- builds flow dicts from text specs using the bundled component registry (no server needed)
  • flow_builder_tools.py -- 9 Langflow components for agent tooling (search, describe, get_field_value, propose_field_edit, add/remove/connect/configure, build_flow)
  • propose_field_edit -- validated JSON Patch generation with dry-run verification
  • flow_to_spec_summary -- compact flow summaries with component IDs for LLM context
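
The dry-run idea behind propose_field_edit can be illustrated with a stdlib-only sketch: apply the proposed patch to a deep copy so the edit can be validated without mutating the live flow. `dry_run_patch` and its replace-only subset of RFC 6902 are hypothetical, not the PR's implementation.

```python
import copy

def dry_run_patch(flow: dict, patch_ops: list[dict]) -> dict:
    """Apply a minimal subset of JSON Patch (replace on dict paths) to a
    deep copy of the flow and return it; the original is untouched."""
    candidate = copy.deepcopy(flow)
    for op in patch_ops:
        if op["op"] != "replace":
            raise ValueError(f"unsupported op: {op['op']}")
        parts = op["path"].strip("/").split("/")
        target = candidate
        for part in parts[:-1]:
            target = target[part]
        if parts[-1] not in target:
            # Validation: a replace must target an existing field.
            raise KeyError(op["path"])
        target[parts[-1]] = op["value"]
    return candidate

flow = {"data": {"name": "demo"}}
patched = dry_run_patch(flow, [{"op": "replace", "path": "/data/name", "value": "v2"}])
```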

Streaming

  • run_flow streams token events via MCP progress notifications
  • Falls back to synchronous POST if streaming yields no result
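
The stream-then-fall-back pattern can be sketched as below. This is an illustration only: `stream_run`, `post_run`, and the event shapes are assumptions, not the PR's actual client API.

```python
async def run_flow_streaming(client, flow_id, payload, report_progress):
    """Stream token events, forwarding each chunk as an MCP progress
    notification; if the stream yields no final result, retry with a
    plain synchronous POST."""
    result = None
    async for event in client.stream_run(flow_id, payload):
        if event.get("event") == "token":
            # Forward each token chunk as a progress notification.
            await report_progress(event["data"]["chunk"])
        elif event.get("event") == "end":
            result = event.get("data")
    if result is None:
        # Streaming produced nothing usable -- fall back synchronously.
        result = await client.post_run(flow_id, payload)
    return result
```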

Response Quality

  • get_flow_info and list_flows include spec_summary with component IDs and connection ports
  • components() merges search + describe in one call
  • validate_flow polls build completion with timeout, returns structured per-component errors
  • export_flow redacts sensitive fields (API keys, passwords) before returning
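
The redaction step can be sketched as below. The keyword set is an assumption; the masking logic mirrors the redact_template snippet quoted in the review comments further down, building a fresh dict for masked entries so the original template is never mutated (the shared-reference bug listed under Bug Fixes).

```python
# Assumed keyword set -- the PR's actual SENSITIVE_KEYWORDS may differ.
SENSITIVE_KEYWORDS = {"api_key", "password", "token", "secret"}

def is_sensitive_field(name: str) -> bool:
    lowered = name.lower()
    return any(kw in lowered for kw in SENSITIVE_KEYWORDS)

def redact_template(template: dict) -> dict:
    """Return a copy of the template with sensitive field values masked."""
    redacted = {}
    for key, value in template.items():
        if isinstance(value, dict) and is_sensitive_field(key) and value.get("value"):
            # Build a new dict so the caller's template is not mutated.
            redacted[key] = {**value, "value": "***REDACTED***"}
        else:
            redacted[key] = value
    return redacted
```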

Bug Fixes

  • Fix param_handler crash when str-typed fields contain Message dicts from chat history
  • Fix redact_template shared reference mutation
  • Fix tool_mode coercion treating None as enabled
  • Fix missing layout_flow in disconnect_components
  • Replace module-level globals with contextvars for session isolation
  • Add action-index context to batch error messages
  • Narrow except Exception to ImportError in flow_graph_repr
  • Disambiguate duplicate component types in graph repr
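
The contextvars swap mentioned above follows a standard pattern: a ContextVar replaces a module-level global so concurrent sessions each see their own lazily created instance. This is a generic reconstruction; the real server holds a LangflowClient rather than the stand-in object used here.

```python
from contextvars import ContextVar

# Context-local slot instead of a module-level global; each asyncio task
# context gets its own value, isolating concurrent MCP sessions.
_client_var: ContextVar = ContextVar("mcp_client", default=None)

def get_client():
    """Return this context's client, creating one lazily on first use."""
    client = _client_var.get()
    if client is None:
        client = object()  # stand-in for the real LangflowClient()
        _client_var.set(client)
    return client

def reset_client():
    """Clear this context's client (e.g., after logout)."""
    _client_var.set(None)
```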

Tests

  • 69 MCP integration tests (full roundtrip through real Langflow app)
  • 35 flow builder / tools / JSON Patch tests
  • 4 param_handler regression tests

Add flow_builder subpackage with pure functions for manipulating flow JSON dicts: component ops, edge creation with ReactFlow handle format, topological layout, and dynamic field detection.

FastMCP server exposing 15 tools across auth, flow, component, connection, and execution groups. Agents can create flows, add and configure components, wire connections, and run flows against a Langflow server through MCP tool calls.
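
The topological layout mentioned above can be sketched with Kahn's algorithm: each node's layer is one past its deepest predecessor, and the x-coordinate follows from the layer. Simplified illustration only; the PR's layout.py differs in details (for instance, how it places disconnected nodes, as discussed in the review below).

```python
from collections import defaultdict, deque

def layer_nodes(node_ids: list[str], edges: list[tuple[str, str]]) -> dict:
    """Assign each node a layer index via Kahn's algorithm; disconnected
    nodes land in layer 0 in this simplified sketch."""
    indegree = {n: 0 for n in node_ids}
    succ = defaultdict(list)
    for src, dst in edges:
        succ[src].append(dst)
        indegree[dst] += 1
    layer = {n: 0 for n in node_ids}
    queue = deque(n for n in node_ids if indegree[n] == 0)
    while queue:
        n = queue.popleft()
        for m in succ[n]:
            # A node sits one layer past its deepest predecessor.
            layer[m] = max(layer[m], layer[n] + 1)
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)
    return layer

# x = layer * horizontal_spacing gives a left-to-right flow layout.
print(layer_nodes(["a", "b", "c"], [("a", "b"), ("b", "c")]))  # {'a': 0, 'b': 1, 'c': 2}
```
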
@github-actions github-actions bot added the community (Pull Request from an external contributor) and enhancement (New feature or request) labels and removed the enhancement label on Mar 16, 2026
@coderabbitai
Contributor

coderabbitai Bot commented Mar 17, 2026

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: aaa69592-3a89-41c9-a346-638698383413

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

🚥 Pre-merge checks: ✅ 5 passed | ❌ 2 failed

❌ Failed checks (1 warning, 1 inconclusive)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 30.58%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
| Title check | ❓ Inconclusive | The title prominently features UX improvements and batch operations, but the raw summary shows the changes focus primarily on the MCP server implementation and flow_builder utilities. | Clarify whether the UX and batch changes are substantive, or refine the title to reflect the actual deliverables: MCP server implementation, flow builder utilities, and REST API integration. |

✅ Passed checks (5 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Test Coverage For New Implementations | ✅ Passed | Three comprehensive test files (1,408 total lines) cover all new implementations, including MCP client/server operations, redaction, registry operations, and flow builder utilities. |
| Test Quality And Coverage | ✅ Passed | 143 tests across 3 files (1,408 lines) with proper async patterns, behavior validation, error-case testing, and real app integration. |
| Test File Naming And Structure | ✅ Passed | All three test files follow the test_*.py naming convention, sit in appropriate directories, use descriptive test names, test classes, and pytest fixtures, and cover positive/negative scenarios and edge cases. |
| Excessive Mock Usage Warning | ✅ Passed | Zero mock imports and zero mock-usage patterns across all 143 tests. |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |



@github-actions github-actions bot added the enhancement (New feature or request) label and removed the enhancement label on Mar 17, 2026
@ogabrielluiz ogabrielluiz removed the community (Pull Request from an external contributor) label on Mar 17, 2026
@github-actions github-actions bot added the enhancement (New feature or request) label and removed the enhancement label on Mar 17, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (13)
src/lfx/tests/test_flow_builder.py (1)

424-443: Test verifies disconnected nodes get distinct positions but doesn't assert layer assignment.

The test test_layout_disconnected_distinct only verifies that two disconnected nodes have different positions. Given the current implementation assigns disconnected nodes to separate sequential layers, consider adding an assertion about the expected x-coordinates or documenting this behavior explicitly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lfx/tests/test_flow_builder.py` around lines 424 - 443, The test
test_layout_disconnected_distinct currently only asserts positions differ;
update it to also assert the expected layer/x-coordinate behavior from
layout_flow by checking that the two nodes have distinct x values corresponding
to sequential layers (e.g., by extracting by_id or positions and asserting
positions[0].x != positions[1].x or a specific offset), or alternatively add a
short comment in the test describing that disconnected nodes are placed on
separate sequential layers by layout_flow so the behavior is documented; refer
to test_layout_disconnected_distinct and layout_flow to make the change.
src/lfx/src/lfx/graph/flow_builder/flow.py (1)

46-49: Substring matching for input/output detection may cause false positives.

The conditions "ChatInput" in node_type and "ChatOutput" in node_type use substring matching, which could incorrectly classify custom components with names like "MyChatInputProcessor" or "NotChatOutput" as inputs/outputs.

Consider using exact matching or prefix/suffix matching if the naming convention is well-defined:

♻️ Suggested alternative (if exact matching is preferred)
-        if "ChatInput" in node_type or "TextInput" in node_type:
-            inputs.append(component_id)
-        if "ChatOutput" in node_type or "TextOutput" in node_type:
-            outputs.append(component_id)
+        if node_type in {"ChatInput", "TextInput"}:
+            inputs.append(component_id)
+        if node_type in {"ChatOutput", "TextOutput"}:
+            outputs.append(component_id)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lfx/src/lfx/graph/flow_builder/flow.py` around lines 46 - 49, The current
substring checks in flow.py that use `"ChatInput" in node_type` and
`"ChatOutput" in node_type` can misclassify components; update the detection
logic to use exact matches or a strict naming rule instead—replace the substring
conditions on node_type with exact equality (e.g., node_type == "ChatInput" /
"TextInput" and node_type == "ChatOutput" / "TextOutput") or, if a naming
convention exists, use startswith/endswith accordingly so only true input/output
component types append the component_id to inputs/outputs; adjust any related
assumptions around node_type parsing in the same function to ensure consistent
behavior.
src/lfx/src/lfx/graph/flow_builder/layout.py (1)

81-86: Disconnected nodes are assigned to separate sequential layers.

Disconnected nodes (those with no edges) are each placed in their own layer (lines 82-86), causing them to spread horizontally. This may be intentional for visual separation, but if the goal is to group all disconnected nodes together, they should share the same next_layer value.

If this is the intended behavior (to visually separate disconnected nodes), the code is correct. Consider adding a brief comment clarifying the design choice.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lfx/src/lfx/graph/flow_builder/layout.py` around lines 81 - 86, The
current loop in layout.py assigns each disconnected node its own incremented
layer (using next_layer inside the for loop over node_ids), which spaces them
across separate layers; to group all disconnected nodes together, set
layers[nid] = next_layer for every nid without incrementing next_layer (i.e.,
move next_layer increment out or remove it), or if the per-node separation is
intentional, add a clarifying comment near the layers/node_ids/next_layer logic
explaining that disconnected nodes are intentionally placed into separate
sequential layers for visual separation.
src/lfx/src/lfx/graph/flow_builder/connect.py (2)

38-41: Consider moving the json import to the top of the file.

The import json inside _custom_stringify is imported on each call. While Python caches imports, placing it at the top of the file is more idiomatic and slightly more efficient.

♻️ Suggested change
 from __future__ import annotations
 
+import json
 from typing import Any
 
 # ... later in _custom_stringify ...
     if isinstance(obj, str):
-        import json
-
         return json.dumps(obj)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lfx/src/lfx/graph/flow_builder/connect.py` around lines 38 - 41, The
local import of json inside _custom_stringify causes repeated imports; move the
import to the module top-level and remove the inline "import json" in the
_custom_stringify function (i.e., ensure json is imported once at the top of
connect.py and update the code path in _custom_stringify that returns
json.dumps(obj) to use that top-level import).

161-166: The keep() filter logic is hard to follow due to double negation.

The keep() function returns True to keep an edge and False to remove it. Line 166 uses bool(target_input and ...) which returns True (keep) when target_input is provided and doesn't match — this is correct but confusing to read.

Consider restructuring for clarity:

♻️ Suggested refactor for readability
     def keep(e: dict) -> bool:
-        if e.get("source") != source_id or e.get("target") != target_id:
-            return True
-        if source_output and e.get("data", {}).get("sourceHandle", {}).get("name") != source_output:
-            return True
-        return bool(target_input and e.get("data", {}).get("targetHandle", {}).get("fieldName") != target_input)
+        # Keep edges that don't match source/target
+        if e.get("source") != source_id or e.get("target") != target_id:
+            return True
+        # Keep if source_output filter is specified but doesn't match
+        if source_output and e.get("data", {}).get("sourceHandle", {}).get("name") != source_output:
+            return True
+        # Keep if target_input filter is specified but doesn't match
+        if target_input and e.get("data", {}).get("targetHandle", {}).get("fieldName") != target_input:
+            return True
+        # Remove this edge (it matches all criteria)
+        return False
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lfx/src/lfx/graph/flow_builder/connect.py` around lines 161 - 166, The
keep() filter in connect.py is hard to read due to double negation; rewrite it
to perform explicit, readable checks against source_id and target_id and return
False when the edge matches the removal criteria and True otherwise. Concretely,
in the keep function, first check if e.get("source") == source_id and
e.get("target") == target_id; if not, return True; then if source_output is
provided and e.get("data", {}).get("sourceHandle", {}).get("name") !=
source_output return True; then if target_input is provided and e.get("data",
{}).get("targetHandle", {}).get("fieldName") != target_input return True;
finally return False (meaning the edge matches and should be removed). Use the
symbols keep, source_id, target_id, source_output, target_input and the data
keys "sourceHandle"/"name" and "targetHandle"/"fieldName" to locate and change
the logic.
src/lfx/src/lfx/graph/flow_builder/component.py (1)

104-107: Non-dict field handling may unintentionally convert field structure.

When a template field is not a dict (line 104-107), the code wraps it as {"value": value}. This changes the field's structure from a primitive to a dict, which could cause issues if other code expects the original primitive type.

However, based on the test registry and typical Langflow templates, fields should always be dicts with metadata. The else branch appears to be a defensive fallback. Consider adding a warning log or raising an error for unexpected field types:

♻️ Optional: Add defensive check
         if isinstance(template[key], dict):
             template[key]["value"] = value
         else:
-            template[key] = {"value": value}
+            # Field should be a dict; convert but log warning
+            import warnings
+            warnings.warn(f"Field '{key}' is not a dict, converting to {{value: ...}}")
+            template[key] = {"value": value}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/lfx/src/lfx/graph/flow_builder/component.py` around lines 104 - 107, The
code currently converts non-dict template fields into {"value": value} which
mutates expected primitive structure; instead, detect unexpected types for
template[key] in the block handling template, log a warning (using the module's
logger or Python's logging) that the field has an unexpected type (include key
and actual type), and skip modifying that field (or raise a ValueError if you
prefer strict behavior); update the else branch that currently wraps the field
so it emits the warning and leaves template[key] unchanged (or raises) to avoid
silently changing the field shape—refer to the variables template, key, value in
this change.
src/backend/base/langflow/agentic/mcp_client/redact.py (2)

9-9: Consider adding common sensitive keywords like credential, auth, and bearer.

The current set covers common cases, but some APIs use variations like credentials, auth_token, or bearer_token that would not be detected.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/backend/base/langflow/agentic/mcp_client/redact.py` at line 9, Update the
SENSITIVE_KEYWORDS set in redact.py to include additional common variants so
tokens are caught (e.g., add "credential", "credentials", "auth", "auth_token",
"bearer", "bearer_token" and any plural/underscore variants you expect); modify
the SENSITIVE_KEYWORDS declaration (currently named SENSITIVE_KEYWORDS) to
include these new strings and ensure matching logic elsewhere that references
SENSITIVE_KEYWORDS continues to work with the expanded set.

21-29: Shallow copy may leak mutations for non-sensitive dict fields.

When value is a dict but not sensitive, line 26 assigns the original dict reference. If the caller later mutates a non-sensitive field's nested dict, the original template is affected. Consider deep-copying or documenting this as intentional behavior.

🛡️ Optional fix for full immutability
 def redact_template(template: dict) -> dict:
     """Return a copy of the template with sensitive field values masked."""
+    import copy
     redacted = {}
     for key, value in template.items():
         if isinstance(value, dict):
             if is_sensitive_field(key) and "value" in value and value["value"]:
                 redacted[key] = {**value, "value": "***REDACTED***"}
             else:
-                redacted[key] = value
+                redacted[key] = copy.deepcopy(value)
         else:
             redacted[key] = value
     return redacted
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/backend/base/langflow/agentic/mcp_client/redact.py` around lines 21 - 29,
The loop in the redaction routine (iterating over template, using
is_sensitive_field) assigns the original dict reference for non-sensitive dict
values which can leak mutations; update the logic in the redact function to copy
nested dicts for non-sensitive fields (e.g., use copy.deepcopy(value) or
value.copy() depending on needed depth) instead of assigning value directly, and
add the necessary import (import copy) if using deepcopy so callers cannot
mutate the original template via returned redacted.
src/backend/base/langflow/agentic/mcp_client/server.py (4)

54-72: Module-level mutable state may cause issues if the server is reused across contexts.

The global _client and _registry are initialized lazily and persist for the process lifetime. This is fine for a single-tenant CLI tool but could cause cross-contamination if the module is imported in a multi-user context.

Consider documenting this limitation or providing a reset mechanism.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/backend/base/langflow/agentic/mcp_client/server.py` around lines 54 - 72,
The module holds process-wide mutable state in _client and _registry via
_get_client and _get_registry which can leak between requests; add a clear/reset
API (e.g., reset_mcp_client and reset_mcp_registry) that sets _client and
_registry to None and callables to reinitialize, and update the FastMCP
initialization/documentation to mention the single-tenant lifetime;
alternatively refactor to accept a client/registry factory or use
request-scoped/context-local storage in places that instantiate LangflowClient
and call load_registry to avoid global persistence.

256-266: Duplicate node lookup logic — consider extracting a helper.

The pattern of iterating nodes to find by component_id appears multiple times (lines 256-261, 379-384). A small helper would reduce duplication.

♻️ Suggested helper extraction
def _find_node(flow: dict, component_id: str) -> dict | None:
    """Find a node in a flow by component ID."""
    for n in flow.get("data", {}).get("nodes", []):
        nid = n.get("data", {}).get("id", n.get("id", ""))
        if nid == component_id:
            return n
    return None
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/backend/base/langflow/agentic/mcp_client/server.py` around lines 256 -
266, Extract the repeated node lookup loop that scans flow.get("data",
{}).get("nodes", []) for a node whose id (n.get("data", {}).get("id",
n.get("id", ""))) matches component_id into a small helper (e.g.,
_find_node(flow: dict, component_id: str) -> dict | None) and replace both
occurrences (the blocks around variables node/component_id at the top-level
search and the later search around lines where node is set/broken) with calls to
that helper; keep behavior identical (return None if not found) and preserve the
existing ValueError raise when _find_node returns None.

494-501: disconnect_components raises ValueError when no connections found, but this may be expected behavior.

If an agent tries to disconnect already-disconnected components, raising an error forces error handling. Consider whether returning {"removed_count": 0} would be more ergonomic for agents.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/backend/base/langflow/agentic/mcp_client/server.py` around lines 494 -
501, The current disconnect_components flow raises ValueError when
fb_remove_connection returns 0; instead make it idempotent by returning
{"removed_count": 0} so agents don't have to treat this as an error. Update the
logic in the function (which calls _get_flow, fb_remove_connection, and
_patch_flow) to: call _get_flow and fb_remove_connection, and if removed == 0
simply return {"removed_count": 0} without raising; only call _patch_flow when
removed > 0 and then return the removed_count.

362-367: Import inside function body incurs overhead on every call.

The is_sensitive_field import at line 363 happens on each get_component_info invocation. Moving it to module level improves performance.

♻️ Move import to module level
 from langflow.agentic.mcp_client.registry import (
     describe_component as reg_describe,
 )
 from langflow.agentic.mcp_client.registry import (
     load_registry,
     search_registry,
 )
+from langflow.agentic.mcp_client.redact import is_sensitive_field

Then remove line 363.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/backend/base/langflow/agentic/mcp_client/server.py` around lines 362 -
367, The local import of is_sensitive_field inside get_component_info is causing
unnecessary overhead on every call; move the import to the module level (add
"from langflow.agentic.mcp_client.redact import is_sensitive_field" at
top-of-file) and remove the in-function import line in get_component_info so the
function uses the module-level is_sensitive_field directly.
src/backend/base/langflow/agentic/mcp_client/client.py (1)

115-156: Each login() call creates a new API key on the server — keys accumulate indefinitely.

The docstring notes this behavior, but there's no mechanism to clean up old keys. Over time, this could clutter the user's API key list. Consider either reusing existing keys or documenting cleanup procedures.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/backend/base/langflow/agentic/mcp_client/client.py` around lines 115 -
156, The login() method currently creates a new API key every call which
accumulates keys; update login to first list existing API keys (e.g., GET
self._url("/api_key") using headers from self._headers() after obtaining
access_token) and if a key with name "mcp-client" exists reuse its "api_key"
instead of creating a new one; only POST to create a new key if none found, or
optionally delete old keys (via DELETE self._url(f"/api_key/{id}")) after
creating a fresh key. Locate logic in the async def login(self, username: str,
password: str) function and use self._url, self._headers, self.access_token, and
self.api_key to implement the GET/list and reuse or cleanup flow, preserving
existing error handling for HTTPStatusError, ConnectError, and TimeoutException.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/backend/base/langflow/agentic/mcp_client/client.py`:
- Around line 36-39: The _client method can create multiple httpx.AsyncClient
instances under concurrent calls; fix by adding an asyncio.Lock (e.g.,
self._http_lock initialized in __init__) and wrapping the check-and-create logic
in an async with self._http_lock: block inside _client so only one coroutine can
create and assign self._http; ensure you still return self._http after the
locked section and preserve follow_redirects=True when constructing
httpx.AsyncClient.
- Around line 101-113: The delete() method calls resp.json() unconditionally
which raises JSONDecodeError for 204 No Content responses; update delete() to
detect empty responses (e.g., check resp.status_code == 204 or if not
resp.content) and return a sensible empty value (None or {}) instead of parsing
JSON, otherwise call resp.json() as before; reference the delete() function and
the resp variable when making this change.

In `@src/backend/tests/unit/utils/test_mcp_client.py`:
- Around line 300-304: The current test TestClientInit.test_default_values uses
a weak assertion on client.server_url; update it to assert against the actual
expected value or the environment override: import os in the test and assert
client.server_url == os.getenv("LANGFLOW_SERVER_URL", <expected_default_url>)
(replace <expected_default_url> with the known default used by LangflowClient),
and keep the assertion that client.access_token is None; reference
LangflowClient, client.server_url, and client.access_token when making the
change.

---

Nitpick comments:
In `@src/backend/base/langflow/agentic/mcp_client/client.py`:
- Around line 115-156: The login() method currently creates a new API key every
call which accumulates keys; update login to first list existing API keys (e.g.,
GET self._url("/api_key") using headers from self._headers() after obtaining
access_token) and if a key with name "mcp-client" exists reuse its "api_key"
instead of creating a new one; only POST to create a new key if none found, or
optionally delete old keys (via DELETE self._url(f"/api_key/{id}")) after
creating a fresh key. Locate logic in the async def login(self, username: str,
password: str) function and use self._url, self._headers, self.access_token, and
self.api_key to implement the GET/list and reuse or cleanup flow, preserving
existing error handling for HTTPStatusError, ConnectError, and TimeoutException.

In `@src/backend/base/langflow/agentic/mcp_client/redact.py`:
- Line 9: Update the SENSITIVE_KEYWORDS set in redact.py to include additional
common variants so tokens are caught (e.g., add "credential", "credentials",
"auth", "auth_token", "bearer", "bearer_token" and any plural/underscore
variants you expect); modify the SENSITIVE_KEYWORDS declaration (currently named
SENSITIVE_KEYWORDS) to include these new strings and ensure matching logic
elsewhere that references SENSITIVE_KEYWORDS continues to work with the expanded
set.
- Around line 21-29: The loop in the redaction routine (iterating over template,
using is_sensitive_field) assigns the original dict reference for non-sensitive
dict values which can leak mutations; update the logic in the redact function to
copy nested dicts for non-sensitive fields (e.g., use copy.deepcopy(value) or
value.copy() depending on needed depth) instead of assigning value directly, and
add the necessary import (import copy) if using deepcopy so callers cannot
mutate the original template via returned redacted.

In `@src/backend/base/langflow/agentic/mcp_client/server.py`:
- Around line 54-72: The module holds process-wide mutable state in _client and
_registry via _get_client and _get_registry which can leak between requests; add
a clear/reset API (e.g., reset_mcp_client and reset_mcp_registry) that sets
_client and _registry to None and callables to reinitialize, and update the
FastMCP initialization/documentation to mention the single-tenant lifetime;
alternatively refactor to accept a client/registry factory or use
request-scoped/context-local storage in places that instantiate LangflowClient
and call load_registry to avoid global persistence.
- Around line 256-266: Extract the repeated node lookup loop that scans
flow.get("data", {}).get("nodes", []) for a node whose id (n.get("data",
{}).get("id", n.get("id", ""))) matches component_id into a small helper (e.g.,
_find_node(flow: dict, component_id: str) -> dict | None) and replace both
occurrences (the blocks around variables node/component_id at the top-level
search and the later search around lines where node is set/broken) with calls to
that helper; keep behavior identical (return None if not found) and preserve the
existing ValueError raise when _find_node returns None.
- Around line 494-501: The current disconnect_components flow raises ValueError
when fb_remove_connection returns 0; instead make it idempotent by returning
{"removed_count": 0} so agents don't have to treat this as an error. Update the
logic in the function (which calls _get_flow, fb_remove_connection, and
_patch_flow) to: call _get_flow and fb_remove_connection, and if removed == 0
simply return {"removed_count": 0} without raising; only call _patch_flow when
removed > 0 and then return the removed_count.
- Around line 362-367: The local import of is_sensitive_field inside
get_component_info is causing unnecessary overhead on every call; move the
import to the module level (add "from langflow.agentic.mcp_client.redact import
is_sensitive_field" at top-of-file) and remove the in-function import line in
get_component_info so the function uses the module-level is_sensitive_field
directly.

In `@src/lfx/src/lfx/graph/flow_builder/component.py`:
- Around lines 104-107: The code currently wraps non-dict template fields as
{"value": value}, silently changing the expected primitive structure. Instead,
detect the unexpected type for template[key], log a warning (via the module's
logger) that includes the key and the actual type, and leave template[key]
unchanged — or raise a ValueError if strict behavior is preferred. Update the
else branch accordingly so it no longer rewrites the field shape; the relevant
variables are template, key, and value.
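A sketch of the warn-and-skip behavior — set_template_value is a hypothetical wrapper; the real block operates inline on template, key, and value:

```python
import logging

logger = logging.getLogger(__name__)

def set_template_value(template: dict, key: str, value) -> None:
    """Only update fields that already have the expected dict shape."""
    field = template.get(key)
    if isinstance(field, dict):
        field["value"] = value
    else:
        # Unexpected shape: warn and leave the field untouched rather
        # than silently wrapping it in {"value": ...}.
        logger.warning("template field %r has unexpected type %s; skipping",
                       key, type(field).__name__)
```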

In `@src/lfx/src/lfx/graph/flow_builder/connect.py`:
- Around line 38-41: The local import of json inside _custom_stringify causes
repeated imports; move the import to the module top-level and remove the inline
"import json" in the _custom_stringify function (i.e., ensure json is imported
once at the top of connect.py and update the code path in _custom_stringify that
returns json.dumps(obj) to use that top-level import).
- Around lines 161-166: The keep() filter in connect.py is hard to read due to
double negation; rewrite it with explicit checks against source_id and
target_id that return False when the edge matches the removal criteria and
True otherwise. Concretely: first, if e.get("source") != source_id or
e.get("target") != target_id, return True; then, if source_output is provided
and e.get("data", {}).get("sourceHandle", {}).get("name") != source_output,
return True; then, if target_input is provided and e.get("data",
{}).get("targetHandle", {}).get("fieldName") != target_input, return True;
finally return False, meaning the edge matches and should be removed. Use the
symbols keep, source_id, target_id, source_output, target_input and the data
keys "sourceHandle"/"name" and "targetHandle"/"fieldName" to locate the logic.

In `@src/lfx/src/lfx/graph/flow_builder/flow.py`:
- Around lines 46-49: The substring checks `"ChatInput" in node_type` and
`"ChatOutput" in node_type` can misclassify components. Use exact matches
(e.g., node_type == "ChatInput" / "TextInput" for inputs and node_type ==
"ChatOutput" / "TextOutput" for outputs) or, if a naming convention exists,
startswith/endswith, so only true input/output component types append the
component_id to inputs/outputs. Adjust any related node_type parsing in the
same function for consistent behavior.

In `@src/lfx/src/lfx/graph/flow_builder/layout.py`:
- Around lines 81-86: The loop assigns each disconnected node its own
incremented layer (next_layer is bumped inside the for loop over node_ids),
spacing them across separate layers. To group all disconnected nodes together,
set layers[nid] = next_layer for every nid without incrementing next_layer.
If the per-node separation is intentional, instead add a clarifying comment
near the layers/node_ids/next_layer logic explaining that disconnected nodes
are deliberately placed on separate sequential layers for visual separation.
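The grouping variant, sketched — assign_disconnected_layer is a hypothetical extraction of the loop:

```python
def assign_disconnected_layer(layers: dict, node_ids: list, next_layer: int) -> None:
    """Place all disconnected nodes on the same layer, not one layer each."""
    for nid in node_ids:
        layers[nid] = next_layer  # next_layer is NOT incremented per node
```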

In `@src/lfx/tests/test_flow_builder.py`:
- Around lines 424-443: test_layout_disconnected_distinct only asserts that
positions differ. Also assert the expected layer/x-coordinate behavior of
layout_flow: check that the two nodes have distinct x values corresponding to
sequential layers (e.g., extract by_id or positions and assert
positions[0].x != positions[1].x, or a specific offset). Alternatively, add a
short comment noting that layout_flow places disconnected nodes on separate
sequential layers, so the behavior is documented.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 60e2e0f5-a475-4799-a9f4-449ba7f6fbb3

📥 Commits

Reviewing files that changed from the base of the PR and between 8dbcbb0 and 564cb17.

📒 Files selected for processing (16)
  • src/backend/base/langflow/agentic/mcp_client/__init__.py
  • src/backend/base/langflow/agentic/mcp_client/__main__.py
  • src/backend/base/langflow/agentic/mcp_client/client.py
  • src/backend/base/langflow/agentic/mcp_client/redact.py
  • src/backend/base/langflow/agentic/mcp_client/registry.py
  • src/backend/base/langflow/agentic/mcp_client/server.py
  • src/backend/base/langflow/initial_setup/setup.py
  • src/backend/base/pyproject.toml
  • src/backend/tests/unit/api/v1/test_mcp_client_server.py
  • src/backend/tests/unit/utils/test_mcp_client.py
  • src/lfx/src/lfx/graph/flow_builder/__init__.py
  • src/lfx/src/lfx/graph/flow_builder/component.py
  • src/lfx/src/lfx/graph/flow_builder/connect.py
  • src/lfx/src/lfx/graph/flow_builder/flow.py
  • src/lfx/src/lfx/graph/flow_builder/layout.py
  • src/lfx/tests/test_flow_builder.py

- Add asyncio.Lock to prevent race condition in _client() under
  concurrent access
- Handle 204 No Content responses in delete() instead of calling
  resp.json() on empty body
- Fix weak assertion in test_default_values
@github-actions github-actions Bot added enhancement New feature or request and removed enhancement New feature or request labels Mar 17, 2026
describe_component_type now shows component_as_tool as an output
for any component with tool_mode-capable outputs. When an agent
connects via component_as_tool, tool_mode is auto-enabled — no
extra step needed.
@github-actions github-actions Bot added enhancement New feature or request and removed enhancement New feature or request labels Mar 17, 2026
The MCP server has no langflow dependencies — only httpx, mcp,
and lfx.graph.flow_builder. Moving it to lfx.mcp makes it usable
without installing langflow. Entry point: lfx-mcp.
@github-actions github-actions Bot added enhancement New feature or request and removed enhancement New feature or request labels Mar 17, 2026
- describe_component_type separates advanced fields from core ones
- search_component_types accepts output_type filter
- list_flows accepts query filter and includes ASCII graph repr
- get_flow_info includes ASCII graph repr
- add duplicate_flow tool
- add list_starter_projects tool
@github-actions github-actions Bot added enhancement New feature or request and removed enhancement New feature or request labels Mar 17, 2026
- use_starter_project creates a flow from a starter template by name
  (starter projects aren't fetchable by ID via /flows/)
- Tests for duplicate_flow, starter projects, graph repr, advanced
  fields, and output_type search
ogabrielluiz and others added 14 commits March 26, 2026 10:46
…tion

Accepts a compact text spec with nodes, edges (using real port names),
and config sections. Agents generate a simple string instead of
constructing nested JSON. Tool mode auto-enabled for component_as_tool.

Handles Prompt Template dynamic variables by parsing {var} from
template text and creating input fields. Cleans up flows on failure.
Type coercion for numeric/boolean config values.
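The {var} parsing step described above might be sketched like this — extract_prompt_variables is illustrative, and the real parser in the PR may differ:

```python
import re

def extract_prompt_variables(template_text: str) -> list[str]:
    """Pull unique {var} placeholders out of a prompt template, in order."""
    seen: set[str] = set()
    out: list[str] = []
    for name in re.findall(r"\{(\w+)\}", template_text):
        if name not in seen:
            seen.add(name)
            out.append(name)
    return out
```

Each extracted name would then become an input field on the Prompt Template component.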
build_flow validates flows by building the graph server-side.
create_flow_from_spec accepts a compact text spec with nodes,
edges, and config. Validates by default (optional).

Handles Prompt Template dynamic {variables}, auto-enables tool_mode
for component_as_tool, cleans up on failure, coerces config types.
- Fix test fixture to use contextvars instead of stale module attributes
- Raise ValueError on malformed spec lines instead of silently dropping
- Disambiguate duplicate component types in flow_graph_repr
- Narrow except Exception to ImportError in flow_graph_repr
- Add action-index context to batch error messages
- Fix stale/inaccurate docstrings (group count, "| ", field_name, category, build_flow)
- Mention create_flow_from_spec in MCP instructions
run_flow now consumes Langflow's SSE stream and relays token events
to the MCP client via report_progress. Falls back to a regular POST
if the stream yields no result.
param_handler's str case called unescape_string on list elements without
type checking. On subsequent agent calls, chat history stores Message dicts
in the list, causing 'dict' object has no attribute 'replace'.

Added _coerce_str_value that extracts .text from Message/Data/dict objects.
Added lfx logger to MCP server with streaming fallback warning.
…mmary

- builder.py: builds flow dicts from text specs using local component
  registry with granular error handling per build phase
- flow_builder_tools.py: 9 Langflow components for agent tooling
  (search, describe, get_field_value, propose_field_edit, add_component,
  remove_component, connect_components, configure_component, build_flow)
- propose_field_edit generates validated JSON Patches with dry-run
- flow_to_spec_summary converts flow dicts to compact summaries with IDs
- Module-level event queue for real-time UI updates during streaming
Exposes per-component build data from the vertex_builds table:
- get_build_results: returns all component outputs, validity, and errors
  from the last run -- useful for debugging which component failed
- get_component_output: inspect a specific component's output from the
  last run to trace where the pipeline broke
Response improvements:
- spec_summary (component IDs + connection ports) in get_flow_info/list_flows
- Merged components() tool: search or describe in one call

Flow management tools:
- validate_flow: polls build results with timeout, structured per-component errors
- rename_flow: update name/description
- export_flow: serialize to JSON with sensitive field redaction
- update_flow_from_spec: declarative update with reference validation

Component iteration tools:
- freeze_component / unfreeze_component: skip re-execution during iteration
- layout_flow_tool: re-layout after modifications

Security: export_flow redacts API keys via redact_node before exposing to LLM.
Includes 18 integration tests covering all new tools.
- _utils.py: shared node_id helper (was duplicated in component.py and layout.py)
- spec.py: validate_spec_references extracted from three copies in
  create_flow_from_spec, update_flow_from_spec, and build_flow_from_spec
@erichare
Collaborator

@ogabrielluiz this looks awesome. I'm still reviewing, but I noticed one of the backend tests is failing:

FAILED src/backend/tests/unit/test_endpoints.py::test_get_all - AssertionError: assert 368 <= 363

Can you double check that?

The mcp_client fixture was accessing mcp_server_module._client and
._registry directly, but these were replaced with contextvars
(_client_var, _shared_client, _set_client, etc.) in the server
module refactor.
# Conflicts:
#	src/backend/tests/unit/api/v1/test_mcp_client_server.py
@ogabrielluiz
Contributor Author

@ogabrielluiz this looks awesome. I'm still reviewing, but I noticed one of the backend tests is failing:

FAILED src/backend/tests/unit/test_endpoints.py::test_get_all - AssertionError: assert 368 <= 363

Can you double check that?

Fixed! Thanks

@erichare
Collaborator

@ogabrielluiz I think this is a different test than the one from the other branch? This seems to be a count of the endpoints exceeding expectations.

All else looks good; once that's fixed I'll give it approval. I think the frontend test shard 47 is just flaky.

erichare and others added 6 commits March 31, 2026 08:28
- Move flow_builder_tools out of components/ into mcp/ (fixes test_get_all)
- Extract _set_frozen() helper to deduplicate freeze/unfreeze
- Add missing tools to batch _TOOL_MAP
- Fix sensitive field detection to use word-boundary matching
- Unify redaction logic via shared is_sensitive_field()
- Log skipped non-JSON SSE lines in stream_post
- Rebuild component index
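The word-boundary sensitive-field matching mentioned above could be sketched like this — the token list and normalization are assumptions, and the real is_sensitive_field in redact.py may differ:

```python
import re

SENSITIVE_TOKENS = ("api key", "token", "secret", "password")

def is_sensitive_field(name: str) -> bool:
    """Word-boundary matching: flags 'openai_api_key' but not 'max_tokens',
    which a plain substring check on 'token' would wrongly redact."""
    words = name.lower().replace("_", " ").replace("-", " ")
    return any(re.search(rf"\b{re.escape(tok)}\b", words)
               for tok in SENSITIVE_TOKENS)
```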
Collaborator

@erichare erichare left a comment

LGTM

autofix-ci Bot and others added 3 commits March 31, 2026 17:36
When a real_time_refresh field (e.g. model_name) is configured before
its dependency (e.g. api_key), the server-side refresh fails. Instead
of propagating a raw RuntimeError, the value is saved locally and a
warning is returned telling the agent to set the credential first.

Labels

enhancement New feature or request lgtm This PR has been approved by a maintainer

5 participants