feat: MCP server UX improvements, batch, and spec-based flow creation #12205

ogabrielluiz merged 63 commits into release-1.9.0 from
Conversation
Add flow_builder subpackage with pure functions for manipulating flow JSON dicts — component ops, edge creation with ReactFlow handle format, topological layout, and dynamic field detection.
FastMCP server exposing 15 tools across auth, flow, component, connection, and execution groups. Agents can create flows, add and configure components, wire connections, and run flows against a Langflow server through MCP tool calls.
Important: Review skipped. Auto incremental reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration.
🚥 Pre-merge checks: ✅ 5 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 3
🧹 Nitpick comments (13)
src/lfx/tests/test_flow_builder.py (1)
424-443: Test verifies disconnected nodes get distinct positions but doesn't assert layer assignment.

The test `test_layout_disconnected_distinct` only verifies that two disconnected nodes have different positions. Given the current implementation assigns disconnected nodes to separate sequential layers, consider adding an assertion about the expected x-coordinates or documenting this behavior explicitly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lfx/tests/test_flow_builder.py` around lines 424-443: the test currently only asserts that positions differ; update it to also assert the layer/x-coordinate behavior of `layout_flow` (e.g., extract positions by node ID and assert distinct x values corresponding to sequential layers, or a specific offset), or add a short comment documenting that `layout_flow` places disconnected nodes on separate sequential layers.

src/lfx/src/lfx/graph/flow_builder/flow.py (1)
46-49: Substring matching for input/output detection may cause false positives.

The conditions `"ChatInput" in node_type` and `"ChatOutput" in node_type` use substring matching, which could incorrectly classify custom components with names like `"MyChatInputProcessor"` or `"NotChatOutput"` as inputs/outputs. Consider using exact matching, or prefix/suffix matching if the naming convention is well-defined:

♻️ Suggested alternative (if exact matching is preferred)

```diff
-        if "ChatInput" in node_type or "TextInput" in node_type:
-            inputs.append(component_id)
-        if "ChatOutput" in node_type or "TextOutput" in node_type:
-            outputs.append(component_id)
+        if node_type in {"ChatInput", "TextInput"}:
+            inputs.append(component_id)
+        if node_type in {"ChatOutput", "TextOutput"}:
+            outputs.append(component_id)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lfx/src/lfx/graph/flow_builder/flow.py` around lines 46-49: replace the substring conditions on `node_type` with exact equality (e.g., `node_type == "ChatInput"` / `"TextInput"` and `node_type == "ChatOutput"` / `"TextOutput"`) or, if a naming convention exists, use `startswith`/`endswith` so only true input/output component types append `component_id` to inputs/outputs; adjust any related assumptions around `node_type` parsing in the same function for consistency.

src/lfx/src/lfx/graph/flow_builder/layout.py (1)
81-86: Disconnected nodes are assigned to separate sequential layers.

Disconnected nodes (those with no edges) are each placed in their own layer (lines 82-86), causing them to spread horizontally. This may be intentional for visual separation, but if the goal is to group all disconnected nodes together, they should share the same `next_layer` value. If the current behavior is intended, the code is correct; consider adding a brief comment clarifying the design choice.
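The two placements can be contrasted in a tiny standalone sketch (the function name and shape are illustrative, not lfx's actual `layout_flow` internals):

```python
def assign_layers(node_ids, *, grouped, next_layer=0):
    # Illustrative only: contrast the two strategies for disconnected nodes.
    # grouped=True puts them all on one shared layer; grouped=False (the
    # current behavior the comment describes) gives each node its own
    # sequential layer, spreading them horizontally.
    layers = {}
    for nid in node_ids:
        layers[nid] = next_layer
        if not grouped:
            next_layer += 1
    return layers

same_layer = assign_layers(["a", "b", "c"], grouped=True, next_layer=2)
spread_out = assign_layers(["a", "b", "c"], grouped=False, next_layer=2)
```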
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lfx/src/lfx/graph/flow_builder/layout.py` around lines 81-86: to group all disconnected nodes together, set `layers[nid] = next_layer` for every nid without incrementing `next_layer` (i.e., move the increment out or remove it); or, if the per-node separation is intentional, add a clarifying comment near the `layers`/`node_ids`/`next_layer` logic explaining that disconnected nodes are intentionally placed on separate sequential layers for visual separation.

src/lfx/src/lfx/graph/flow_builder/connect.py (2)
38-41: Consider moving the `json` import to the top of the file.

The `import json` inside `_custom_stringify` runs on each call. While Python caches imports, placing it at the top of the file is more idiomatic and slightly more efficient.

♻️ Suggested change

```diff
 from __future__ import annotations

+import json
 from typing import Any

 # ... later in _custom_stringify ...
     if isinstance(obj, str):
-        import json
-
         return json.dumps(obj)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lfx/src/lfx/graph/flow_builder/connect.py` around lines 38 - 41, The local import of json inside _custom_stringify causes repeated imports; move the import to the module top-level and remove the inline "import json" in the _custom_stringify function (i.e., ensure json is imported once at the top of connect.py and update the code path in _custom_stringify that returns json.dumps(obj) to use that top-level import).
161-166: The `keep()` filter logic is hard to follow due to double negation.

The `keep()` function returns `True` to keep an edge and `False` to remove it. Line 166 uses `bool(target_input and ...)`, which returns `True` (keep) when `target_input` is provided and doesn't match — this is correct but confusing to read. Consider restructuring for clarity:

♻️ Suggested refactor for readability

```diff
 def keep(e: dict) -> bool:
-    if e.get("source") != source_id or e.get("target") != target_id:
-        return True
-    if source_output and e.get("data", {}).get("sourceHandle", {}).get("name") != source_output:
-        return True
-    return bool(target_input and e.get("data", {}).get("targetHandle", {}).get("fieldName") != target_input)
+    # Keep edges that don't match source/target
+    if e.get("source") != source_id or e.get("target") != target_id:
+        return True
+    # Keep if source_output filter is specified but doesn't match
+    if source_output and e.get("data", {}).get("sourceHandle", {}).get("name") != source_output:
+        return True
+    # Keep if target_input filter is specified but doesn't match
+    if target_input and e.get("data", {}).get("targetHandle", {}).get("fieldName") != target_input:
+        return True
+    # Remove this edge (it matches all criteria)
+    return False
```

🤖 Prompt for AI Agents
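The refactored predicate can be exercised on its own; a runnable sketch (the `make_keep` wrapper and sample edges are illustrative — in `connect.py` these names are captured from the enclosing function, and the edge shape follows the handles shown in the suggestion):

```python
def make_keep(source_id, target_id, source_output=None, target_input=None):
    # Illustrative closure around the refactored keep() predicate.
    def keep(e: dict) -> bool:
        if e.get("source") != source_id or e.get("target") != target_id:
            return True  # different endpoints: keep
        if source_output and e.get("data", {}).get("sourceHandle", {}).get("name") != source_output:
            return True  # source filter set but doesn't match: keep
        if target_input and e.get("data", {}).get("targetHandle", {}).get("fieldName") != target_input:
            return True  # target filter set but doesn't match: keep
        return False  # matches all criteria: remove

    return keep

edges = [
    {"source": "A", "target": "B",
     "data": {"sourceHandle": {"name": "out"}, "targetHandle": {"fieldName": "in"}}},
    {"source": "A", "target": "C",
     "data": {"sourceHandle": {"name": "out"}, "targetHandle": {"fieldName": "in"}}},
]
kept = [e for e in edges if make_keep("A", "B", "out", "in")(e)]
```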
Verify each finding against the current code and only fix it if needed. In `@src/lfx/src/lfx/graph/flow_builder/connect.py` around lines 161-166: rewrite `keep()` with explicit, readable checks: if `e.get("source") != source_id or e.get("target") != target_id`, return True; if `source_output` is provided and the edge's `sourceHandle`/`name` doesn't match, return True; if `target_input` is provided and the edge's `targetHandle`/`fieldName` doesn't match, return True; finally return False (the edge matches and should be removed).

src/lfx/src/lfx/graph/flow_builder/component.py (1)
104-107: Non-dict field handling may unintentionally convert field structure.

When a template field is not a dict (lines 104-107), the code wraps it as `{"value": value}`. This changes the field's structure from a primitive to a dict, which could cause issues if other code expects the original primitive type. However, based on the test registry and typical Langflow templates, fields should always be dicts with metadata; the else branch appears to be a defensive fallback. Consider adding a warning log or raising an error for unexpected field types:

♻️ Optional: Add defensive check

```diff
 if isinstance(template[key], dict):
     template[key]["value"] = value
 else:
-    template[key] = {"value": value}
+    # Field should be a dict; convert but log warning
+    import warnings
+    warnings.warn(f"Field '{key}' is not a dict, converting to {{value: ...}}")
+    template[key] = {"value": value}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lfx/src/lfx/graph/flow_builder/component.py` around lines 104-107: instead of silently converting non-dict fields to `{"value": value}`, log a warning (via the module's logger or Python's logging) that the field has an unexpected type, including the key and actual type, and leave `template[key]` unchanged, or raise a ValueError if strict behavior is preferred, to avoid silently changing the field shape.

src/backend/base/langflow/agentic/mcp_client/redact.py (2)
9-9: Consider adding common sensitive keywords like `credential`, `auth`, and `bearer`.

The current set covers common cases, but some APIs use variations like `credentials`, `auth_token`, or `bearer_token` that would not be detected.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/base/langflow/agentic/mcp_client/redact.py` at line 9, Update the SENSITIVE_KEYWORDS set in redact.py to include additional common variants so tokens are caught (e.g., add "credential", "credentials", "auth", "auth_token", "bearer", "bearer_token" and any plural/underscore variants you expect); modify the SENSITIVE_KEYWORDS declaration (currently named SENSITIVE_KEYWORDS) to include these new strings and ensure matching logic elsewhere that references SENSITIVE_KEYWORDS continues to work with the expanded set.
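A hedged sketch of what the expanded set might look like. The baseline keywords and the matching rule actually used in `redact.py` are not shown in this review, so both are assumptions here:

```python
# Hypothetical expanded set; the real SENSITIVE_KEYWORDS in redact.py may differ.
SENSITIVE_KEYWORDS = {
    "api_key", "password", "secret", "token",
    "credential", "credentials", "auth", "auth_token",
    "bearer", "bearer_token",
}

def is_sensitive_field(name: str) -> bool:
    # Assumes case-insensitive substring matching against field names.
    lowered = name.lower()
    return any(keyword in lowered for keyword in SENSITIVE_KEYWORDS)
```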
21-29: Shallow copy may leak mutations for non-sensitive dict fields.

When `value` is a dict but not sensitive, line 26 assigns the original dict reference. If the caller later mutates a non-sensitive field's nested dict, the original template is affected. Consider deep-copying or documenting this as intentional behavior.

🛡️ Optional fix for full immutability

```diff
 def redact_template(template: dict) -> dict:
     """Return a copy of the template with sensitive field values masked."""
+    import copy
     redacted = {}
     for key, value in template.items():
         if isinstance(value, dict):
             if is_sensitive_field(key) and "value" in value and value["value"]:
                 redacted[key] = {**value, "value": "***REDACTED***"}
             else:
-                redacted[key] = value
+                redacted[key] = copy.deepcopy(value)
         else:
             redacted[key] = value
     return redacted
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/base/langflow/agentic/mcp_client/redact.py` around lines 21-29: non-sensitive dict values are assigned by reference, which can leak mutations; copy nested dicts for non-sensitive fields (e.g., `copy.deepcopy(value)` or `value.copy()` depending on the depth needed) instead of assigning `value` directly, and add `import copy` if using deepcopy, so callers cannot mutate the original template via the returned dict.

src/backend/base/langflow/agentic/mcp_client/server.py (4)
54-72: Module-level mutable state may cause issues if the server is reused across contexts.

The global `_client` and `_registry` are initialized lazily and persist for the process lifetime. This is fine for a single-tenant CLI tool but could cause cross-contamination if the module is imported in a multi-user context. Consider documenting this limitation or providing a reset mechanism.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/base/langflow/agentic/mcp_client/server.py` around lines 54 - 72, The module holds process-wide mutable state in _client and _registry via _get_client and _get_registry which can leak between requests; add a clear/reset API (e.g., reset_mcp_client and reset_mcp_registry) that sets _client and _registry to None and callables to reinitialize, and update the FastMCP initialization/documentation to mention the single-tenant lifetime; alternatively refactor to accept a client/registry factory or use request-scoped/context-local storage in places that instantiate LangflowClient and call load_registry to avoid global persistence.
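The suggested reset mechanism can be sketched with a generic factory standing in for `LangflowClient`/`load_registry` (names and shapes illustrative):

```python
_client = None  # process-wide, as in server.py

def get_client(factory=dict):
    # Lazy init; `factory` stands in for LangflowClient(...) here.
    global _client
    if _client is None:
        _client = factory()
    return _client

def reset_mcp_client():
    # The review's suggested escape hatch for tests / multi-tenant reuse.
    global _client
    _client = None

first = get_client()
second = get_client()   # same instance: lazy singleton
reset_mcp_client()
third = get_client()    # fresh instance after reset
```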
256-266: Duplicate node lookup logic — consider extracting a helper.

The pattern of iterating nodes to find by `component_id` appears multiple times (lines 256-261, 379-384). A small helper would reduce duplication.

♻️ Suggested helper extraction
```python
def _find_node(flow: dict, component_id: str) -> dict | None:
    """Find a node in a flow by component ID."""
    for n in flow.get("data", {}).get("nodes", []):
        nid = n.get("data", {}).get("id", n.get("id", ""))
        if nid == component_id:
            return n
    return None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/base/langflow/agentic/mcp_client/server.py` around lines 256 - 266, Extract the repeated node lookup loop that scans flow.get("data", {}).get("nodes", []) for a node whose id (n.get("data", {}).get("id", n.get("id", ""))) matches component_id into a small helper (e.g., _find_node(flow: dict, component_id: str) -> dict | None) and replace both occurrences (the blocks around variables node/component_id at the top-level search and the later search around lines where node is set/broken) with calls to that helper; keep behavior identical (return None if not found) and preserve the existing ValueError raise when _find_node returns None.
494-501: `disconnect_components` raises ValueError when no connections found, but this may be expected behavior.

If an agent tries to disconnect already-disconnected components, raising an error forces error handling. Consider whether returning `{"removed_count": 0}` would be more ergonomic for agents.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/base/langflow/agentic/mcp_client/server.py` around lines 494 - 501, The current disconnect_components flow raises ValueError when fb_remove_connection returns 0; instead make it idempotent by returning {"removed_count": 0} so agents don't have to treat this as an error. Update the logic in the function (which calls _get_flow, fb_remove_connection, and _patch_flow) to: call _get_flow and fb_remove_connection, and if removed == 0 simply return {"removed_count": 0} without raising; only call _patch_flow when removed > 0 and then return the removed_count.
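The idempotent shape can be sketched standalone. This is a simplified illustration, not the server's implementation: matching is by endpoints only (the real `fb_remove_connection` also filters on handles), and persisting via `_patch_flow` is reduced to an in-place list swap:

```python
def disconnect_components(flow: dict, source_id: str, target_id: str) -> dict:
    # Idempotent variant: a zero match returns a count instead of raising.
    edges = flow.get("data", {}).get("edges", [])
    kept = [e for e in edges
            if not (e.get("source") == source_id and e.get("target") == target_id)]
    removed = len(edges) - len(kept)
    if removed:
        flow["data"]["edges"] = kept  # only persist (cf. _patch_flow) when changed
    return {"removed_count": removed}

flow = {"data": {"edges": [{"source": "A", "target": "B"}]}}
first_call = disconnect_components(flow, "A", "B")
second_call = disconnect_components(flow, "A", "B")  # already disconnected: no error
```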
362-367: Import inside function body incurs overhead on every call.

The `is_sensitive_field` import at line 363 happens on each `get_component_info` invocation. Moving it to module level improves performance.

♻️ Move import to module level

```diff
 from langflow.agentic.mcp_client.registry import (
     describe_component as reg_describe,
 )
 from langflow.agentic.mcp_client.registry import (
     load_registry,
     search_registry,
 )
+from langflow.agentic.mcp_client.redact import is_sensitive_field
```

Then remove line 363.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/base/langflow/agentic/mcp_client/server.py` around lines 362 - 367, The local import of is_sensitive_field inside get_component_info is causing unnecessary overhead on every call; move the import to the module level (add "from langflow.agentic.mcp_client.redact import is_sensitive_field" at top-of-file) and remove the in-function import line in get_component_info so the function uses the module-level is_sensitive_field directly.src/backend/base/langflow/agentic/mcp_client/client.py (1)
src/backend/base/langflow/agentic/mcp_client/client.py (1)

115-156: Each login() call creates a new API key on the server — keys accumulate indefinitely.

The docstring notes this behavior, but there's no mechanism to clean up old keys. Over time, this could clutter the user's API key list. Consider either reusing existing keys or documenting cleanup procedures.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend/base/langflow/agentic/mcp_client/client.py` around lines 115 - 156, The login() method currently creates a new API key every call which accumulates keys; update login to first list existing API keys (e.g., GET self._url("/api_key") using headers from self._headers() after obtaining access_token) and if a key with name "mcp-client" exists reuse its "api_key" instead of creating a new one; only POST to create a new key if none found, or optionally delete old keys (via DELETE self._url(f"/api_key/{id}")) after creating a fresh key. Locate logic in the async def login(self, username: str, password: str) function and use self._url, self._headers, self.access_token, and self.api_key to implement the GET/list and reuse or cleanup flow, preserving existing error handling for HTTPStatusError, ConnectError, and TimeoutException.
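The reuse-or-create decision can be sketched independently of the HTTP layer. The key-dict shape and the "mcp-client" name follow the prompt above; both are assumptions about the real API:

```python
def pick_existing_api_key(existing_keys, name="mcp-client"):
    # Return an existing key's value to reuse, or None to signal that
    # login() should create a fresh key. Assumes each key dict carries
    # "name" and "api_key" fields.
    for key in existing_keys:
        if key.get("name") == name:
            return key.get("api_key")
    return None

keys = [{"name": "ci", "api_key": "k1"}, {"name": "mcp-client", "api_key": "k2"}]
reused = pick_existing_api_key(keys)
missing = pick_existing_api_key([])
```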
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/backend/base/langflow/agentic/mcp_client/client.py`:
- Around line 36-39: The _client method can create multiple httpx.AsyncClient
instances under concurrent calls; fix by adding an asyncio.Lock (e.g.,
self._http_lock initialized in __init__) and wrapping the check-and-create logic
in an async with self._http_lock: block inside _client so only one coroutine can
create and assign self._http; ensure you still return self._http after the
locked section and preserve follow_redirects=True when constructing
httpx.AsyncClient.
- Around line 101-113: The delete() method calls resp.json() unconditionally
which raises JSONDecodeError for 204 No Content responses; update delete() to
detect empty responses (e.g., check resp.status_code == 204 or if not
resp.content) and return a sensible empty value (None or {}) instead of parsing
JSON, otherwise call resp.json() as before; reference the delete() function and
the resp variable when making this change.
In `@src/backend/tests/unit/utils/test_mcp_client.py`:
- Around line 300-304: The current test TestClientInit.test_default_values uses
a weak assertion on client.server_url; update it to assert against the actual
expected value or the environment override: import os in the test and assert
client.server_url == os.getenv("LANGFLOW_SERVER_URL", <expected_default_url>)
(replace <expected_default_url> with the known default used by LangflowClient),
and keep the assertion that client.access_token is None; reference
LangflowClient, client.server_url, and client.access_token when making the
change.
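The lock-guarded lazy init suggested for `_client` can be sketched with a stand-in for `httpx.AsyncClient`; the double check ensures concurrent first calls create exactly one instance:

```python
import asyncio

class LazyClient:
    # Sketch of the suggested fix; object() stands in for
    # httpx.AsyncClient(follow_redirects=True).
    def __init__(self):
        self._http = None
        self._http_lock = asyncio.Lock()
        self.created = 0  # instrumentation for the demo

    async def _client(self):
        if self._http is None:
            async with self._http_lock:
                if self._http is None:  # re-check after acquiring the lock
                    self.created += 1
                    self._http = object()
        return self._http

async def demo():
    c = LazyClient()
    await asyncio.gather(*(c._client() for _ in range(10)))
    return c.created

created = asyncio.run(demo())
```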
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 60e2e0f5-a475-4799-a9f4-449ba7f6fbb3
📒 Files selected for processing (16)
src/backend/base/langflow/agentic/mcp_client/__init__.py
src/backend/base/langflow/agentic/mcp_client/__main__.py
src/backend/base/langflow/agentic/mcp_client/client.py
src/backend/base/langflow/agentic/mcp_client/redact.py
src/backend/base/langflow/agentic/mcp_client/registry.py
src/backend/base/langflow/agentic/mcp_client/server.py
src/backend/base/langflow/initial_setup/setup.py
src/backend/base/pyproject.toml
src/backend/tests/unit/api/v1/test_mcp_client_server.py
src/backend/tests/unit/utils/test_mcp_client.py
src/lfx/src/lfx/graph/flow_builder/__init__.py
src/lfx/src/lfx/graph/flow_builder/component.py
src/lfx/src/lfx/graph/flow_builder/connect.py
src/lfx/src/lfx/graph/flow_builder/flow.py
src/lfx/src/lfx/graph/flow_builder/layout.py
src/lfx/tests/test_flow_builder.py
- Add asyncio.Lock to prevent race condition in _client() under concurrent access
- Handle 204 No Content responses in delete() instead of calling resp.json() on empty body
- Fix weak assertion in test_default_values
describe_component_type now shows component_as_tool as an output for any component with tool_mode-capable outputs. When an agent connects via component_as_tool, tool_mode is auto-enabled — no extra step needed.
The MCP server has no langflow dependencies — only httpx, mcp, and lfx.graph.flow_builder. Moving it to lfx.mcp makes it usable without installing langflow. Entry point: lfx-mcp.
- describe_component_type separates advanced fields from core ones
- search_component_types accepts output_type filter
- list_flows accepts query filter and includes ASCII graph repr
- get_flow_info includes ASCII graph repr
- add duplicate_flow tool
- add list_starter_projects tool
- use_starter_project creates a flow from a starter template by name (starter projects aren't fetchable by ID via /flows/)
- Tests for duplicate_flow, starter projects, graph repr, advanced fields, and output_type search
…tion
Accepts a compact text spec with nodes, edges (using real port names),
and config sections. Agents generate a simple string instead of
constructing nested JSON. Tool mode auto-enabled for component_as_tool.
Handles Prompt Template dynamic variables by parsing {var} from
template text and creating input fields. Cleans up flows on failure.
Type coercion for numeric/boolean config values.
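The numeric/boolean coercion mentioned above might look roughly like this; the exact rules in the PR aren't shown, so treat this as a sketch:

```python
def coerce_config_value(raw: str):
    # Illustrative coercion for spec config values: booleans first,
    # then int, then float, otherwise keep the original string.
    lowered = raw.strip().lower()
    if lowered in {"true", "false"}:
        return lowered == "true"
    try:
        return int(raw)
    except ValueError:
        pass
    try:
        return float(raw)
    except ValueError:
        return raw

values = [coerce_config_value(v) for v in ["true", "42", "0.7", "gpt-4o"]]
```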
build_flow validates flows by building the graph server-side.
create_flow_from_spec accepts a compact text spec with nodes,
edges, and config. Validates by default (optional).
Handles Prompt Template dynamic {variables}, auto-enables tool_mode
for component_as_tool, cleans up on failure, coerces config types.
- Fix test fixture to use contextvars instead of stale module attributes
- Raise ValueError on malformed spec lines instead of silently dropping
- Disambiguate duplicate component types in flow_graph_repr
- Narrow except Exception to ImportError in flow_graph_repr
- Add action-index context to batch error messages
- Fix stale/inaccurate docstrings (group count, "| ", field_name, category, build_flow)
- Mention create_flow_from_spec in MCP instructions
run_flow now consumes Langflow's SSE stream and relays token events to the MCP client via report_progress. Falls back to a regular POST if the stream yields no result.
param_handler's str case called unescape_string on list elements without type checking. On subsequent agent calls, chat history stores Message dicts in the list, causing 'dict' object has no attribute 'replace'. Added _coerce_str_value that extracts .text from Message/Data/dict objects. Added lfx logger to MCP server with streaming fallback warning.
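A hedged sketch of the coercion described above; the real _coerce_str_value may cover more shapes, and the "text" key/attribute lookup is an assumption about Message/Data payloads:

```python
def coerce_str_value(value):
    """Return a plain string from a raw value that may be a Message/Data-style
    dict (as stored in chat history), an object exposing .text, or a string."""
    if isinstance(value, dict):
        # Message/Data payloads carry their content under "text"
        return str(value.get("text", ""))
    text = getattr(value, "text", None)  # Message/Data objects expose .text
    if text is not None:
        return str(text)
    return str(value)
```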
…mmary

- builder.py: builds flow dicts from text specs using local component registry with granular error handling per build phase
- flow_builder_tools.py: 9 Langflow components for agent tooling (search, describe, get_field_value, propose_field_edit, add_component, remove_component, connect_components, configure_component, build_flow)
- propose_field_edit generates validated JSON Patches with dry-run
- flow_to_spec_summary converts flow dicts to compact summaries with IDs
- Module-level event queue for real-time UI updates during streaming
Exposes per-component build data from the vertex_builds table:

- get_build_results: returns all component outputs, validity, and errors from the last run -- useful for debugging which component failed
- get_component_output: inspect a specific component's output from the last run to trace where the pipeline broke
Response improvements:

- spec_summary (component IDs + connection ports) in get_flow_info/list_flows
- Merged components() tool: search or describe in one call

Flow management tools:

- validate_flow: polls build results with timeout, structured per-component errors
- rename_flow: update name/description
- export_flow: serialize to JSON with sensitive field redaction
- update_flow_from_spec: declarative update with reference validation

Component iteration tools:

- freeze_component / unfreeze_component: skip re-execution during iteration
- layout_flow_tool: re-layout after modifications

Security: export_flow redacts API keys via redact_node before exposing to LLM.

Includes 18 integration tests covering all new tools.
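The validate_flow polling loop reduces to a generic poll-with-timeout helper; the name, defaults, and return convention here are assumptions:

```python
import time

def poll_until(fetch, timeout=30.0, interval=0.5):
    """Poll fetch() until it returns a non-None result or the timeout
    elapses; returns None on timeout so the caller can report failure."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch()
        if result is not None:
            return result
        time.sleep(interval)
    return None
```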
- _utils.py: shared node_id helper (was duplicated in component.py and layout.py)
- spec.py: validate_spec_references extracted from three copies in create_flow_from_spec, update_flow_from_spec, and build_flow_from_spec
…ai/langflow into feat/mcp-server-client
@ogabrielluiz this looks awesome. I'm still reviewing, but I noticed one of the backend tests is failing:
Can you double check that?
The mcp_client fixture was accessing mcp_server_module._client and ._registry directly, but these were replaced with contextvars (_client_var, _shared_client, _set_client, etc.) in the server module refactor.
# Conflicts:
#   src/backend/tests/unit/api/v1/test_mcp_client_server.py
Fixed! Thanks
@ogabrielluiz I think this is a different test from the one on the other branch? This one seems to be a count of the endpoints exceeding expectations. All else looks good; once that's fixed I'll give it approval. I think the frontend test shard 47 is just flaky.
- Move flow_builder_tools out of components/ into mcp/ (fixes test_get_all)
- Extract _set_frozen() helper to deduplicate freeze/unfreeze
- Add missing tools to batch _TOOL_MAP
- Fix sensitive field detection to use word-boundary matching
- Unify redaction logic via shared is_sensitive_field()
- Log skipped non-JSON SSE lines in stream_post
- Rebuild component index
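The word-boundary matching for sensitive field names can be sketched as follows; the pattern list and compiled regex are illustrative, and the real is_sensitive_field may differ:

```python
import re

_SENSITIVE_PATTERNS = ["api_key", "password", "token", "secret"]
# Anchor each pattern on underscore/start/end boundaries so that
# e.g. "max_tokens" is not flagged by the "token" pattern.
_SENSITIVE_RE = re.compile(
    r"(?:^|_)(?:" + "|".join(_SENSITIVE_PATTERNS) + r")(?:_|$)"
)

def is_sensitive_field(field_name: str) -> bool:
    """True when the field name contains a sensitive word on a boundary."""
    return bool(_SENSITIVE_RE.search(field_name.lower()))
```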
When a real_time_refresh field (e.g. model_name) is configured before its dependency (e.g. api_key), the server-side refresh fails. Instead of propagating a raw RuntimeError, the value is saved locally and a warning is returned telling the agent to set the credential first.
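The save-locally-and-warn behavior can be sketched like this; every name here is hypothetical and the real server-side refresh plumbing is more involved:

```python
def configure_with_refresh(node_template: dict, field: str, value, refresh):
    """Set a field, then attempt the server-side real_time_refresh callback.
    If refresh fails (e.g. a dependency like api_key is unset), keep the
    local value and return a warning instead of raising."""
    node_template[field] = value
    try:
        refresh(node_template)
        return {"status": "ok"}
    except RuntimeError as exc:
        return {
            "status": "warning",
            "message": (
                f"Saved {field!r} locally, but refresh failed: {exc}. "
                "Set the required credential first, then reconfigure."
            ),
        }
```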
Summary
MCP server UX improvements, batch operations, spec-based flow creation, streaming, flow builder tools, and flow management for the Langflow assistant.
Tools (28)
- login
- create_flow, delete_flow, duplicate_flow, rename_flow
- create_flow_from_spec, update_flow_from_spec
- list_flows, get_flow_info, export_flow
- list_starter_projects, use_starter_project
- add_component, remove_component, configure_component
- list_components, get_component_info, components
- search_component_types, describe_component_type
- connect_components, disconnect_components
- run_flow, build_flow, validate_flow
- get_build_results, get_component_output
- freeze_component, unfreeze_component, layout_flow_tool
- batch with $N.field references

Flow Builder Tools (lfx)
Reusable building blocks for any consumer (assistant, MCP, CLI):
- builder.py -- builds flow dicts from text specs using bundled component registry (no server needed)
- flow_builder_tools.py -- 9 Langflow components for agent tooling (search, describe, get_field_value, propose_field_edit, add/remove/connect/configure, build_flow)
- propose_field_edit -- validated JSON Patch generation with dry-run verification
- flow_to_spec_summary -- compact flow summaries with component IDs for LLM context

Streaming
- run_flow streams token events via MCP progress notifications

Response Quality
- get_flow_info and list_flows include spec_summary with component IDs and connection ports
- components() merges search + describe in one call
- validate_flow polls build completion with timeout, returns structured per-component errors
- export_flow redacts sensitive fields (API keys, passwords) before returning

Bug Fixes
- param_handler crash when str-typed fields contain Message dicts from chat history
- redact_template shared reference mutation
- tool_mode coercion treating None as enabled
- layout_flow in disconnect_components
- contextvars for session isolation
- batch error messages include action-index context
- except Exception narrowed to ImportError in flow_graph_repr

Tests