Fix multifaucet DB exec in CI by replacing docker_container_exec with…#736

Closed
vivekgsharma wants to merge 30 commits into v1.0-dev from vivek/devent-ci-testing

Conversation

vivekgsharma (Collaborator) commented Mar 5, 2026

Testing Devnet creation automation using CI

Summary by CodeRabbit

  • Refactor

    • More robust container command execution and database readiness checks with higher retry tolerance.
    • Initialization loops updated to explicitly iterate masternode names for more predictable behavior.
    • Discovery step now reports no structural changes unless explicitly needed.
  • Bug Fixes

    • Promotional code schema and data loading made more reliable during deployment.
  • Chores

    • Deployment workflow: optional write-key, resume-capable create flow, safer config handling, and improved Terraform variable handling.

coderabbitai bot commented Mar 5, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Replaced multifaucet docker_container_exec tasks with shell-based docker exec calls; extended DB wait retries and strengthened its success check. Introduced mn_names fact and switched masternode loops to iterate keys. Added resume-mode and optional write-key logic plus conditional config handling to create-devnet workflow.

Changes

Cohort / File(s) Summary
Multifaucet Docker -> Shell
ansible/roles/multifaucet/tasks/main.yml
Replaced docker_container_exec with ansible.builtin.shell using docker exec. Converted SQL execution tasks to inline -e SQL via shell. Rewrote the DB wait as a multiline shell task; its until condition now checks both rc == 0 and stdout content, and retries increased from 10 to 30.
Masternode iteration refactor
ansible/roles/mn_init/tasks/main.yml, ansible/roles/mn_protx_config/tasks/main.yml
Added mn_names fact (keys from mnlist when it is a mapping). Updated loops to iterate mn_names and changed ProTx-result checks to explicit length checks.
mn_find_collateral changed semantics
ansible/roles/mn_find_collateral/tasks/main.yml
find-collateral.py task now uses changed_when: false unconditionally; collateral success still derived from the command return code (r.rc == 0) for downstream gating.
Create-devnet workflow & resume/write-key
.github/workflows/create-devnet.yml
Added deploy_tags input and optional SSH write key (EVO_APP_DEPLOY_WRITE_KEY) for pushes. Added pre-check to clone dash-network-configs and detect full/partial existing configs, setting resume_mode (true/false) or failing on partial sets. Conditionalized config generation and deploy commands for resume mode, ensured main_domain tfvar is populated/updated, and copy/push steps now only handle existing files (no-op when none).
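The rewritten DB wait described in the table above follows a standard retry pattern: the check must both exit 0 and produce output before the loop succeeds. A minimal sketch of that pattern as a generic helper (the container name, credentials, and retry counts below are illustrative assumptions, not taken from the actual role):

```shell
#!/bin/sh
# Generic retry helper: run a check command until it succeeds,
# up to $1 attempts with $2 seconds between attempts.
# Success requires both exit code 0 AND non-empty stdout,
# mirroring the strengthened "until" condition in the role.
wait_for() {
  max_retries="$1"; delay="$2"; shift 2
  i=0
  while [ "$i" -lt "$max_retries" ]; do
    if out="$("$@" 2>/dev/null)" && [ -n "$out" ]; then
      echo "$out"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Hypothetical usage against the multifaucet DB container:
#   wait_for 30 5 docker exec multifaucet-db-1 \
#     mysql -u multifaucet --password="multifaucet" -e 'SELECT 1;'
wait_for 3 0 echo "db is up"
```

Requiring non-empty stdout guards against commands that exit 0 without actually confirming readiness, which is the gap the review notes the new until condition closes.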

Sequence Diagram(s)

sequenceDiagram
    participant Workflow as GitHub Actions
    participant Repo as dash-network-configs
    participant Generator as config generator
    participant Provisioner as ./bin/deploy (Terraform/Ansible)
    participant Git as Git remote

    Workflow->>Repo: clone dash-network-configs
    Workflow->>Repo: check for `NETWORK_NAME` files (*.yml, *.tfvars, *.inventory)
    alt all files present
        Workflow->>Workflow: set resume_mode = true\ncopy existing files into networks/
        Workflow->>Provisioner: run ./bin/deploy -p -f (provisioning-only)
    else none present
        Workflow->>Generator: generate configs
        Generator->>Workflow: produced files (yml, tfvars, inventory)
        Workflow->>Provisioner: run ./bin/deploy -f (full deploy)
    else some present
        Repo-->>Workflow: fail workflow (partial config detected)
    end
    Workflow->>Git: attempt push configs (use write-key if provided)
    alt push succeeds
        Git-->>Workflow: push OK
    else push fails
        Workflow-->>Workflow: emit warning / exit gracefully
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰
I swapped the shells and waited long,
Pulled names from lists to mend the song.
Resumed the build when configs stayed,
Pushed with a key, or gently swayed.
A happy hop — the network played. 🥕

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title mentions replacing docker_container_exec in multifaucet DB exec, which is a real change in the multifaucet tasks, but the PR includes substantial changes to other roles (mn_init, mn_protx_config, mn_find_collateral) and significant workflow modifications (create-devnet.yml) that are not captured by the title.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
ansible/roles/multifaucet/tasks/main.yml (1)

47-50: Consider adding changed_when: false for idempotent SQL tasks.

The CREATE TABLE IF NOT EXISTS SQL is idempotent, but without changed_when: false, Ansible will report the task as "changed" on every run. This applies to the subsequent INSERT IGNORE tasks as well.

♻️ Proposed fix
 - name: Create promo codes table
   ansible.builtin.shell: |
     docker exec multifaucet-db-1 mysql -h localhost -D multifaucet -u multifaucet --password="multifaucet" \
       -e 'CREATE TABLE IF NOT EXISTS faucet_promo_codes (id int NOT NULL, code varchar(255), minimum_payout int, maximum_payout int, PRIMARY KEY (id));'
+  changed_when: false
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ansible/roles/multifaucet/tasks/main.yml` around lines 47 - 50, The SQL tasks
(e.g., the "Create promo codes table" task that runs the 'CREATE TABLE IF NOT
EXISTS faucet_promo_codes' command and the follow-up tasks that run 'INSERT
IGNORE') are idempotent but will always report changed; update these tasks to
include changed_when: false so Ansible does not mark them as changed on every
run. Locate the task named "Create promo codes table" and other tasks executing
INSERT IGNORE against faucet_promo_codes and add changed_when: false to each
task definition.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@ansible/roles/multifaucet/tasks/main.yml`:
- Around line 52-55: The "Load MN promo code" ansible task's single-line SQL
exceeds the 160-char limit; split the SQL string across multiple lines inside
the ansible.builtin.shell block so no line is longer than 160 chars. Edit the
task named "Load MN promo code" and replace the long -e 'INSERT ...' single line
with a multiline SQL (preserve the docker exec ... -e '<SQL>' structure), e.g.
put the INSERT clause, the VALUES clause, and each value on separate lines while
keeping the masternode_collaterals.mn and {{ masternode_collaterals.mn | int +
10 }} variables intact and correctly quoted/templated so YAML indentation and
shell quoting remain valid.
- Around line 57-61: The shell task "Load HPMN promo code" contains a single
long -e SQL string that exceeds the 160-char limit; split the SQL argument
across multiple lines inside the existing ansible.builtin.shell block so no
single source line is >160 chars, keeping the docker exec invocation and Jinja
vars (masternode_collaterals.hpmn) intact and preserving proper quoting/escaping
for the -e argument and the INSERT IGNORE statement.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a4aca3fd-09f1-4b2b-8737-be9d111ea648

📥 Commits

Reviewing files that changed from the base of the PR and between e456676 and 2619eb6.

📒 Files selected for processing (1)
  • ansible/roles/multifaucet/tasks/main.yml

coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@ansible/roles/mn_init/tasks/main.yml`:
- Around line 42-50: The task "Build masternode names list from mnlist"
currently sets mn_names to [] when mnlist is not a mapping; change it to
validate mnlist and fail fast: if mnlist is a mapping keep the existing
expression to build mn_names (using mnlist | dict2items | map(attribute='key') |
list), otherwise call ansible.builtin.fail with a clear error message indicating
mnlist must be a mapping so the playbook errors on bad input instead of silently
producing an empty mn_names.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6a4ed640-6b28-437b-ab9e-b70dc2037f76

📥 Commits

Reviewing files that changed from the base of the PR and between 2619eb6 and d6d3778.

📒 Files selected for processing (1)
  • ansible/roles/mn_init/tasks/main.yml

Comment on lines +42 to +50

- name: Build masternode names list from mnlist
  ansible.builtin.set_fact:
    mn_names: >-
      {{
        (mnlist | dict2items | map(attribute='key') | list)
        if (mnlist is mapping)
        else []
      }}



⚠️ Potential issue | 🟠 Major

Fail fast on invalid mnlist types instead of silently no-oping.

Line 48 defaults to [] for a non-mapping mnlist, which can make this role pass while doing nothing if the input shape is wrong. Since mnlist is expected to be a mapping (see ansible/deploy.yml), this should be treated as a configuration error.

Proposed fix
+- name: Validate mnlist input type
+  ansible.builtin.assert:
+    that:
+      - mnlist is mapping
+    fail_msg: "mnlist must be a mapping of masternode_name -> config object."
+  when: mnlist is defined and (mnlist | length > 0)

 - name: Build masternode names list from mnlist
   ansible.builtin.set_fact:
-    mn_names: >-
-      {{
-        (mnlist | dict2items | map(attribute='key') | list)
-        if (mnlist is mapping)
-        else []
-      }}
+    mn_names: "{{ (mnlist | default({}) | dict2items | map(attribute='key') | list) }}"

coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
.github/workflows/create-devnet.yml (1)

328-331: Consider restricting config push to successful deployments.

With if: always(), this step runs even when the deploy step fails. Combined with the soft-fail exit 0 on push errors, partially-created or corrupted configs from a failed deployment could be committed to the config repo.

While lines 348-351 handle the "no files" case, config files may exist but be in an invalid state if the deployment failed mid-way (e.g., terraform applied but ansible failed).

Option 1: Only push on success (safest)
     - name: Push configs to dash-network-configs
-      if: always()
+      if: success()
       env:
Option 2: Add explicit condition excluding resume mode failures
     - name: Push configs to dash-network-configs
-      if: always()
+      if: success() || (failure() && steps.existing_configs.outputs.resume_mode != 'true')
       env:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/create-devnet.yml around lines 328 - 331, The workflow
step "Push configs to dash-network-configs" currently uses "if: always()", which
lets it run even when deployment failed; change its condition to run only after
a successful deploy by replacing "if: always()" with a success predicate such as
"if: ${{ success() }}" or, if your deploy job/step is named (e.g. steps.deploy),
use "if: ${{ steps.deploy.outcome == 'success' }}" to explicitly require the
deploy succeeded (or add additional checks to exclude resume-mode failures) so
configs are pushed only on successful deployments.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/create-devnet.yml:
- Around line 285-292: The grep/sed regexes for updating TFVARS_FILE are
inconsistent: the earlier check uses Perl-style \s while the later grep -q and
sed use basic regex where \s is literal, causing misses and duplicate keys
(symbols: CURRENT_MAIN_DOMAIN, DEFAULT_MAIN_DOMAIN, TFVARS_FILE). Fix by using a
consistent POSIX regex (e.g., replace \s* with [[:space:]]* in both the grep and
sed expressions) or enable extended/perl regex (use grep -E/-P and sed -E) so
the existence check and the sed substitution both match lines like `main_domain
= ""` and update instead of appending. Ensure the same pattern is used in the
grep -q '^main_domain[[:space:]]*=' and sed substitution for
^main_domain[[:space:]]*= so duplicates are avoided.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e37ba8ff-ba8f-4113-af18-6ebb88a93c36

📥 Commits

Reviewing files that changed from the base of the PR and between d6d3778 and 20f92a4.

📒 Files selected for processing (2)
  • .github/workflows/create-devnet.yml
  • ansible/roles/mn_protx_config/tasks/main.yml

Comment on lines +285 to +292

if [[ -z "$CURRENT_MAIN_DOMAIN" ]]; then
  echo "Setting main_domain to $DEFAULT_MAIN_DOMAIN..."
  if grep -q '^main_domain\s*=' "$TFVARS_FILE"; then
    sed -i "s|^main_domain\\s*=.*|main_domain = \"$DEFAULT_MAIN_DOMAIN\"|" "$TFVARS_FILE"
  else
    echo "main_domain = \"$DEFAULT_MAIN_DOMAIN\"" >> "$TFVARS_FILE"
  fi
fi


⚠️ Potential issue | 🟡 Minor

Regex character class mismatch may cause incorrect tfvars updates.

Line 282 uses grep -oP (Perl regex) where \s* matches whitespace. However, lines 287-288 use basic grep -q and sed without -E/-P flags, where \s is treated literally, not as a whitespace class.

If the tfvars file contains main_domain = "" (with spaces around =), line 282 will detect it as empty, but line 287's grep won't match ^main_domain\s*=, causing line 290 to append a duplicate main_domain key instead of updating the existing one.

Proposed fix: use consistent regex syntax
-          if grep -q '^main_domain\s*=' "$TFVARS_FILE"; then
-            sed -i "s|^main_domain\\s*=.*|main_domain = \"$DEFAULT_MAIN_DOMAIN\"|" "$TFVARS_FILE"
+          if grep -qE '^main_domain[[:space:]]*=' "$TFVARS_FILE"; then
+            sed -i -E "s|^main_domain[[:space:]]*=.*|main_domain = \"$DEFAULT_MAIN_DOMAIN\"|" "$TFVARS_FILE"
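The safer update-or-append idiom is to use the exact same POSIX pattern for both the existence check and the substitution, so the two can never disagree. A small self-contained sketch (the file path and domain value are placeholders, not from the workflow; `sed -i` without a suffix assumes GNU sed):

```shell
#!/bin/sh
# Demonstrate update-or-append with one shared POSIX pattern.
tfvars=$(mktemp)
printf 'main_domain = ""\n' > "$tfvars"   # spaces around '=', empty value

# Detection and substitution use the identical ERE, so a line that
# matches the check is guaranteed to be rewritten, never duplicated.
if grep -qE '^main_domain[[:space:]]*=' "$tfvars"; then
  sed -i -E 's|^main_domain[[:space:]]*=.*|main_domain = "devnet.example"|' "$tfvars"
else
  echo 'main_domain = "devnet.example"' >> "$tfvars"
fi

grep -c '^main_domain' "$tfvars"   # exactly one entry remains
cat "$tfvars"
rm -f "$tfvars"
```

Note that GNU grep and sed do accept `\s` as an extension even in basic-regex mode, so the original may happen to work on Ubuntu runners; `[[:space:]]` removes the dependency on that extension.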

coderabbitai bot left a comment


♻️ Duplicate comments (1)
.github/workflows/create-devnet.yml (1)

292-296: ⚠️ Potential issue | 🟡 Minor

Regex character class inconsistency may cause duplicate main_domain entries.

Line 287 uses Perl regex (-oP) where \s* correctly matches whitespace, but lines 292-293 use basic grep/sed where \s is treated as a literal backslash-s character. If the tfvars file contains main_domain = "" with spaces around =, the detection will find it empty but the update pattern won't match, causing a duplicate key to be appended.

🔧 Proposed fix using POSIX character classes
-          if grep -q '^main_domain\s*=' "$TFVARS_FILE"; then
-            sed -i "s|^main_domain\\s*=.*|main_domain = \"$DEFAULT_MAIN_DOMAIN\"|" "$TFVARS_FILE"
+          if grep -qE '^main_domain[[:space:]]*=' "$TFVARS_FILE"; then
+            sed -i -E "s|^main_domain[[:space:]]*=.*|main_domain = \"$DEFAULT_MAIN_DOMAIN\"|" "$TFVARS_FILE"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/create-devnet.yml around lines 292 - 296, The grep/sed
block that updates TFVARS_FILE uses grep -q '^main_domain\s*=' and sed -i
"s|^main_domain\\s*=.*|main_domain = \"$DEFAULT_MAIN_DOMAIN\"|" which mixes
Perl-style \s with basic grep/sed, causing the sed match to fail and append a
duplicate; update the sed (and grep if desired) to use POSIX whitespace classes
(e.g. [[:space:]]*) so both detection and replacement consistently match
variants like `main_domain = ""` — modify the grep -q and the sed -i patterns to
use [[:space:]]* (or switch both to a tool that supports \s) while keeping
TFVARS_FILE and DEFAULT_MAIN_DOMAIN unchanged.
🧹 Nitpick comments (3)
ansible/roles/mn_find_collateral/tasks/main.yml (1)

32-37: Consider cleaning up the informal comment.

The comment "Wow does this work?" on line 32 reads as leftover debug/discovery commentary. If this dynamic fact-setting pattern is now established and tested, consider replacing it with a clarifying comment or removing it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ansible/roles/mn_find_collateral/tasks/main.yml` around lines 32 - 37, The
informal comment "Wow does this work?" above the "Set global masternode facts"
task is leftover debugging chatter; remove or replace it with a concise,
clarifying comment explaining the purpose of the task (e.g., "Export
per-masternode collateral facts for later use") so its intent is clear; keep the
dynamic fact names (collateral_ok_{{masternode_name | replace("-","_")}},
collateral_txid_{{masternode_name | replace("-","_")}},
collateral_vout_{{masternode_name | replace("-","_")}}) unchanged.
.github/workflows/create-devnet.yml (2)

186-186: Consider adding error handling for the git clone operation.

If the clone fails (network issues, authentication problems, repo unavailable), the subsequent file checks will fail with confusing error messages. Adding explicit error handling would make troubleshooting easier.

💡 Proposed improvement
-          git clone git@github.com:dashpay/dash-network-configs.git /tmp/dash-network-configs-source
+          if ! git clone git@github.com:dashpay/dash-network-configs.git /tmp/dash-network-configs-source; then
+            echo "::error::Failed to clone dash-network-configs repository"
+            exit 1
+          fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/create-devnet.yml at line 186, Add explicit error handling
around the git clone command so failures produce a clear, actionable message
instead of causing downstream file-check errors; specifically, after the git
clone git@github.com:dashpay/dash-network-configs.git
/tmp/dash-network-configs-source step detect non-zero exit status (or check for
the existence of /tmp/dash-network-configs-source/.git) and fail the job with an
informative error message indicating clone failure and likely causes
(network/auth/repo unavailable), and optionally retry or print git error output
to aid debugging.

369-377: Silent push failure may go unnoticed in workflow results.

Both push attempts use exit 0 on failure, meaning the workflow will report success even if configs weren't pushed. While the ::warning:: annotations help, they're easy to miss in the Actions UI. Consider whether a failed push should be more visible or require explicit acknowledgment.

If the current behavior is intentional (configs push is best-effort), this is fine. Otherwise, you might want to set an output variable or use exit 1 to make failures more visible.
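If the best-effort behavior is kept, one hedged option is to record the push outcome in a step output so a later step or job summary can surface it. A sketch of that idea (the output name `config_push_failed` and the fallback path are illustrative, not from the workflow):

```shell
#!/bin/sh
# Record the outcome of a push command in GITHUB_OUTPUT instead of
# swallowing failures with 'exit 0'. In a real Actions step,
# GITHUB_OUTPUT is provided by the runner; default it for local runs.
GITHUB_OUTPUT="${GITHUB_OUTPUT:-/tmp/github_output}"
: > "$GITHUB_OUTPUT"

record_push_result() {
  # $@ is the push command (e.g. git push origin HEAD).
  if "$@"; then
    echo "config_push_failed=false" >> "$GITHUB_OUTPUT"
  else
    echo "::warning::Config push failed; configs were not saved to the repo"
    echo "config_push_failed=true" >> "$GITHUB_OUTPUT"
  fi
}

# In the workflow this would wrap the real push, e.g.:
#   record_push_result git push origin HEAD
# Simulated failure for illustration:
record_push_result false
```

A downstream step could then gate on `steps.<step_id>.outputs.config_push_failed == 'true'` to fail loudly or post a visible notice, without making the push itself block the deploy.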

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/create-devnet.yml around lines 369 - 377, The workflow
currently swallows git push failures by using "exit 0" in both branches after
GIT_SSH_COMMAND='ssh -i ~/.ssh/id_ed25519_write -o StrictHostKeyChecking=no' git
push and the fallback git push; update this so failures are surfaced: either
change the "exit 0" to "exit 1" to fail the job on push error, or instead set a
workflow output/step variable (e.g., set-output or echo "::set-output
name=config_push_failed::true") and add a later step that requires explicit
acknowledgment; modify the two push error handlers accordingly so the chosen
behavior is applied to both the GIT_SSH_COMMAND push and the fallback git push.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 9ce3ecf2-28b9-4eeb-9d94-8a4fbc2b707d

📥 Commits

Reviewing files that changed from the base of the PR and between 3ab4b34 and 58ac3a0.

📒 Files selected for processing (2)
  • .github/workflows/create-devnet.yml
  • ansible/roles/mn_find_collateral/tasks/main.yml

vivekgsharma force-pushed the vivek/devent-ci-testing branch 2 times, most recently from 90d0922 to bf599ed on March 30, 2026 at 08:38.