diff --git a/images/dib/elements/hotstack-sonic-vs/README.rst b/images/dib/elements/hotstack-sonic-vs/README.rst
index 3c39428d..70fefd51 100644
--- a/images/dib/elements/hotstack-sonic-vs/README.rst
+++ b/images/dib/elements/hotstack-sonic-vs/README.rst
@@ -3,8 +3,8 @@ hotstack-sonic-vs
 ==================
 
 This element creates a CentOS 9 Stream image that runs SONiC
-(Software for Open Networking in the Cloud) as a podman container
-with direct interface movement networking.
+(Software for Open Networking in the Cloud) using systemd-nspawn
+with a persistent network namespace.
 
 Environment Variables
 =====================
@@ -18,12 +18,14 @@ Overview
 
 The image includes:
 
-- Podman for running the SONiC container
-- Custom SONiC-VS image with SSH access and admin user pre-configured
+- systemd-nspawn for running the SONiC container
+- Custom SONiC-VS rootfs with SSH access and admin user pre-configured
+- Persistent network namespace that survives container restarts
 - Systemd service (sonic.service) to manage the container lifecycle
-- Python startup script (start-sonic) that moves host interfaces into container
+- Python startup script (start-sonic) that creates the namespace and moves interfaces
 - Default minimal configuration for management access
 - Support for config_db.json configuration format
+- Simplified lane mapping (sequential 0-31 instead of S6000 defaults)
 
 Configuration
 =============
@@ -36,27 +38,78 @@ The config file contains shell-style variables for the host configuration:
 
 - SWITCH_INTERFACE_START: First interface to move to container (default: eth1)
 - SWITCH_INTERFACE_COUNT: Number of interfaces to move (default: 5)
 - SWITCH_HOSTNAME: SONiC hostname (default: sonic)
-- SONIC_IMAGE: Podman image tag (default: localhost/docker-sonic-vs:hotstack)
-
-Host interfaces are moved directly into the container namespace:
-- Host eth1 -> Container eth0 (SONiC Management0)
-- Host eth2 -> Container eth1 (SONiC Ethernet0)
-- Host eth3 -> Container eth2 (SONiC Ethernet1)
-- etc.
+Host interfaces are moved into a persistent network namespace (sonic-ns)
+before the container starts, ensuring they survive container restarts.
+The mapping to SONiC ports is via lanemap.ini:
+
+- Host eth1 -> Namespace eth1 -> SONiC Ethernet0 (lane 0)
+- Host eth2 -> Namespace eth2 -> SONiC Ethernet4 (lane 1)
+- Host eth3 -> Namespace eth3 -> SONiC Ethernet8 (lane 2)
+- etc. (sequential lane numbering)
 
 The config_db.json file contains SONiC native configuration in JSON format.
 
+Interface Mapping
+==================
+
+This image uses simplified sequential lane numbering (0-31) instead of the
+default Dell S6000/BCM56850 mapping (25-40, etc.). This makes the configuration
+more intuitive for lab environments.
+
+Custom port_config.ini and lanemap.ini are built into the image with:
+
+- 1 lane per interface (instead of 4-lane QSFP)
+- 10G speed per interface (adequate for lab testing)
+- Sequential lane numbers starting at 0
+
+Example mapping::
+
+    eth1 -> lane 0 -> Ethernet0 (10G)
+    eth2 -> lane 1 -> Ethernet4 (10G)
+    eth3 -> lane 2 -> Ethernet8 (10G)
+    eth4 -> lane 3 -> Ethernet12 (10G)
+
+The config_db.json PORT section must match these lane numbers. Example::
+
+    "PORT": {
+        "Ethernet0": {"lanes": "0", "speed": "10000", ...},
+        "Ethernet4": {"lanes": "1", "speed": "10000", ...},
+        ...
+    }
+
+Network Namespace Architecture
+================================
+
+The container uses a persistent network namespace (sonic-ns) that survives
+container restarts:
+
+1. **Setup Phase** (before container starts):
+
+   - Create persistent namespace: /var/run/netns/sonic-ns
+   - Move host interfaces (eth1, eth2, etc.) into sonic-ns
+   - Interfaces remain in namespace even if container crashes
+
+2. **Container Start**:
+
+   - systemd-nspawn joins the sonic-ns namespace
+   - SONiC processes see eth1, eth2, etc. immediately
+   - No race conditions or timing issues
+
+3. **Container Restart**:
+
+   - Namespace persists with interfaces intact
+   - Container rejoins the same namespace
+   - No interface movement needed
+
+This solves the critical issue where interfaces were lost when the podman
+container restarted, as they were tied to the container's ephemeral namespace.
+
 SSH Access
 ==========
 
-The custom SONiC-VS image includes SSH access pre-configured:
+The custom SONiC-VS rootfs includes SSH access pre-configured:
 
 - **Admin user**: Pre-created with sudo, redis, and frrvty groups
 - **SSH daemon**: Starts automatically via supervisord
 - **Authentication**: Uses SSH keys from /etc/hotstack-sonic/authorized_keys
 
 **IMPORTANT**: The authorized_keys file is REQUIRED and must be created via
-cloud-init. The container will not start without it.
+cloud-init.
 
 Example cloud-init configuration::
@@ -71,28 +124,98 @@ To access the switch via SSH::
 
     ssh admin@
 
+Or from the host using machinectl::
+
+    machinectl shell sonic
+
 The admin user has full sudo access (passwordless) and can run all SONiC
 CLI commands (show, config) and FRR commands (vtysh).
 
-Custom Image Build
-==================
+Custom Rootfs Build
+===================
 
-During disk image creation (DIB), a custom SONiC-VS image is built:
+During disk image creation (DIB), a custom SONiC-VS rootfs is prepared:
 
-1. Base SONiC-VS image is loaded on the build host
+1. Base SONiC-VS podman image is loaded on the build host
 2. Custom image is built using Containerfile from the DIB element
-3. Custom image is saved and included in the disk image
-4. On first boot, the pre-built custom image is simply loaded
+3. Custom image is saved as a tar archive and included in the disk image
+4. The archive is extracted to /var/lib/machines/sonic by the install.d
+   hook while the disk image is assembled, so the rootfs is ready on first boot
 
-The custom image adds:
+The custom rootfs adds:
 
 - sudo package
 - admin user with proper groups (sudo, redis, frrvty)
 - SSH host keys and daemon configuration
 - Passwordless sudo for admin user
+- Simplified port_config.ini and lanemap.ini with sequential lane numbering
 
 This approach ensures consistent images across all deployments and faster
-first boot times compared to building the image at runtime.
+first boot times compared to building at runtime.
 
-To customize the image, edit the Containerfile and sshd.conf in the
+To customize the rootfs, edit the Containerfile in the
 images/dib/elements/hotstack-sonic-vs/extra-data.d/ directory and rebuild
 the disk image.
+
+Startup Sequence
+================
+
+1. **Rootfs installation** (at image build time):
+
+   - The SONiC rootfs tar archive is extracted to /var/lib/machines/sonic
+     by the install.d hook, so no import step runs at boot
+
+2. **sonic.service ExecStartPre**:
+
+   - Runs: /usr/local/bin/start-sonic setup
+   - Creates persistent network namespace (sonic-ns)
+   - Moves host interfaces into the namespace
+   - Prepares configuration files
+
+3. **sonic.service ExecStart**:
+
+   - Runs: systemd-nspawn with --network-namespace-path=/var/run/netns/sonic-ns
+   - Container boots and joins the persistent namespace
+   - SONiC services start with interfaces already present
+   - No race conditions or timing issues
+
+4. **Post-start** (optional):
+
+   - Runs: /usr/local/bin/start-sonic (config mode)
+   - Waits for SONiC services (redis, syncd)
+   - Configures management interface from config_db.json
+
+Troubleshooting
+===============
+
+Check namespace and interfaces::
+
+    # List network namespaces
+    ip netns list
+
+    # Check interfaces in sonic-ns namespace
+    ip netns exec sonic-ns ip link show
+
+    # Check container status
+    machinectl status sonic
+
+    # View container logs
+    journalctl -u sonic.service -f
+
+Access container::
+
+    # Interactive shell
+    machinectl shell sonic
+
+    # Run command
+    machinectl shell sonic /usr/bin/supervisorctl status
+
+Inside container, verify SONiC::
+
+    # Check interfaces
+    ip link show
+
+    # Check SONiC configuration
+    cat /usr/share/sonic/hwsku/lanemap.ini
+    cat /usr/share/sonic/hwsku/port_config.ini
+
+    # Check SONiC services
+    supervisorctl status
+
+    # Access SONiC CLI
+    show interfaces status
+    show ip ospf neighbor
diff --git a/images/dib/elements/hotstack-sonic-vs/extra-data.d/11-copy-sonic-image b/images/dib/elements/hotstack-sonic-vs/extra-data.d/11-copy-sonic-image
index 7605d445..af17359d 100755
--- a/images/dib/elements/hotstack-sonic-vs/extra-data.d/11-copy-sonic-image
+++ b/images/dib/elements/hotstack-sonic-vs/extra-data.d/11-copy-sonic-image
@@ -39,23 +39,28 @@ trap 'rm -rf "${BUILD_DIR}"' EXIT
 
 # Get the directory where this script is located (extra-data.d)
 SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
 
-# Copy Containerfile, sshd.conf, and docker-wrapper.sh to build context
+# Copy build files to build context
 cp "${SCRIPT_DIR}/Containerfile" "${BUILD_DIR}/"
 cp "${SCRIPT_DIR}/sshd.conf" "${BUILD_DIR}/"
+cp "${SCRIPT_DIR}/bgpd.conf" "${BUILD_DIR}/"
+cp "${SCRIPT_DIR}/ospfd.conf" "${BUILD_DIR}/"
 cp "${SCRIPT_DIR}/docker-wrapper.sh" "${BUILD_DIR}/"
 
 # Build custom image
 podman build -t "localhost/docker-sonic-vs:hotstack" -f "${BUILD_DIR}/Containerfile" "${BUILD_DIR}"
 
-echo "INFO: Saving custom SONiC-VS image as podman archive"
-mkdir -p "${TMP_HOOKS_PATH}"
-podman save -o "${TMP_HOOKS_PATH}/sonic-image.tar" "localhost/docker-sonic-vs:hotstack"
+echo "INFO: Extracting SONiC rootfs for systemd-nspawn"
+mkdir -p "${TMP_HOOKS_PATH}/sonic-rootfs"
+CONTAINER_ID=$(podman create localhost/docker-sonic-vs:hotstack)
+podman export "${CONTAINER_ID}" | tar -C "${TMP_HOOKS_PATH}/sonic-rootfs" -xf -
+podman rm "${CONTAINER_ID}"
 
-echo "INFO: Compressing SONiC podman archive"
-gzip "${TMP_HOOKS_PATH}/sonic-image.tar"
+echo "INFO: Creating rootfs archive"
+tar -C "${TMP_HOOKS_PATH}/sonic-rootfs" -czf "${TMP_HOOKS_PATH}/sonic-rootfs.tar.gz" .
 
-echo "INFO: Cleaning up images from build host"
+echo "INFO: Cleaning up"
+rm -rf "${TMP_HOOKS_PATH}/sonic-rootfs"
 podman rmi "localhost/docker-sonic-vs:hotstack"
 podman rmi "localhost/docker-sonic-vs:latest"
 
-echo "INFO: Custom SONiC-VS image built and ready for installation (version: ${SONIC_VERSION})"
+echo "INFO: Custom SONiC-VS rootfs built and ready for installation (version: ${SONIC_VERSION})"
diff --git a/images/dib/elements/hotstack-sonic-vs/extra-data.d/Containerfile b/images/dib/elements/hotstack-sonic-vs/extra-data.d/Containerfile
index 6fc2355b..3c2afade 100644
--- a/images/dib/elements/hotstack-sonic-vs/extra-data.d/Containerfile
+++ b/images/dib/elements/hotstack-sonic-vs/extra-data.d/Containerfile
@@ -19,15 +19,17 @@ ARG BASE_IMAGE=localhost/docker-sonic-vs:latest
 
 FROM ${BASE_IMAGE}
 
-# Install sudo, create admin user, and configure SSH
+# Install sudo, create admin user with explicit UID/GID, and configure SSH
+# Using UID/GID 1000 for admin user to ensure consistency
 RUN apt-get update && \
     apt-get install -y sudo && \
     rm -rf /var/lib/apt/lists/* && \
-    useradd -m -s /bin/bash -G sudo,redis,frrvty admin && \
+    groupadd -g 1000 admin && \
+    useradd -m -u 1000 -g 1000 -s /bin/bash -G sudo,redis,frrvty admin && \
     echo "admin ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/admin && \
     chmod 0440 /etc/sudoers.d/admin && \
     mkdir -p /home/admin/.ssh && \
-    chown admin:admin /home/admin/.ssh && \
+    chown 1000:1000 /home/admin/.ssh && \
     chmod 700 /home/admin/.ssh && \
     ssh-keygen -A && \
     mkdir -p /run/sshd
@@ -36,8 +38,10 @@ RUN apt-get update && \
 COPY docker-wrapper.sh /usr/local/bin/docker
 RUN chmod +x /usr/local/bin/docker
 
-# Add supervisord configuration for sshd
+# Add supervisord configuration for sshd, bgpd, ospfd
 COPY sshd.conf /etc/supervisor/conf.d/sshd.conf
+COPY bgpd.conf /etc/supervisor/conf.d/bgpd.conf
+COPY ospfd.conf /etc/supervisor/conf.d/ospfd.conf
 
 # Metadata
 LABEL maintainer="hotstack"
diff --git a/images/dib/elements/hotstack-sonic-vs/static/var/lib/sonic/sshd.conf b/images/dib/elements/hotstack-sonic-vs/extra-data.d/bgpd.conf
similarity index 83%
rename from images/dib/elements/hotstack-sonic-vs/static/var/lib/sonic/sshd.conf
rename to images/dib/elements/hotstack-sonic-vs/extra-data.d/bgpd.conf
index 8c67e9cb..14e1f0c4 100644
--- a/images/dib/elements/hotstack-sonic-vs/static/var/lib/sonic/sshd.conf
+++ b/images/dib/elements/hotstack-sonic-vs/extra-data.d/bgpd.conf
@@ -13,10 +13,10 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
-# Supervisord configuration for SSH daemon
-[program:sshd]
-command=/usr/sbin/sshd -D
-priority=3
+# Supervisord configuration for BGP daemon (FRR)
+[program:bgpd]
+command=/usr/lib/frr/bgpd -A 127.0.0.1 -f /etc/frr/frr.conf
+priority=5
 autostart=true
 autorestart=true
 stdout_logfile=syslog
diff --git a/images/dib/elements/hotstack-sonic-vs/extra-data.d/ospfd.conf b/images/dib/elements/hotstack-sonic-vs/extra-data.d/ospfd.conf
new file mode 100644
index 00000000..362486fe
--- /dev/null
+++ b/images/dib/elements/hotstack-sonic-vs/extra-data.d/ospfd.conf
@@ -0,0 +1,23 @@
+# Copyright Red Hat, Inc.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+# Supervisord configuration for OSPF daemon (FRR)
+[program:ospfd]
+command=/usr/lib/frr/ospfd -A 127.0.0.1 -f /etc/frr/frr.conf
+priority=5
+autostart=true
+autorestart=true
+stdout_logfile=syslog
+stderr_logfile=syslog
diff --git a/images/dib/elements/hotstack-sonic-vs/install.d/50-import-sonic-images b/images/dib/elements/hotstack-sonic-vs/install.d/50-import-sonic-images
index 4b1a5904..22aaefcf 100755
--- a/images/dib/elements/hotstack-sonic-vs/install.d/50-import-sonic-images
+++ b/images/dib/elements/hotstack-sonic-vs/install.d/50-import-sonic-images
@@ -20,10 +20,10 @@ if [ "${DIB_DEBUG_TRACE:-0}" -gt 0 ]; then
     set -x
 fi
 
-SONIC_IMAGE_PATH="/tmp/in_target.d/sonic-image.tar.gz"
+SONIC_ROOTFS_PATH="/tmp/in_target.d/sonic-rootfs.tar.gz"
 
-if [ ! -f "${SONIC_IMAGE_PATH}" ]; then
-    echo "ERROR: SONiC image file not found in build directory: ${SONIC_IMAGE_PATH}"
+if [ ! -f "${SONIC_ROOTFS_PATH}" ]; then
+    echo "ERROR: SONiC rootfs file not found in build directory: ${SONIC_ROOTFS_PATH}"
     echo "ERROR: This should have been processed by extra-data.d/11-copy-sonic-image"
     exit 1
 fi
@@ -36,8 +36,11 @@ if [ -z "${SONIC_VERSION}" ]; then
     SONIC_VERSION="latest"
 fi
 
+echo "INFO: Extracting SONiC rootfs to /var/lib/machines/sonic (version: ${SONIC_VERSION})"
+mkdir -p /var/lib/machines/sonic
+tar -C /var/lib/machines/sonic -xzf "${SONIC_ROOTFS_PATH}"
+
 mkdir -p /var/lib/sonic
-cp "${SONIC_IMAGE_PATH}" /var/lib/sonic/sonic-image.tar.gz
 echo "${SONIC_VERSION}" > /var/lib/sonic/sonic-version.txt
 
-echo "INFO: SONiC podman archive installed (version: ${SONIC_VERSION})"
+echo "INFO: SONiC rootfs installed successfully"
diff --git a/images/dib/elements/hotstack-sonic-vs/package-installs.yaml b/images/dib/elements/hotstack-sonic-vs/package-installs.yaml
index 14a8a6ff..694992aa 100644
--- a/images/dib/elements/hotstack-sonic-vs/package-installs.yaml
+++ b/images/dib/elements/hotstack-sonic-vs/package-installs.yaml
@@ -4,5 +4,6 @@ iproute:
 nmstate:
 podman:
 python3-jinja2:
+systemd-container:
 tcpdump:
 vim-enhanced:
diff --git a/images/dib/elements/hotstack-sonic-vs/static/etc/systemd/system/sonic-import.service b/images/dib/elements/hotstack-sonic-vs/post-install.d/60-set-permissions
old mode 100644
new mode 100755
similarity index 58%
rename from images/dib/elements/hotstack-sonic-vs/static/etc/systemd/system/sonic-import.service
rename to images/dib/elements/hotstack-sonic-vs/post-install.d/60-set-permissions
index 63315c4e..b5a6046a
--- a/images/dib/elements/hotstack-sonic-vs/static/etc/systemd/system/sonic-import.service
+++ b/images/dib/elements/hotstack-sonic-vs/post-install.d/60-set-permissions
@@ -1,3 +1,4 @@
+#!/bin/bash
 # Copyright Red Hat, Inc.
 # All Rights Reserved.
 #
@@ -13,18 +14,13 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
-[Unit]
-Description=Load SONiC Container Image
-Before=sonic.service
-ConditionPathExists=/var/lib/sonic/sonic-image.tar.gz
-ConditionPathExists=!/var/lib/sonic/.image-imported
+set -eu -o pipefail
 
-[Service]
-Type=oneshot
-RemainAfterExit=yes
-ExecStart=/bin/bash -c 'podman load -i /var/lib/sonic/sonic-image.tar.gz && \
-    touch /var/lib/sonic/.image-imported && \
-    echo "SONiC image loaded successfully"'
+if [ "${DIB_DEBUG_TRACE:-0}" -gt 0 ]; then
+    set -x
+fi
 
-[Install]
-WantedBy=multi-user.target
+# Ensure start-sonic script is executable
+chmod +x /usr/local/bin/start-sonic
+
+echo "INFO: Set executable permission for start-sonic script"
diff --git a/images/dib/elements/hotstack-sonic-vs/post-install.d/80-enable-services b/images/dib/elements/hotstack-sonic-vs/post-install.d/80-enable-services
index 686bc519..2fb6eaba 100755
--- a/images/dib/elements/hotstack-sonic-vs/post-install.d/80-enable-services
+++ b/images/dib/elements/hotstack-sonic-vs/post-install.d/80-enable-services
@@ -20,9 +20,8 @@ if [ "${DIB_DEBUG_TRACE:-0}" -gt 0 ]; then
     set -x
 fi
 
-echo "INFO: Enabling SONiC systemd services..."
+echo "INFO: Enabling SONiC systemd service..."
 
-systemctl enable sonic-import.service
 systemctl enable sonic.service
 
-echo "INFO: SONiC services enabled successfully"
+echo "INFO: SONiC service enabled successfully"
diff --git a/images/dib/elements/hotstack-sonic-vs/static/etc/hotstack-sonic/README b/images/dib/elements/hotstack-sonic-vs/static/etc/hotstack-sonic/README
index 2d4fd9db..02993e8b 100644
--- a/images/dib/elements/hotstack-sonic-vs/static/etc/hotstack-sonic/README
+++ b/images/dib/elements/hotstack-sonic-vs/static/etc/hotstack-sonic/README
@@ -22,6 +22,26 @@ frr.conf
     FRRouting configuration file. This file is mounted into the SONiC
     container at /etc/frr/frr.conf.
 
+lanemap.ini (REQUIRED)
+    SONiC-VS interface mapping file. Maps host eth interfaces to SONiC
+    lane numbers, which are then mapped to Ethernet ports. This file is
+    mounted into the SONiC container at /usr/share/sonic/hwsku/lanemap.ini.
+    This file is REQUIRED - the container will not start without it.
+
+    Format: eth<N>:<lane>[,<lane>...]
+    Example:
+        eth1:25,26,27,28
+        eth2:29,30,31,32
+        eth3:33,34,35,36
+
+    The lane numbers must match those defined in port_config.ini. The
+    standard mapping is:
+        eth1 -> lanes 25-28 -> Ethernet0
+        eth2 -> lanes 29-32 -> Ethernet4
+        eth3 -> lanes 33-36 -> Ethernet8
+        eth4 -> lanes 37-40 -> Ethernet12
+        etc. (continuing with +4 increment)
+
 sonic_version.yml
     SONiC version information file. Required by SONiC services. If not
     provided, a default version file will be used.
@@ -51,14 +71,27 @@ Use write_files to create these configuration files:
             }
           }
         }
+    - path: /etc/hotstack-sonic/lanemap.ini
+      content: |
+        # Maps eth1-8 to SONiC lanes
+        eth1:25,26,27,28
+        eth2:29,30,31,32
+        eth3:33,34,35,36
+        eth4:37,38,39,40
+        eth5:41,42,43,44
+        eth6:45,46,47,48
+        eth7:49,50,51,52
+        eth8:53,54,55,56
+      owner: root:root
+      permissions: '0644'
     - path: /etc/hotstack-sonic/authorized_keys
      content: |
        ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC... user@host
      owner: root:root
      permissions: '0644'
 
-Note: The authorized_keys file is REQUIRED. You can use Heat parameters
-to inject SSH keys, for example:
+Note: Both the authorized_keys and lanemap.ini files are REQUIRED. You can
+use Heat parameters to inject SSH keys, for example:
 
   parameters:
     controller_ssh_pub_key:
@@ -69,5 +102,9 @@ to inject SSH keys, for example:
       properties:
         user_data:
           write_files:
+            - path: /etc/hotstack-sonic/lanemap.ini
+              content: |
+                eth1:25,26,27,28
+                eth2:29,30,31,32
             - path: /etc/hotstack-sonic/authorized_keys
              content: {get_param: controller_ssh_pub_key}
diff --git a/images/dib/elements/hotstack-sonic-vs/static/etc/systemd/system/sonic.service b/images/dib/elements/hotstack-sonic-vs/static/etc/systemd/system/sonic.service
index ea5ba580..3bfa3918 100644
--- a/images/dib/elements/hotstack-sonic-vs/static/etc/systemd/system/sonic.service
+++ b/images/dib/elements/hotstack-sonic-vs/static/etc/systemd/system/sonic.service
@@ -14,20 +14,27 @@
 # under the License.
 
 [Unit]
-Description=SONiC Container Switch
-After=network-online.target sonic-import.service
+Description=SONiC systemd-nspawn Container
+After=network-online.target
 Wants=network-online.target
-Requires=sonic-import.service
 ConditionPathExists=/usr/local/bin/start-sonic
+ConditionPathExists=/var/lib/machines/sonic
 
 [Service]
 Type=oneshot
 RemainAfterExit=yes
-ExecStartPre=-/usr/bin/podman stop -t 30 sonic
-ExecStartPre=-/usr/bin/podman rm -f sonic
-ExecStart=/usr/local/bin/start-sonic
-ExecStop=/usr/bin/podman stop -t 30 sonic
-ExecStopPost=/usr/bin/podman rm -f sonic
+ExecStartPre=/usr/local/bin/start-sonic setup
+ExecStart=/usr/bin/systemd-nspawn \
+    --quiet \
+    --keep-unit \
+    --link-journal=try-guest \
+    --network-namespace-path=/var/run/netns/sonic-ns \
+    --directory=/var/lib/machines/sonic \
+    --machine=sonic \
+    --bind=/var/lib/sonic:/etc/sonic \
+    --bind-ro=/etc/hotstack-sonic/authorized_keys:/home/admin/.ssh/authorized_keys \
+    /usr/local/bin/supervisord -c /etc/supervisor/supervisord.conf
+ExecStop=/usr/bin/machinectl poweroff sonic
 Restart=on-failure
 RestartSec=10
diff --git a/images/dib/elements/hotstack-sonic-vs/static/usr/local/bin/start-sonic b/images/dib/elements/hotstack-sonic-vs/static/usr/local/bin/start-sonic
old mode 100755
new mode 100644
index 89d0ba67..c3656f31
--- a/images/dib/elements/hotstack-sonic-vs/static/usr/local/bin/start-sonic
+++ b/images/dib/elements/hotstack-sonic-vs/static/usr/local/bin/start-sonic
@@ -15,18 +15,18 @@
 # under the License.
 
 """
-SONiC Container Startup Script
+SONiC systemd-nspawn Setup Script
 
-Manages the lifecycle of a SONiC containerized switch, including
-container creation, network interface setup, and configuration deployment.
-Designed to be invoked by systemd on boot.
+Manages network namespace setup for SONiC running in systemd-nspawn.
+Supports two modes:
+  - setup: Create network namespace and move interfaces (run before container starts)
+  - config: Wait for SONiC services and configure management interface (run after container starts)
 """
 
 import json
 import logging
 import os
 import re
-import shutil
 import subprocess
 import sys
 import time
@@ -57,494 +57,233 @@ class SonicConfig:
         self.mgmt_interface = "eth0"
         self.switch_interface_start = "eth1"
         self.switch_interface_count = 5
-        self.switch_hostname = "sonic"
-        self.sonic_image = "localhost/docker-sonic-vs:hotstack"
 
     def load_from_file(self, config_file: str = CONFIG_FILE) -> None:
-        """Load configuration from shell-style config file.
+        """Load configuration from shell-style config file."""
+        if not os.path.exists(config_file):
+            LOG.warning(f"Config file {config_file} not found, using defaults")
+            return
 
-        :param config_file: Path to configuration file
-        :raises FileNotFoundError: If config file does not exist
-        """
         with open(config_file, "r") as f:
-            for line in f:
-                line = line.strip()
-                if not line or line.startswith("#"):
-                    continue
-
-                if "=" in line:
-                    key, value = line.split("=", 1)
-                    key = key.strip()
-                    value = value.strip().strip('"').strip("'")
-
-                    if key == "MGMT_INTERFACE":
-                        self.mgmt_interface = value
-                    elif key == "SWITCH_INTERFACE_START":
-                        self.switch_interface_start = value
-                    elif key == "SWITCH_INTERFACE_COUNT":
-                        self.switch_interface_count = int(value)
-                    elif key == "SWITCH_HOSTNAME":
-                        self.switch_hostname = value
-                    elif key == "SONIC_IMAGE":
-                        self.sonic_image = value
+            content = f.read()
+
+        for line in content.splitlines():
+            line = line.strip()
+            if not line or line.startswith("#"):
+                continue
+
+            if "=" not in line:
+                continue
+
+            key, value = line.split("=", 1)
+            key = key.strip()
+            value = value.strip().strip('"').strip("'")
+
+            if key == "MGMT_INTERFACE":
+                self.mgmt_interface = value
+            elif key == "SWITCH_INTERFACE_START":
+                self.switch_interface_start = value
+            elif key == "SWITCH_INTERFACE_COUNT":
+                self.switch_interface_count = int(value)
 
     def log_config(self) -> None:
-        """Log the current configuration to stderr."""
-        LOG.info("Configuration loaded:")
-        LOG.info(f"  Host Management Interface: {self.mgmt_interface}")
+        """Log the current configuration."""
+        LOG.info("Configuration:")
+        LOG.info(f"  Management Interface: {self.mgmt_interface}")
         LOG.info(f"  Switch Interface Start: {self.switch_interface_start}")
         LOG.info(f"  Switch Interface Count: {self.switch_interface_count}")
-        LOG.info(f"  Switch Hostname: {self.switch_hostname}")
-        LOG.info(f"  SONiC Image: {self.sonic_image}")
 
 
-def run_command(
-    cmd: list, check: bool = True, capture_output: bool = True
-) -> subprocess.CompletedProcess:
-    """Run a shell command and return the result.
-
-    :param cmd: Command and arguments as a list
-    :param check: Whether to raise exception on non-zero exit
-    :param capture_output: Whether to capture stdout/stderr
-    :returns: CompletedProcess instance with command results
-    :raises subprocess.CalledProcessError: If command fails and check=True
-    """
+def run_command(cmd: list, check: bool = True) -> subprocess.CompletedProcess:
+    """Run a shell command and return the result."""
     try:
         result = subprocess.run(
-            cmd, check=check, capture_output=capture_output, text=True
+            cmd,
+            check=check,
+            capture_output=True,
+            text=True,
         )
         return result
     except subprocess.CalledProcessError as e:
         if check:
             LOG.error(f"Command failed: {' '.join(cmd)}")
             LOG.error(f"Exit code: {e.returncode}")
+            if e.stdout:
+                LOG.error(f"stdout: {e.stdout}")
             if e.stderr:
-                LOG.error(f"Error output: {e.stderr}")
-        raise
+                LOG.error(f"stderr: {e.stderr}")
+            raise
+        return e
 
 
-def prepare_sonic_directory(config: SonicConfig) -> None:
-    """Prepare the SONiC directory with configuration files.
+def run_in_container(container_pid: str, cmd: list, check: bool = True) -> subprocess.CompletedProcess:
+    """Run a command inside the container using nsenter."""
+    nsenter_cmd = [
+        "nsenter", "-t", container_pid, "-m", "-u", "-i", "-n", "-p"
+    ] + cmd
+    return run_command(nsenter_cmd, check=check)
 
-    Creates the SONiC directory structure and copies configuration.
 
-    :param config: SonicConfig instance with switch configuration
-    :raises FileNotFoundError: If no config is found
-    """
-    LOG.info(f"Preparing SONiC directory: {SONIC_DIR}")
+def parse_interface_name(ifname: str) -> Tuple[str, int]:
+    """Parse interface name into base and number."""
+    match = re.match(r"([a-zA-Z]+)(\d+)", ifname)
+    if not match:
+        raise ValueError(f"Invalid interface name: {ifname}")
+    return match.group(1), int(match.group(2))
 
-    os.makedirs(SONIC_DIR, exist_ok=True)
 
-    if os.path.exists(CONFIG_DB_FILE):
-        LOG.info(f"Using config_db.json from: {CONFIG_DB_FILE}")
-        shutil.copy(CONFIG_DB_FILE, SONIC_CONFIG_DB)
-    elif os.path.exists(DEFAULT_CONFIG_DB):
-        LOG.info("Using default config_db.json")
-        shutil.copy(DEFAULT_CONFIG_DB, SONIC_CONFIG_DB)
-    else:
-        LOG.warning("No config_db.json found, SONiC will use defaults")
-
-    if os.path.exists(FRR_CONF_FILE):
-        LOG.info(f"Using frr.conf from: {FRR_CONF_FILE}")
-        shutil.copy(FRR_CONF_FILE, SONIC_FRR_CONF)
-    elif os.path.exists(DEFAULT_FRR_CONF):
-        LOG.info("Using default frr.conf")
-        shutil.copy(DEFAULT_FRR_CONF, SONIC_FRR_CONF)
-    else:
-        LOG.info("No frr.conf found, FRR will use defaults")
+def create_network_namespace() -> None:
+    """Create persistent network namespace with loopback configured."""
+    netns_path = "/var/run/netns/sonic-ns"
 
-    if os.path.exists(SONIC_VERSION_FILE):
-        LOG.info(f"Using sonic_version.yml from: {SONIC_VERSION_FILE}")
-        shutil.copy(SONIC_VERSION_FILE, SONIC_VERSION_YML)
-    elif os.path.exists(DEFAULT_SONIC_VERSION):
-        LOG.info("Using default sonic_version.yml")
-        shutil.copy(DEFAULT_SONIC_VERSION, SONIC_VERSION_YML)
+    if not os.path.exists(netns_path):
+        LOG.info("Creating persistent network namespace: sonic-ns")
+        run_command(["ip", "netns", "add", "sonic-ns"])
     else:
-        LOG.error("No sonic_version.yml found - SONiC requires this file")
-        raise FileNotFoundError("sonic_version.yml is required but not found")
+        LOG.info("Network namespace sonic-ns already exists")
 
-    LOG.info("SONiC directory prepared successfully")
-
-
-def get_container_pid() -> Optional[int]:
-    """Get the PID of the SONiC container.
-
-    :returns: Container PID or None if container not found
-    """
+    # Always ensure loopback is up (required for redis and inter-service communication)
     result = run_command(
-        ["podman", "inspect", "sonic", "--format", "{{.State.Pid}}"], check=False
+        ["ip", "netns", "exec", "sonic-ns", "ip", "link", "show", "lo"],
+        check=False
     )
+    if "UP" not in result.stdout:
+        LOG.info("Bringing up loopback interface in sonic-ns...")
+        run_command(["ip", "netns", "exec", "sonic-ns", "ip", "link", "set", "lo", "up"])
 
-    if result.returncode != 0:
-        return None
-    if not result.stdout.strip():
-        return None
 
-    try:
-        return int(result.stdout.strip())
-    except ValueError:
-        LOG.warning(f"Could not parse container PID: {result.stdout.strip()}")
-        return None
+def move_interfaces_to_namespace(config: SonicConfig) -> None:
+    """Move and rename host interfaces into the persistent namespace.
 
+    Host eth1 -> Container eth0 (management)
+    Host eth2 -> Container eth1 (first data port)
+    Host eth3 -> Container eth2 (second data port)
+    etc.
 
-def wait_for_container(timeout: int = 60) -> bool:
-    """Wait for the SONiC container to start.
+    Raises RuntimeError if a configured interface doesn't exist.
+    """
+    LOG.info("Moving and renaming interfaces into sonic-ns namespace...")
 
-    :param timeout: Maximum time to wait in seconds
-    :returns: True if container started successfully, False otherwise
-    """
-    LOG.info(f"Waiting for container to start (timeout: {timeout}s)...")
+    if_base, if_start_num = parse_interface_name(config.switch_interface_start)
+
+    for i in range(config.switch_interface_count):
+        host_if = f"{if_base}{if_start_num + i}"
+        container_if = f"eth{i}"  # Rename: host eth1->container eth0, eth2->eth1, etc.
- elapsed = 0 - while elapsed < timeout: + # Check if interface is already in the namespace (with new name) result = run_command( - [ - "podman", - "ps", - "--filter", - "name=sonic", - "--filter", - "status=running", - "--format", - "{{.Names}}", - ], - check=False, + ["ip", "netns", "exec", "sonic-ns", "ip", "link", "show", container_if], + check=False ) - if result.returncode == 0 and "sonic" in result.stdout: - LOG.info("Container started successfully") - return True - - time.sleep(2) - elapsed += 2 - - LOG.error(f"Container failed to start within {timeout} seconds") - return False - - -def parse_interface_name(interface: str) -> Tuple[str, int]: - """Parse interface name into base and number. - - :param interface: Interface name (e.g., 'eth2') - :returns: Tuple of (base, number) (e.g., ('eth', 2)) - :raises ValueError: If interface name format is invalid - """ - match = re.match(r"^([a-z]+)(\d+)$", interface) - if not match: - raise ValueError(f"Invalid interface name: {interface}") - return match.group(1), int(match.group(2)) - + if result.returncode == 0: + LOG.info(f"Interface {container_if} already in sonic-ns namespace") + continue -def attach_interface_to_container( - interface: str, container_pid: int, container_if_name: str -): - """Attach a network interface to the container namespace. + # Check if host interface exists + result = run_command(["ip", "link", "show", host_if], check=False) + if result.returncode != 0: + raise RuntimeError(f"Host interface {host_if} not found - check SWITCH_INTERFACE_START and SWITCH_INTERFACE_COUNT configuration") - Moves the interface into the container's network namespace and renames it. 
+ LOG.info(f"Moving {host_if} into sonic-ns namespace as {container_if}...") + # Move to namespace + run_command(["ip", "link", "set", host_if, "netns", "sonic-ns"]) - :param interface: Host interface name to move - :param container_pid: PID of the target container - :param container_if_name: New name for the interface inside the container - """ - LOG.info(f"Attaching {interface} to container as {container_if_name}...") + # Rename inside the namespace + run_command( + ["ip", "netns", "exec", "sonic-ns", "ip", "link", "set", host_if, "name", container_if] + ) - run_command( - ["ip", "link", "set", interface, "netns", f"/proc/{container_pid}/ns/net"] - ) + # Bring interface up in the namespace + run_command( + ["ip", "netns", "exec", "sonic-ns", "ip", "link", "set", container_if, "up"] + ) - run_command( - [ - "nsenter", - "-t", - str(container_pid), - "-n", - "ip", - "link", - "set", - interface, - "name", - container_if_name, - ] - ) - run_command( - [ - "nsenter", - "-t", - str(container_pid), - "-n", - "ip", - "link", - "set", - container_if_name, - "up", - ] - ) +def setup_network_namespace(config: SonicConfig) -> bool: + """Setup persistent network namespace and move interfaces into it.""" + LOG.info("=" * 60) + LOG.info("SONiC Network Namespace Setup") + LOG.info("=" * 60) -def start_sonic_container(config: SonicConfig): - """Start the SONiC container. + try: + # Create namespace with loopback + create_network_namespace() - Creates and starts a privileged podman container with the SONiC image, - mounting the config directory and SSH authorized_keys. 
+ # Move interfaces + move_interfaces_to_namespace(config) - :param config: SonicConfig instance with switch configuration - :returns: True if container started successfully, False otherwise - """ - LOG.info("Starting SONiC container...") - - cmd = [ - "podman", - "run", - "-d", - "--name", - "sonic", - "--hostname", - config.switch_hostname, - "--privileged", - "--network", - "none", - "-v", - f"{SONIC_DIR}:/etc/sonic:rw", - "-v", - f"{SONIC_FRR_CONF}:/etc/frr/frr.conf:rw", - ] - - # Mount SSH authorized_keys - required for SSH access to admin user - # The authorized_keys file must be provided via cloud-init at /etc/hotstack-sonic/authorized_keys - if not os.path.exists(AUTHORIZED_KEYS_FILE): - LOG.error(f"SSH authorized_keys not found at {AUTHORIZED_KEYS_FILE}") - LOG.error( - "SSH access is required but cannot be configured without authorized_keys" - ) - LOG.error("This file must be created via cloud-init write_files") + LOG.info("Network namespace setup complete") + return True + except Exception as e: + LOG.error(f"Failed to setup network namespace: {e}") return False - LOG.info("Mounting SSH authorized_keys for admin user") - cmd.extend( - [ - "-v", - f"{AUTHORIZED_KEYS_FILE}:/home/admin/.ssh/authorized_keys:ro", - ] - ) - - # Use the custom hotstack image with SSH and admin user configured - cmd.append("localhost/docker-sonic-vs:hotstack") - run_command(cmd) +def prepare_config_files(): + """Prepare SONiC configuration files.""" + os.makedirs(SONIC_DIR, exist_ok=True) - if not wait_for_container(60): - LOG.error("Container failed to start") - LOG.error("Container logs:") - # Try to get container logs, but don't fail if we can't - # (check=False means run_command won't raise) - result = run_command(["podman", "logs", "sonic"], check=False) - if result.returncode == 0: - if result.stdout: - for line in result.stdout.strip().split("\n"): - LOG.error(line) - if result.stderr: - for line in result.stderr.strip().split("\n"): - LOG.error(line) - else: - 
LOG.error("Could not retrieve container logs")
+    # Copy config_db.json
+    if not os.path.exists(CONFIG_DB_FILE):
+        LOG.error(f"Config file not found: {CONFIG_DB_FILE}")
         return False
 
-    LOG.info("SONiC container started successfully")
-    return True
-
-
-def setup_networking(config: SonicConfig):
-    """Setup container networking by moving host interfaces directly.
-
-    Moves host network interfaces directly into the container's network namespace
-    without using macvlan. This provides better performance and avoids macvlan
-    limitations.
+    LOG.info(f"Using config_db.json from {CONFIG_DB_FILE}")
+    with open(CONFIG_DB_FILE, "r") as src:
+        with open(SONIC_CONFIG_DB, "w") as dst:
+            dst.write(src.read())
 
-    :param config: SonicConfig instance with network configuration
-    :returns: True if networking setup succeeded, False otherwise
-    """
-    LOG.info("Setting up container networking (direct interface movement)...")
-
-    container_pid = get_container_pid()
-    if not container_pid:
-        LOG.error("Failed to get container PID")
+    # Copy frr.conf
+    if not os.path.exists(FRR_CONF_FILE):
+        LOG.error(f"FRR config file not found: {FRR_CONF_FILE}")
         return False
 
-    LOG.info(f"Container PID: {container_pid}")
-
-    # Move host interfaces directly into container namespace:
-    # Host eth1 -> Container eth0 (Management0)
-    # Host eth2 -> Container eth1 (Ethernet0)
-    # Host eth3 -> Container eth2 (Ethernet1)
-    # ... and so on
-    if_base, if_start_num = parse_interface_name(config.switch_interface_start)
-
-    for i in range(config.switch_interface_count):
-        host_if = f"{if_base}{if_start_num + i}"
-        container_if = f"eth{i}"
-
-        # Check if host interface exists (check=False so no exception raised)
-        result = run_command(["ip", "link", "show", host_if], check=False)
-        if result.returncode != 0:
-            LOG.warning(f"Host interface {host_if} not found, skipping")
-            continue
+    LOG.info(f"Using frr.conf from {FRR_CONF_FILE}")
+    with open(FRR_CONF_FILE, "r") as src:
+        with open(SONIC_FRR_CONF, "w") as dst:
+            dst.write(src.read())
 
-        LOG.info(f"Moving {host_if} to container as {container_if}...")
-        attach_interface_to_container(host_if, container_pid, container_if)
+    # Copy sonic_version.yml (optional)
+    if os.path.exists(SONIC_VERSION_FILE):
+        with open(SONIC_VERSION_FILE, "r") as src:
+            with open(SONIC_VERSION_YML, "w") as dst:
+                dst.write(src.read())
 
-    LOG.info("Networking setup complete")
     return True
 
 
-def read_config_db():
-    """Read and parse config_db.json.
-
-    :returns: Parsed config_db dictionary, or None if reading failed
-    """
-    try:
-        with open(SONIC_CONFIG_DB, "r") as f:
-            return json.load(f)
-    except FileNotFoundError:
-        LOG.error(f"config_db.json not found at {SONIC_CONFIG_DB}")
-        LOG.error("This file should have been created by prepare_sonic_directory()")
-        return None
-    except (IOError, json.JSONDecodeError) as e:
-        LOG.error(f"Failed to read or parse config_db.json: {e}")
-        return None
-
-
-def configure_management_interface(config_db):
-    """Configure management interface from config_db.json.
-
-    In hardware SONiC, the hostcfgd daemon reads MGMT_INTERFACE from
-    config_db.json and applies it to eth0.
However, hostcfgd requires: - - Full systemd (not available in containers using supervisord) - - sudo command (not needed when running as root in containers) - - System services like hostname-config, rsyslog-config (not in SONiC-VS) - - Since SONiC-VS containers use supervisord instead of systemd, hostcfgd cannot - run. All other config_db.json settings (INTERFACE, PORT, VLAN, VXLAN, etc.) - are properly applied by their respective daemons (intfmgrd, portmgrd, vlanmgrd). - Only MGMT_INTERFACE requires manual configuration in containerized environments. - - This is the standard approach for SONiC-VS - containerlab uses Docker's IPAM - to configure eth0, but we use --network none and move host interfaces, so we - must configure the management IP manually from config_db.json. - - :param config_db: Parsed config_db.json dictionary - :returns: True if configuration succeeded or was already applied, False otherwise - """ - LOG.info("Configuring management interface from config_db.json...") - - # Extract MGMT_INTERFACE configuration - # Format: "eth0|192.168.32.113/24": {"gwaddr": "192.168.32.1"} - mgmt_interfaces = config_db.get("MGMT_INTERFACE", {}) - for key, value in mgmt_interfaces.items(): - if "|" in key and key.startswith("eth0|"): - mgmt_ip = key.split("|")[1] - mgmt_gw = value.get("gwaddr") - break - - if not mgmt_ip or mgmt_ip == "0.0.0.0/0": - LOG.warning( - "No valid management interface configuration found in config_db.json" - ) - return False - - LOG.info(f"Found management interface config: {mgmt_ip}, gateway: {mgmt_gw}") +def wait_for_sonic_services(timeout: int = 120) -> bool: + """Wait for critical SONiC services to be running.""" + LOG.info("Waiting for SONiC services to start...") + required_services = ["redis-server", "syncd"] + poll_interval = 2 + elapsed = 0 - # Apply the IP address - LOG.info(f"Applying management IP {mgmt_ip} to eth0...") - result = run_command( - ["podman", "exec", "sonic", "ip", "addr", "add", mgmt_ip, "dev", "eth0"], - 
check=False, - ) + # Get container PID for nsenter + result = run_command(["machinectl", "show", "sonic", "-p", "Leader", "--value"], check=False) if result.returncode != 0: - LOG.error(f"Failed to add IP address to eth0 (exit code {result.returncode})") - if result.stderr: - LOG.error(f"Error: {result.stderr}") + LOG.error("Failed to get container PID") return False - # Apply the default gateway if configured - # Note: gwaddr can be "0.0.0.0" in default config, which means no gateway - if mgmt_gw and mgmt_gw != "0.0.0.0": - LOG.info(f"Adding default route via {mgmt_gw}...") - result = run_command( - [ - "podman", - "exec", - "sonic", - "ip", - "route", - "add", - "default", - "via", - mgmt_gw, - ], - check=False, - ) - if result.returncode != 0: - LOG.error(f"Failed to add default route (exit code {result.returncode})") - if result.stderr: - LOG.error(f"Error: {result.stderr}") - return False - else: - LOG.info("No gateway configured, skipping default route") - - LOG.info("Management interface configured successfully") - return True - - -def wait_for_sonic_services(timeout: int = 60) -> bool: - """Wait for critical SONiC services to be running. - - Polls supervisorctl status for specific services until redis-server and - orchagent are running, or until timeout is reached. These services are - required before we can configure the management interface. - - Note: We check each service individually because 'supervisorctl status' - (without arguments) returns exit code 3 when ANY service is STOPPED, - but 'supervisorctl status ' returns 0 when that specific service - is RUNNING. 
- - :param timeout: Maximum time to wait in seconds - :returns: True if services are running, False if timeout reached - """ - LOG.info("Waiting for SONiC services to initialize...") - - required_services = ["redis-server", "orchagent"] - elapsed = 0 - poll_interval = 2 + container_pid = result.stdout.strip() + if not container_pid: + LOG.error("Container PID is empty") + return False while elapsed < timeout: all_running = True - for service in required_services: - try: - result = run_command( - ["podman", "exec", "sonic", "supervisorctl", "status", service], - check=False, - ) - - # Exit code 0 and "RUNNING" in output means service is running - if result.returncode == 0 and "RUNNING" in result.stdout: - continue - else: - all_running = False - LOG.debug(f"Service {service} not ready yet") - break - - except (OSError, subprocess.SubprocessError) as e: - # OSError: podman command not found or permission denied - # SubprocessError: other subprocess-related errors - LOG.debug(f"Error checking {service} status: {e}") + result = run_in_container( + container_pid, + ["/usr/local/bin/supervisorctl", "status", service], + check=False + ) + if result.returncode != 0 or "RUNNING" not in result.stdout: all_running = False break if all_running: - LOG.info( - f"Required SONiC services are running: {', '.join(required_services)}" - ) + LOG.info(f"Required SONiC services are running: {', '.join(required_services)}") return True time.sleep(poll_interval) @@ -554,100 +293,96 @@ def wait_for_sonic_services(timeout: int = 60) -> bool: return False -def start_bgpd() -> bool: - """Start BGP daemon for routing protocols. - - bgpd is not started by default in SONiC-VS and must be started explicitly. - This follows the same approach as containerlab's sonic-vs implementation. - The daemon reads its configuration from /etc/frr/frr.conf which is mounted - at container startup. 
- - zebra (the FRR routing manager) starts automatically with supervisord and - does not need to be started manually. +def configure_management_interface(config_db: dict, container_pid: str) -> bool: + """Configure management interface from config_db.json.""" + try: + mgmt_iface_config = config_db.get("MGMT_INTERFACE", {}) + if not mgmt_iface_config: + LOG.warning("No MGMT_INTERFACE configuration found in config_db.json") + return True - :returns: True if bgpd started successfully, False otherwise - """ - LOG.info("Starting BGP daemon...") + for key, value in mgmt_iface_config.items(): + if "|" not in key: + continue - result = run_command( - ["podman", "exec", "sonic", "supervisorctl", "start", "bgpd"], - check=False, - ) + iface, ip_cidr = key.split("|", 1) + gateway = value.get("gwaddr") - if result.returncode != 0: - LOG.error(f"Failed to start bgpd (exit code {result.returncode})") - if result.stderr: - LOG.error(f"Error: {result.stderr}") - return False + LOG.info(f"Configuring management interface {iface}: {ip_cidr}") + if gateway: + LOG.info(f" Gateway: {gateway}") - LOG.info("BGP daemon started successfully") - return True + # Configure inside the container + run_in_container( + container_pid, + ["/usr/bin/ip", "addr", "add", ip_cidr, "dev", iface] + ) + run_in_container( + container_pid, + ["/usr/bin/ip", "link", "set", iface, "up"] + ) -def main(): - """Main entry point for SONiC switch host setup. + if gateway: + run_in_container( + container_pid, + ["/usr/bin/ip", "route", "add", "default", "via", gateway] + ) - Orchestrates the complete setup process: loads configuration, prepares - SONiC directory, starts container, and configures networking. 
+ return True + except Exception as e: + LOG.error(f"Error configuring management interface: {e}") + return False - :returns: Exit code (0 for success, 1 for failure) - """ - LOG.info("Starting SONiC switch host setup...") - try: +def main(): + """Main entry point.""" + if len(sys.argv) > 1 and sys.argv[1] == "setup": + # Setup mode: Create namespace and move interfaces + LOG.info("Running in SETUP mode") config = SonicConfig() config.load_from_file() config.log_config() - except (FileNotFoundError, ValueError) as e: - LOG.error(f"Failed to load configuration: {e}") - return 1 - try: - prepare_sonic_directory(config) - except (FileNotFoundError, RuntimeError, OSError) as e: - LOG.error(f"Failed to prepare SONiC directory: {e}") - return 1 - - if not start_sonic_container(config): - LOG.error("Failed to start SONiC container") - return 1 - - if not setup_networking(config): - LOG.error("Failed to setup networking") - return 1 - - # Wait for critical SONiC services to be ready - if not wait_for_sonic_services(timeout=60): - LOG.error("Critical SONiC services (redis-server, orchagent) did not start") - LOG.error("Cannot configure management interface without these services") - return 1 - - # Read config_db.json - config_db = read_config_db() - if config_db is None: - LOG.error("Failed to read config_db.json") - return 1 - - # Configure management interface from config_db.json - if not configure_management_interface(config_db): - LOG.error("Failed to configure management interface") - return 1 - - # Start BGP daemon (not started by default in SONiC-VS) - if not start_bgpd(): - LOG.error("Failed to start BGP daemon") - return 1 - - LOG.info("SONiC switch host setup complete") - LOG.info("Container status:") - # Show container status (check=False so no exception raised) - result = run_command(["podman", "ps", "--filter", "name=sonic"], check=False) - if result.returncode == 0 and result.stdout: - for line in result.stdout.strip().split("\n"): - LOG.info(line) - - 
LOG.info("To access the switch CLI, run: podman exec -it sonic bash") - return 0 + if not prepare_config_files(): + return 1 + + if not setup_network_namespace(config): + return 1 + + return 0 + + else: + # Config mode: Wait for services and configure management + LOG.info("Running in CONFIG mode") + LOG.info("=" * 60) + + if not wait_for_sonic_services(timeout=120): + LOG.error("SONiC services did not start") + return 1 + + # Get container PID + result = run_command(["machinectl", "show", "sonic", "-p", "Leader", "--value"], check=False) + if result.returncode != 0: + LOG.error("Failed to get container PID") + return 1 + container_pid = result.stdout.strip() + + # Read config_db.json + try: + with open(SONIC_CONFIG_DB, "r") as f: + config_db = json.load(f) + except Exception as e: + LOG.error(f"Failed to read config_db.json: {e}") + return 1 + + if not configure_management_interface(config_db, container_pid): + LOG.error("Failed to configure management interface") + return 1 + + LOG.info("SONiC configuration complete") + LOG.info(f"To access the switch, run: nsenter -t {container_pid} -m -u -i -n -p /bin/bash") + return 0 if __name__ == "__main__": diff --git a/images/dib/elements/hotstack-sonic-vs/static/var/lib/sonic/Containerfile b/images/dib/elements/hotstack-sonic-vs/static/var/lib/sonic/Containerfile deleted file mode 100644 index 6efde036..00000000 --- a/images/dib/elements/hotstack-sonic-vs/static/var/lib/sonic/Containerfile +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright Red Hat, Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the -# License for the specific language governing permissions and limitations -# under the License. - -# Customize the base SONiC-VS image with SSH access and admin user -# This Containerfile is used to build a custom image from the upstream SONiC-VS base image - -ARG BASE_IMAGE=localhost/docker-sonic-vs:latest -FROM ${BASE_IMAGE} - -# Install sudo, create admin user, and configure SSH in a single layer -RUN apt-get update && \ - apt-get install -y sudo && \ - rm -rf /var/lib/apt/lists/* && \ - useradd -m -s /bin/bash -G sudo,redis,frrvty admin && \ - echo "admin ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/admin && \ - chmod 0440 /etc/sudoers.d/admin && \ - mkdir -p /home/admin/.ssh && \ - chown admin:admin /home/admin/.ssh && \ - chmod 700 /home/admin/.ssh && \ - ssh-keygen -A && \ - mkdir -p /run/sshd - -# Add supervisord configuration for sshd -COPY sshd.conf /etc/supervisor/conf.d/sshd.conf - -# Metadata -LABEL maintainer="hotstack" -LABEL description="SONiC-VS with SSH access and admin user configured" diff --git a/images/dib/elements/hotstack-sonic-vs/static/var/lib/sonic/README.md b/images/dib/elements/hotstack-sonic-vs/static/var/lib/sonic/README.md deleted file mode 100644 index bc34ec39..00000000 --- a/images/dib/elements/hotstack-sonic-vs/static/var/lib/sonic/README.md +++ /dev/null @@ -1,53 +0,0 @@ -# SONiC Custom Image Build - -This directory contains files for building a custom SONiC-VS container image with SSH access and admin user pre-configured. - -## Files - -- **Containerfile**: Builds the custom image from the base SONiC-VS image -- **sshd.conf**: Supervisord configuration for the SSH daemon - -## What the Custom Image Includes - -The custom image (`localhost/docker-sonic-vs:hotstack`) is built on top of the upstream SONiC-VS base image and adds: - -1. **sudo package** - Required for SONiC CLI commands -2. **admin user** - Pre-created with proper groups (sudo, redis, frrvty) -3. 
**Passwordless sudo** - Admin user can run sudo commands without password -4. **SSH host keys** - Pre-generated for SSH access -5. **SSH daemon** - Configured in supervisord and starts automatically -6. **.ssh directory** - Pre-created for admin user (authorized_keys mounted at runtime) - -## Build Process - -The custom image is built automatically by the `sonic-import.service` systemd service on first boot: - -1. Base SONiC-VS image is loaded from `/var/lib/sonic/sonic-image.tar.gz` -2. Custom image is built using the Containerfile in this directory -3. Result is tagged as `localhost/docker-sonic-vs:hotstack` - -## SSH Access - -SSH access is enabled through: - -1. **Image build time**: Admin user, sudo, SSH daemon, and host keys are configured -2. **Container runtime**: Host's `/root/.ssh/authorized_keys` is mounted into the container - -This allows SSH access using: `ssh admin@` - -## Admin User Permissions - -The admin user has the following capabilities: - -- **sudo access**: Can run any command with sudo (passwordless) -- **SONiC CLI**: Can run `show` and `config` commands -- **FRR CLI**: Can run `vtysh` commands (member of frrvty group) -- **Redis access**: Can run `redis-cli` commands (member of redis group) - -## Customization - -To modify the custom image: - -1. Edit the `Containerfile` or `sshd.conf` in this directory -2. Remove `/var/lib/sonic/.image-imported` to force rebuild on next boot -3. Restart the system or run: `systemctl restart sonic-import.service sonic.service` diff --git a/scenarios/networking-lab/devstack-sonic-vxlan/README.md b/scenarios/networking-lab/devstack-sonic-vxlan/README.md new file mode 100644 index 00000000..887bda17 --- /dev/null +++ b/scenarios/networking-lab/devstack-sonic-vxlan/README.md @@ -0,0 +1,46 @@ +# Devstack with SONiC VXLAN Spine-and-Leaf + +Spine-and-leaf topology with 4 SONiC switches, 1 Devstack node, 2 Ironic nodes, and 1 controller. 
+
+## Topology
+
+![Topology Diagram](topology-diagram.svg)
+
+## Networks
+
+### Management (`192.168.32.0/24`)
+- Controller: `192.168.32.254`
+- Spine01 (host): `192.168.32.11`, (switch): `192.168.32.111`
+- Spine02 (host): `192.168.32.12`, (switch): `192.168.32.112`
+- Leaf01 (host): `192.168.32.13`, (switch): `192.168.32.113`
+- Leaf02 (host): `192.168.32.14`, (switch): `192.168.32.114`
+- Devstack: `192.168.32.20`
+
+### Inter-Switch Links (`10.1.1.0/24`)
+- Spine01 ↔ Spine02: `10.1.1.0/30`
+- Leaf01 ↔ Spine01: `10.1.1.4/30`
+- Leaf01 ↔ Spine02: `10.1.1.8/30`
+- Leaf02 ↔ Spine01: `10.1.1.12/30`
+- Leaf02 ↔ Spine02: `10.1.1.16/30`
+
+### Loopback/VTEP Addresses
+- Spine01: `10.255.255.1/32`
+- Spine02: `10.255.255.2/32`
+- Leaf01: `10.255.255.3/32` (VTEP)
+- Leaf02: `10.255.255.4/32` (VTEP)
+
+### BGP EVPN
+- AS 65001 iBGP
+- Spines: Route reflectors
+- Leafs: Route reflector clients, VTEP (source Loopback0)
+- ML2 dynamically manages VLANs/VNIs
+
+## Deployment
+
+```bash
+ansible-playbook -e @scenarios/networking-lab/devstack-sonic-vxlan/bootstrap_vars.yml -e os_cloud=<cloud> bootstrap_devstack.yml
+```
+
+## Troubleshooting
+
+See [TROUBLESHOOTING.md](TROUBLESHOOTING.md).
diff --git a/scenarios/networking-lab/devstack-sonic-vxlan/TROUBLESHOOTING.md b/scenarios/networking-lab/devstack-sonic-vxlan/TROUBLESHOOTING.md
new file mode 100644
index 00000000..a43e7bbe
--- /dev/null
+++ b/scenarios/networking-lab/devstack-sonic-vxlan/TROUBLESHOOTING.md
@@ -0,0 +1,170 @@
+# Troubleshooting
+
+## Important Notes on SONiC CLI
+
+SONiC uses a combination of standard Linux commands and FRR's vtysh for configuration and monitoring:
+
+- **Standard Linux commands** work directly in the container: `ip addr`, `ip route`, `ip link`, `ping`, etc.
+- **FRR commands** require entering vtysh first: `podman exec -it sonic vtysh`, then use `show ip ospf`, `show bgp summary`, etc.
+- **SONiC CLI commands** (`show interfaces status`, etc.)
may not work in all SONiC versions or require specific setup +- When in doubt, use standard Linux commands (`ip addr show`, `ip route show`) instead of SONiC-specific CLI + +## Switches Not Booting +- Check OpenStack console logs for the switch instances +- Verify the `hotstack-sonic` image is properly configured +- Check cloud-init logs: `sudo journalctl -u cloud-init` +- Verify the SONiC container is running: `sudo systemctl status sonic.service` + +## Switches Not Reachable +- Check that host interface has IP: `ip addr show eth0` (on host, not in container) +- Verify management interface inside container: `podman exec -it sonic ip addr show eth0` +- Check all interfaces in container: `podman exec -it sonic ip addr show` +- Verify config_db.json has correct MGMT_INTERFACE: `cat /var/lib/sonic/config_db.json` +- Check DNS resolution from controller: `dig spine01.stack.lab @192.168.32.254` +- Restart SONiC service if config changed: `sudo systemctl restart sonic.service` + +### Common Issue: Management Interface Not Configured + +**Symptom:** `eth0` inside the SONiC container has no IP address (only link-local IPv6), making the switch unreachable at its management IP (e.g., `192.168.32.113` for leaf01). + +**Root Cause:** SONiC-VS containers use supervisord instead of systemd. The `hostcfgd` daemon (responsible for applying MGMT_INTERFACE configuration) requires: +- Full systemd (not available in containers) +- sudo command (not needed in containers running as root) +- System services that don't exist in SONiC-VS + +All other config_db.json settings (INTERFACE, PORT, VLAN, VXLAN) are properly applied by their respective daemons (intfmgrd, portmgrd, vlanmgrd). Only MGMT_INTERFACE requires manual configuration in SONiC-VS containers. + +**Note:** The host interface (e.g., `192.168.32.13` for leaf01-host) will work fine - this issue only affects the switch management IP inside the container. 
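
The workaround logic described above — reading `MGMT_INTERFACE` from config_db.json and translating it into plain `ip` commands — can be sketched as follows. This is a minimal illustration, not the actual `start-sonic` code; `mgmt_ip_commands` is a hypothetical helper, and it assumes the `"<iface>|<ip/prefix>": {"gwaddr": ...}` key format shown elsewhere in this document:

```python
def mgmt_ip_commands(config_db: dict) -> list:
    """Derive the `ip` commands that would apply MGMT_INTERFACE manually.

    Hypothetical sketch: parses keys like "eth0|192.168.32.113/24" and
    returns argv lists suitable for running inside the container.
    """
    cmds = []
    for key, value in config_db.get("MGMT_INTERFACE", {}).items():
        if "|" not in key:
            continue
        iface, ip_cidr = key.split("|", 1)
        cmds.append(["ip", "addr", "add", ip_cidr, "dev", iface])
        gw = value.get("gwaddr")
        # "0.0.0.0" means "no gateway" in the default config
        if gw and gw != "0.0.0.0":
            cmds.append(["ip", "route", "add", "default", "via", gw])
    return cmds


# Example with the leaf01 values used in this document:
config_db = {"MGMT_INTERFACE": {"eth0|192.168.32.113/24": {"gwaddr": "192.168.32.1"}}}
for cmd in mgmt_ip_commands(config_db):
    print(" ".join(cmd))
```

In the real setup these argv lists would be executed inside the container (e.g. via `podman exec sonic ...`) rather than printed.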
+ +**Diagnosis:** +- Check SONiC logs: `sudo podman logs sonic 2>&1 | tail -50` +- Verify config_db.json exists: `ls -l /var/lib/sonic/config_db.json` +- Check if database service is running: `podman exec -it sonic supervisorctl status` +- View SONiC configuration database: `podman exec -it sonic redis-cli -n 4 hgetall "MGMT_INTERFACE|eth0|192.168.32.113/24"` +- Check for config errors in syslog: `podman exec -it sonic cat /var/log/syslog | grep -i error` + +**Resolution:** +1. Verify the config_db.json has the correct MGMT_INTERFACE entry: + ```bash + cat /var/lib/sonic/config_db.json | grep -A 5 MGMT_INTERFACE + ``` + Should show: `"eth0|192.168.32.XXX/24": { "gwaddr": "192.168.32.1" }` + +2. Check if the interface mapping is correct in the container: + ```bash + # On host: eth1 should have MAC matching switch-mgmt-port + ip link show eth1 + # In container: eth0 should have the same MAC + podman exec -it sonic ip link show eth0 + ``` + +3. Verify configuration is loaded into Redis: + ```bash + # Check if config_db.json was loaded into Redis ConfigDB (database 4) + podman exec -it sonic redis-cli -n 4 KEYS "*MGMT*" + + # Should show: MGMT_INTERFACE|eth0|, MGMT_PORT|eth0, MGMT_VRF_CONFIG|vrf_global + + # Check the actual values + podman exec -it sonic redis-cli -n 4 HGETALL "MGMT_INTERFACE|eth0|192.168.32.113/24" + # Should show: gwaddr and gateway IP + ``` + +4. Understand why hostcfgd doesn't run: + ```bash + # hostcfgd exists but can't run in containers + podman exec -it sonic which hostcfgd + # Returns: /usr/local/bin/hostcfgd + + # It's not in supervisord (uses systemd instead) + podman exec -it sonic supervisorctl status | grep hostcfgd + # Returns: (nothing - not configured) + + # Trying to run it manually fails due to missing sudo/systemd + podman exec -it sonic /usr/local/bin/hostcfgd + # Fails with: FileNotFoundError: [Errno 2] No such file or directory: 'sudo' + ``` + +5. 
Manually configure the IP as a workaround: + ```bash + # For leaf01 (adjust IP for other switches) + podman exec -it sonic ip addr add 192.168.32.113/24 dev eth0 + podman exec -it sonic ip route add default via 192.168.32.1 + ``` + +6. If the issue persists, check container logs and restart: + ```bash + sudo podman logs sonic 2>&1 | tail -100 + sudo systemctl restart sonic.service + ``` + +## OSPF Not Working +- Check OSPF status: `vtysh -c "show ip ospf"` +- Check OSPF neighbors: `vtysh -c "show ip ospf neighbor"` +- Check OSPF interfaces: `vtysh -c "show ip ospf interface"` +- Or access FRR shell interactively: `vtysh` (then run show commands) +- Check interface IPs: `ip addr show` +- Check routing table: `ip route show` +- Verify FRR is running: `sudo supervisorctl status | grep -E 'zebra|ospfd'` + +## BGP EVPN Not Working +- Check BGP status: `vtysh -c "show bgp summary"` +- Check BGP EVPN neighbors: `vtysh -c "show bgp l2vpn evpn summary"` +- Check BGP routes: `vtysh -c "show bgp l2vpn evpn route"` +- Check BGP configuration: `vtysh -c "show running-config"` +- Or access FRR shell interactively: `vtysh` (then run show commands) +- Exit vtysh and verify loopback interface: `podman exec -it sonic ip addr show lo` +- Test loopback reachability: `podman exec -it sonic ping -c 3 10.255.255.1` +- Check FRR logs: `podman exec -it sonic cat /var/log/frr/bgpd.log` + +## Devstack Deployment Issues +- Check network connectivity on trunk0: `ip link show trunk0` +- Verify trunk0 is added to br-ex: `sudo ovs-vsctl show` +- Review devstack logs: `/opt/stack/logs/stack.sh.log` +- Check neutron-server logs: `sudo journalctl -u devstack@q-svc` + +## ML2 Not Configuring Switches +- Verify networking-generic-switch credentials in `/etc/neutron/plugins/ml2/ml2_conf_genericswitch.ini` +- Check neutron-server can reach switches: `ping 192.168.32.13` from devstack +- Review neutron-server logs for genericswitch errors: `sudo journalctl -u devstack@q-svc | grep genericswitch` +- Test 
SSH connectivity manually: `ssh admin@192.168.32.13` from devstack
+
+## Container-Specific Issues
+- Check SONiC container status: `sudo podman ps`
+- View container logs: `sudo podman logs sonic`
+- Restart SONiC service: `sudo systemctl restart sonic.service`
+- Verify SONiC image is loaded: `sudo podman images | grep sonic`
+- Access SONiC CLI: `sudo podman exec -it sonic bash`
+
+## Understanding the Switch Architecture
+
+Each SONiC switch instance has TWO network interfaces and TWO IP addresses:
+
+1. **Host Interface (eth0 on host)**
+   - Example: leaf01-host at `192.168.32.13`
+   - This is the VM's management interface
+   - Used for SSH access to the host OS
+   - Always configured by cloud-init/DHCP
+
+2. **Switch Management Interface (eth1 on host → eth0 in container)**
+   - Example: leaf01 switch at `192.168.32.113`
+   - This is the SONiC container's management interface
+   - Used for switch management (SSH, NETCONF, etc.)
+   - **Configured from config_db.json by the start-sonic script**
+
+### Why Manual Configuration is Necessary
+
+**Hardware SONiC vs SONiC-VS Containers:**
+
+| Component | Hardware SONiC | SONiC-VS Container | Impact |
+|-----------|---------------|-------------------|---------|
+| Init system | systemd | supervisord | hostcfgd can't run |
+| hostcfgd daemon | ✓ Runs | ✗ Not configured | MGMT_INTERFACE not applied |
+| sudo command | ✓ Available | ✗ Not needed | hostcfgd fails to start |
+| Other daemons | ✓ Running | ✓ Running | Other config works fine |
+
+The `start-sonic` script reads MGMT_INTERFACE from config_db.json and applies it using standard Linux `ip` commands.
This is the standard approach for SONiC-VS containers: +- Containerlab uses Docker's IPAM to configure eth0 +- We use `--network none` and move host interfaces, so we configure it from config_db.json +- All other configuration (INTERFACE, PORT, VLAN, VXLAN) works through existing daemons diff --git a/scenarios/networking-lab/devstack-sonic-vxlan/automation-vars.yml b/scenarios/networking-lab/devstack-sonic-vxlan/automation-vars.yml new file mode 100644 index 00000000..042a098e --- /dev/null +++ b/scenarios/networking-lab/devstack-sonic-vxlan/automation-vars.yml @@ -0,0 +1,117 @@ +--- +# Networking lab automation stages + +stages: + - name: Configure provisioning network route + documentation: >- + Configures a static route on the devstack node to enable communication + between the ironic-conductor and the provisioning network (where + ironic-python-agent runs during node deployment/cleaning). This retrieves + the router gateway IP from Neutron and adds a route via that gateway. + The provisioning network subnet is configured in local.conf.j2 via + IRONIC_PROVISION_SUBNET_PREFIX (default: 10.0.5.0/24). 
+ shell: | + set -xe -o pipefail + + IRONIC_PROVISION_SUBNET_PREFIX="10.0.5.0/24" + + EXTERNAL_GW_INFO=$(openstack --os-cloud devstack-admin router show router1 -c external_gateway_info -f json) + + ROUTER_GW_IP=$(echo "$EXTERNAL_GW_INFO" | python3 -c ' + import sys, json, ipaddress + data = json.load(sys.stdin) + for ip_info in data["external_gateway_info"]["external_fixed_ips"]: + addr = ipaddress.ip_address(ip_info["ip_address"]) + if addr.version == 4: + print(ip_info["ip_address"]) + break + ') + + if [ -z "$ROUTER_GW_IP" ]; then + echo "ERROR: Could not determine router gateway IP" + exit 1 + fi + + ssh -o StrictHostKeyChecking=no stack@devstack.stack.lab " + ROUTES=\$(ip -j r) + ROUTE_EXISTS=\$(echo \"\$ROUTES\" | python3 -c ' + import sys, json + routes = json.load(sys.stdin) + target_dst = \"$IRONIC_PROVISION_SUBNET_PREFIX\" + target_gw = \"$ROUTER_GW_IP\" + for route in routes: + if route.get(\"dst\") == target_dst and route.get(\"gateway\") == target_gw: + print(\"exists\") + break + ') + if [ -z \"\$ROUTE_EXISTS\" ]; then + echo \"Adding route: $IRONIC_PROVISION_SUBNET_PREFIX via $ROUTER_GW_IP\" + sudo ip route add $IRONIC_PROVISION_SUBNET_PREFIX via $ROUTER_GW_IP + else + echo \"Route already exists: $IRONIC_PROVISION_SUBNET_PREFIX via $ROUTER_GW_IP\" + fi + " + + - name: Enroll nodes in devstack ironic + documentation: >- + Registers physical baremetal nodes with the Ironic service in the DevStack + deployment using the node definitions from ironic_nodes.yaml. This creates + Ironic node records with BMC access credentials, hardware profiles, and port + configurations for networking-generic-switch integration. 
+ shell: | + set -xe -o pipefail + + NODES_FILE=/home/zuul/data/ironic_nodes.yaml + + # Enroll the nodes + openstack --os-cloud devstack-admin baremetal create "$NODES_FILE" + + echo "Nodes enrolled successfully" + openstack --os-cloud devstack-admin baremetal node list + + - name: Wait for ironic nodes to reach enroll state + documentation: >- + Monitors node state transition to 'enroll' status, indicating that Ironic + has successfully registered the nodes and validated basic BMC connectivity. + This is the first state in the baremetal provisioning lifecycle. + shell: | + set -xe -o pipefail + + counter=0 + max_retries=60 + sleep_interval=5 + + echo "Waiting for all nodes to reach 'enroll' state..." + + until ! openstack --os-cloud devstack-admin baremetal node list -f value -c "Provisioning State" | grep -v "enroll"; do + ((counter++)) + if (( counter > max_retries )); then + echo "ERROR: Timeout waiting for nodes to reach enroll state" + openstack --os-cloud devstack-admin baremetal node list + exit 1 + fi + echo "Attempt $counter/$max_retries - waiting ${sleep_interval}s..." + sleep ${sleep_interval} + done + + echo "All nodes successfully reached enroll state" + openstack --os-cloud devstack-admin baremetal node list + + - name: Manage nodes + documentation: >- + Transitions nodes from 'enroll' to 'manageable' state. This validates + basic hardware connectivity and prepares nodes for further operations. 
+ shell: | + set -xe -o pipefail + + # Get list of node UUIDs + node_uuids=$(openstack --os-cloud devstack-admin baremetal node list -f value -c UUID) + + # Manage each node with --wait (300 second timeout); set -e aborts on failure + for uuid in $node_uuids; do + echo "Managing node: $uuid" + openstack --os-cloud devstack-admin baremetal node manage --wait 300 "$uuid" + done + + echo "All nodes successfully reached manageable state" + openstack --os-cloud devstack-admin baremetal node list diff --git a/scenarios/networking-lab/devstack-sonic-vxlan/bootstrap_vars.yml b/scenarios/networking-lab/devstack-sonic-vxlan/bootstrap_vars.yml new file mode 100644 index 00000000..79b7f279 --- /dev/null +++ b/scenarios/networking-lab/devstack-sonic-vxlan/bootstrap_vars.yml @@ -0,0 +1,50 @@ +--- +# Bootstrap configuration for networking lab scenario with SONiC + +# OpenStack cloud configuration +os_cloud: default +os_floating_network: public +os_router_external_network: public + +# Scenario configuration +scenario: devstack-sonic-vxlan +scenario_dir: scenarios/networking-lab +stack_template_path: "{{ scenario_dir }}/{{ scenario }}/heat_template.yaml" +automation_vars_file: "{{ scenario_dir }}/{{ scenario }}/automation-vars.yml" + +# DNS and NTP +ntp_servers: [] +dns_servers: + - 8.8.8.8 + - 8.8.4.4 + +# Pull secret for container images (if needed) +# pull_secret_file: ~/pull-secret.txt + +# Stack naming +stack_name: "hs-{{ scenario | replace('/', '-') }}-{{ zuul.build[:8] | default('no-zuul') }}" + +# Stack parameters +stack_parameters: + dns_servers: "{{ dns_servers }}" + ntp_servers: "{{ ntp_servers }}" + controller_ssh_pub_key: "{{ controller_ssh_pub_key | default('') }}" + dataplane_ssh_pub_key: "{{ dataplane_ssh_pub_key | default('') }}" + router_external_network: "{{ os_router_external_network | default('public') }}" + floating_ip_network: "{{ os_floating_network | default('public') }}" + controller_params: + image: hotstack-controller + flavor: hotstack.small + devstack_params: + image: 
ubuntu-noble-server + flavor: hotstack.xxlarge + switch_params: + image: hotstack-sonic + flavor: hotstack.small + ironic_params: + image: CentOS-Stream-GenericCloud-9 + cd_image: sushy-tools-blank-image + flavor: hotstack.medium + +# Controller role configuration +controller_install_openstack_client: true diff --git a/scenarios/networking-lab/devstack-sonic-vxlan/heat_template.yaml b/scenarios/networking-lab/devstack-sonic-vxlan/heat_template.yaml new file mode 100644 index 00000000..5c6ff887 --- /dev/null +++ b/scenarios/networking-lab/devstack-sonic-vxlan/heat_template.yaml @@ -0,0 +1,1248 @@ +--- +heat_template_version: rocky + +description: > + Heat template for networking lab with spine-and-leaf SONiC setup. + Includes 4 switches (spine01, spine02, leaf01, leaf02), 1 devstack node, and 2 ironic nodes. + +parameters: + dns_servers: + type: comma_delimited_list + default: + - 8.8.8.8 + - 8.8.4.4 + ntp_servers: + type: comma_delimited_list + default: [] + controller_ssh_pub_key: + type: string + dataplane_ssh_pub_key: + type: string + router_external_network: + type: string + default: public + floating_ip_network: + type: string + default: public + net_value_specs: + type: json + default: {} + + controller_params: + type: json + default: + image: hotstack-controller + flavor: hotstack.small + devstack_params: + type: json + default: + image: ubuntu-noble-server + flavor: hotstack.large + ironic_params: + type: json + default: + image: CentOS-Stream-GenericCloud-9 + cd_image: sushy-tools-blank-image + flavor: hotstack.medium + switch_params: + type: json + default: + image: hotstack-sonic + flavor: hotstack.small + cdrom_disk_bus: + type: string + description: > + Disk bus type for CDROM device. 'sata' may be required for older versions + of OpenStack. Heat patch https://review.opendev.org/c/openstack/heat/+/966688 + is needed for 'sata' support. 
+ default: scsi + constraints: + - allowed_values: + - sata + - scsi + +resources: + # + # Networks + # + machine-net: + type: OS::Neutron::Net + properties: + port_security_enabled: false + value_specs: {get_param: net_value_specs} + + # Spine switch interconnect + spine-link-net: + type: OS::Neutron::Net + properties: + port_security_enabled: false + value_specs: {get_param: net_value_specs} + + # Leaf to spine links + leaf01-spine01-net: + type: OS::Neutron::Net + properties: + port_security_enabled: false + value_specs: {get_param: net_value_specs} + + leaf01-spine02-net: + type: OS::Neutron::Net + properties: + port_security_enabled: false + value_specs: {get_param: net_value_specs} + + leaf02-spine01-net: + type: OS::Neutron::Net + properties: + port_security_enabled: false + value_specs: {get_param: net_value_specs} + + leaf02-spine02-net: + type: OS::Neutron::Net + properties: + port_security_enabled: false + value_specs: {get_param: net_value_specs} + + # Simple bridge networks for server attachments + # These are just L2 connectivity - VLANs and configuration managed by ML2 + devstack-br-net: + type: OS::Neutron::Net + properties: + port_security_enabled: false + value_specs: {get_param: net_value_specs} + + ironic0-br-net: + type: OS::Neutron::Net + properties: + port_security_enabled: false + value_specs: {get_param: net_value_specs} + + ironic1-br-net: + type: OS::Neutron::Net + properties: + port_security_enabled: false + value_specs: {get_param: net_value_specs} + + # Trunk network for leaf01 switch + leaf01-trunk-net: + type: OS::Neutron::Net + properties: + port_security_enabled: false + value_specs: {get_param: net_value_specs} + + # Shared VLAN networks for physical network connectivity + public-vlan100: + type: OS::Neutron::Net + properties: + port_security_enabled: false + value_specs: {get_param: net_value_specs} + + tenant-vlan103: + type: OS::Neutron::Net + properties: + port_security_enabled: false + value_specs: {get_param: 
net_value_specs} + + tenant-vlan104: + type: OS::Neutron::Net + properties: + port_security_enabled: false + value_specs: {get_param: net_value_specs} + + tenant-vlan105: + type: OS::Neutron::Net + properties: + port_security_enabled: false + value_specs: {get_param: net_value_specs} + + # + # Subnets + # + machine-subnet: + type: OS::Neutron::Subnet + properties: + network: {get_resource: machine-net} + ip_version: 4 + cidr: 192.168.32.0/24 + enable_dhcp: true + dns_nameservers: + - 192.168.32.254 + + spine-link-subnet: + type: OS::Neutron::Subnet + properties: + network: {get_resource: spine-link-net} + ip_version: 4 + cidr: 10.1.1.0/30 + enable_dhcp: false + gateway_ip: null + + leaf01-spine01-subnet: + type: OS::Neutron::Subnet + properties: + network: {get_resource: leaf01-spine01-net} + ip_version: 4 + cidr: 10.1.1.4/30 + enable_dhcp: false + gateway_ip: null + + leaf01-spine02-subnet: + type: OS::Neutron::Subnet + properties: + network: {get_resource: leaf01-spine02-net} + ip_version: 4 + cidr: 10.1.1.8/30 + enable_dhcp: false + gateway_ip: null + + leaf02-spine01-subnet: + type: OS::Neutron::Subnet + properties: + network: {get_resource: leaf02-spine01-net} + ip_version: 4 + cidr: 10.1.1.12/30 + enable_dhcp: false + gateway_ip: null + + leaf02-spine02-subnet: + type: OS::Neutron::Subnet + properties: + network: {get_resource: leaf02-spine02-net} + ip_version: 4 + cidr: 10.1.1.16/30 + enable_dhcp: false + gateway_ip: null + + devstack-br-subnet: + type: OS::Neutron::Subnet + properties: + network: {get_resource: devstack-br-net} + ip_version: 4 + cidr: 172.20.10.0/29 + enable_dhcp: false + gateway_ip: null + + ironic0-br-subnet: + type: OS::Neutron::Subnet + properties: + network: {get_resource: ironic0-br-net} + ip_version: 4 + cidr: 172.20.11.0/29 + enable_dhcp: false + gateway_ip: null + + ironic1-br-subnet: + type: OS::Neutron::Subnet + properties: + network: {get_resource: ironic1-br-net} + ip_version: 4 + cidr: 172.20.12.0/29 + enable_dhcp: false + 
gateway_ip: null + + # Leaf01 trunk subnet + leaf01-trunk-subnet: + type: OS::Neutron::Subnet + properties: + network: {get_resource: leaf01-trunk-net} + ip_version: 4 + cidr: 172.20.20.0/24 + enable_dhcp: false + gateway_ip: null + + # Shared VLAN subnets + public-vlan100-subnet: + type: OS::Neutron::Subnet + properties: + network: {get_resource: public-vlan100} + ip_version: 4 + cidr: 172.20.0.0/24 + gateway_ip: 172.20.0.1 + enable_dhcp: false + + tenant-vlan103-subnet: + type: OS::Neutron::Subnet + properties: + network: {get_resource: tenant-vlan103} + ip_version: 4 + cidr: 172.20.3.0/24 + gateway_ip: null + enable_dhcp: false + + tenant-vlan104-subnet: + type: OS::Neutron::Subnet + properties: + network: {get_resource: tenant-vlan104} + ip_version: 4 + cidr: 172.20.4.0/24 + gateway_ip: null + enable_dhcp: false + + tenant-vlan105-subnet: + type: OS::Neutron::Subnet + properties: + network: {get_resource: tenant-vlan105} + ip_version: 4 + cidr: 172.20.5.0/24 + gateway_ip: null + enable_dhcp: false + + # + # Routers + # + router: + type: OS::Neutron::Router + properties: + admin_state_up: true + external_gateway_info: + network: {get_param: router_external_network} + + machine-net-router-interface: + type: OS::Neutron::RouterInterface + properties: + router: {get_resource: router} + subnet: {get_resource: machine-subnet} + + # + # Controller Instance + # + controller_users: + type: OS::Heat::CloudConfig + properties: + cloud_config: + users: + - default + - name: zuul + gecos: "Zuul user" + sudo: ALL=(ALL) NOPASSWD:ALL + ssh_authorized_keys: + - {get_param: controller_ssh_pub_key} + + controller-write-files: + type: OS::Heat::CloudConfig + properties: + cloud_config: + write_files: + - path: /etc/dnsmasq.conf + content: | + # dnsmasq service config + # Include all files in /etc/dnsmasq.d except RPM backup files + conf-dir=/etc/dnsmasq.d,.rpmnew,.rpmsave,.rpmorig + no-resolv + owner: root:dnsmasq + - path: /etc/dnsmasq.d/forwarders.conf + content: + str_replace: 
+ template: | + # DNS forwarders records + server=$dns1 + server=$dns2 + params: + $dns1: {get_param: [dns_servers, 0]} + $dns2: {get_param: [dns_servers, 1]} + owner: root:dnsmasq + - path: /etc/dnsmasq.d/host_records.conf + content: + str_replace: + template: | + # Host records + host-record=controller-0.stack.lab,$controller0 + host-record=spine01-host.stack.lab,$spine01_host + host-record=spine01.stack.lab,$spine01 + host-record=spine02-host.stack.lab,$spine02_host + host-record=spine02.stack.lab,$spine02 + host-record=leaf01-host.stack.lab,$leaf01_host + host-record=leaf01.stack.lab,$leaf01 + host-record=leaf02-host.stack.lab,$leaf02_host + host-record=leaf02.stack.lab,$leaf02 + host-record=devstack.stack.lab,$devstack + params: + $controller0: {get_attr: [controller-machine-port, fixed_ips, 0, ip_address]} + $spine01_host: {get_attr: [spine01-machine-port, fixed_ips, 0, ip_address]} + $spine01: {get_attr: [spine01-switch-mgmt-port, fixed_ips, 0, ip_address]} + $spine02_host: {get_attr: [spine02-machine-port, fixed_ips, 0, ip_address]} + $spine02: {get_attr: [spine02-switch-mgmt-port, fixed_ips, 0, ip_address]} + $leaf01_host: {get_attr: [leaf01-machine-port, fixed_ips, 0, ip_address]} + $leaf01: {get_attr: [leaf01-switch-mgmt-port, fixed_ips, 0, ip_address]} + $leaf02_host: {get_attr: [leaf02-machine-port, fixed_ips, 0, ip_address]} + $leaf02: {get_attr: [leaf02-switch-mgmt-port, fixed_ips, 0, ip_address]} + $devstack: {get_attr: [devstack-machine-port, fixed_ips, 0, ip_address]} + owner: root:dnsmasq + - path: /etc/resolv.conf + content: | + nameserver 127.0.0.1 + owner: root:root + - path: /etc/NetworkManager/conf.d/98-rc-manager.conf + content: | + [main] + rc-manager=unmanaged + owner: root:root + + controller-runcmd: + type: OS::Heat::CloudConfig + properties: + cloud_config: + runcmd: + - ['setenforce', 'permissive'] + - ['systemctl', 'enable', 'dnsmasq.service'] + - ['systemctl', 'start', 'dnsmasq.service'] + + controller-init: + type: 
OS::Heat::MultipartMime + properties: + parts: + - config: {get_resource: controller_users} + - config: {get_resource: controller-write-files} + - config: {get_resource: controller-runcmd} + + controller-machine-port: + type: OS::Neutron::Port + properties: + network: {get_resource: machine-net} + mac_address: "fa:16:9e:81:f6:05" + fixed_ips: + - ip_address: 192.168.32.254 + + controller-floating-ip: + depends_on: machine-net-router-interface + type: OS::Neutron::FloatingIP + properties: + floating_network: {get_param: floating_ip_network} + port_id: {get_resource: controller-machine-port} + + controller: + type: OS::Nova::Server + properties: + image: {get_param: [controller_params, image]} + flavor: {get_param: [controller_params, flavor]} + networks: + - port: {get_resource: controller-machine-port} + user_data_format: RAW + user_data: {get_resource: controller-init} + + # + # Spine Switches + # + + # Spine01 Switch + spine01-init: + type: OS::Heat::CloudConfig + properties: + cloud_config: + hostname: spine01 + fqdn: spine01.stack.lab + users: + - default + - name: zuul + gecos: "Zuul user" + sudo: ALL=(ALL) NOPASSWD:ALL + ssh_authorized_keys: + - {get_param: controller_ssh_pub_key} + - {get_param: dataplane_ssh_pub_key} + write_files: + - path: /etc/hotstack-sonic/config + content: | + MGMT_INTERFACE=eth0 + SWITCH_INTERFACE_START=eth1 + SWITCH_INTERFACE_COUNT=4 + SWITCH_HOSTNAME=spine01 + SONIC_IMAGE=localhost/docker-sonic-vs:latest + owner: root:root + permissions: '0644' + - path: /etc/hotstack-sonic/config_db.json + content: {get_file: spine01-config_db.json} + owner: root:root + permissions: '0644' + - path: /etc/hotstack-sonic/frr.conf + content: {get_file: spine01-frr.conf} + owner: root:root + permissions: '0644' + - path: /etc/hotstack-sonic/authorized_keys + content: + str_replace: + template: | + $CONTROLLER_KEY + $DATAPLANE_KEY + params: + $CONTROLLER_KEY: {get_param: controller_ssh_pub_key} + $DATAPLANE_KEY: {get_param: dataplane_ssh_pub_key} + 
owner: root:root + permissions: '0644' + runcmd: + - systemctl start sonic.service + + spine01-machine-port: + type: OS::Neutron::Port + properties: + network: {get_resource: machine-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:01:01" + fixed_ips: + - ip_address: 192.168.32.11 + + spine01-switch-mgmt-port: + type: OS::Neutron::Port + properties: + network: {get_resource: machine-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:01:10" + fixed_ips: + - ip_address: 192.168.32.111 + + spine01-spine-link-port: + type: OS::Neutron::Port + properties: + network: {get_resource: spine-link-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:01:02" + + spine01-leaf01-port: + type: OS::Neutron::Port + properties: + network: {get_resource: leaf01-spine01-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:01:04" + + spine01-leaf02-port: + type: OS::Neutron::Port + properties: + network: {get_resource: leaf02-spine01-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:01:05" + + spine01: + type: OS::Nova::Server + properties: + image: {get_param: [switch_params, image]} + flavor: {get_param: [switch_params, flavor]} + config_drive: false + diskConfig: MANUAL + networks: + - port: {get_resource: spine01-machine-port} + - port: {get_resource: spine01-switch-mgmt-port} + - port: {get_resource: spine01-spine-link-port} + - port: {get_resource: spine01-leaf01-port} + - port: {get_resource: spine01-leaf02-port} + user_data_format: RAW + user_data: {get_resource: spine01-init} + + # Spine02 Switch + spine02-init: + type: OS::Heat::CloudConfig + properties: + cloud_config: + hostname: spine02 + fqdn: spine02.stack.lab + users: + - default + - name: zuul + gecos: "Zuul user" + sudo: ALL=(ALL) NOPASSWD:ALL + ssh_authorized_keys: + - {get_param: controller_ssh_pub_key} + - {get_param: dataplane_ssh_pub_key} + write_files: + - path: /etc/hotstack-sonic/config + content: | + MGMT_INTERFACE=eth0 + 
SWITCH_INTERFACE_START=eth1 + SWITCH_INTERFACE_COUNT=4 + SWITCH_HOSTNAME=spine02 + SONIC_IMAGE=localhost/docker-sonic-vs:latest + owner: root:root + permissions: '0644' + - path: /etc/hotstack-sonic/config_db.json + content: {get_file: spine02-config_db.json} + owner: root:root + permissions: '0644' + - path: /etc/hotstack-sonic/frr.conf + content: {get_file: spine02-frr.conf} + owner: root:root + permissions: '0644' + - path: /etc/hotstack-sonic/authorized_keys + content: + str_replace: + template: | + $CONTROLLER_KEY + $DATAPLANE_KEY + params: + $CONTROLLER_KEY: {get_param: controller_ssh_pub_key} + $DATAPLANE_KEY: {get_param: dataplane_ssh_pub_key} + owner: root:root + permissions: '0644' + runcmd: + - systemctl start sonic.service + + spine02-machine-port: + type: OS::Neutron::Port + properties: + network: {get_resource: machine-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:02:01" + fixed_ips: + - ip_address: 192.168.32.12 + + spine02-switch-mgmt-port: + type: OS::Neutron::Port + properties: + network: {get_resource: machine-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:02:10" + fixed_ips: + - ip_address: 192.168.32.112 + + spine02-spine-link-port: + type: OS::Neutron::Port + properties: + network: {get_resource: spine-link-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:02:02" + + spine02-leaf01-port: + type: OS::Neutron::Port + properties: + network: {get_resource: leaf01-spine02-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:02:04" + + spine02-leaf02-port: + type: OS::Neutron::Port + properties: + network: {get_resource: leaf02-spine02-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:02:05" + + spine02: + type: OS::Nova::Server + properties: + image: {get_param: [switch_params, image]} + flavor: {get_param: [switch_params, flavor]} + config_drive: false + diskConfig: MANUAL + networks: + - port: {get_resource: spine02-machine-port} + - port: {get_resource: 
spine02-switch-mgmt-port} + - port: {get_resource: spine02-spine-link-port} + - port: {get_resource: spine02-leaf01-port} + - port: {get_resource: spine02-leaf02-port} + user_data_format: RAW + user_data: {get_resource: spine02-init} + + # + # Leaf Switches + # + + # Leaf01 Switch + leaf01-init: + type: OS::Heat::CloudConfig + properties: + cloud_config: + hostname: leaf01 + fqdn: leaf01.stack.lab + users: + - default + - name: zuul + gecos: "Zuul user" + sudo: ALL=(ALL) NOPASSWD:ALL + ssh_authorized_keys: + - {get_param: controller_ssh_pub_key} + - {get_param: dataplane_ssh_pub_key} + write_files: + - path: /etc/hotstack-sonic/config + content: | + MGMT_INTERFACE=eth0 + SWITCH_INTERFACE_START=eth1 + SWITCH_INTERFACE_COUNT=5 + SWITCH_HOSTNAME=leaf01 + SONIC_IMAGE=localhost/docker-sonic-vs:latest + owner: root:root + permissions: '0644' + - path: /etc/hotstack-sonic/config_db.json + content: {get_file: leaf01-config_db.json} + owner: root:root + permissions: '0644' + - path: /etc/hotstack-sonic/frr.conf + content: {get_file: leaf01-frr.conf} + owner: root:root + permissions: '0644' + - path: /etc/hotstack-sonic/authorized_keys + content: + str_replace: + template: | + $CONTROLLER_KEY + $DATAPLANE_KEY + params: + $CONTROLLER_KEY: {get_param: controller_ssh_pub_key} + $DATAPLANE_KEY: {get_param: dataplane_ssh_pub_key} + owner: root:root + permissions: '0644' + runcmd: + - systemctl start sonic.service + + leaf01-machine-port: + type: OS::Neutron::Port + properties: + network: {get_resource: machine-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:03:01" + fixed_ips: + - ip_address: 192.168.32.13 + + leaf01-switch-mgmt-port: + type: OS::Neutron::Port + properties: + network: {get_resource: machine-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:03:10" + fixed_ips: + - ip_address: 192.168.32.113 + + leaf01-spine01-port: + type: OS::Neutron::Port + properties: + network: {get_resource: leaf01-spine01-net} + port_security_enabled: false + 
mac_address: "22:57:f8:dd:03:02" + + leaf01-spine02-port: + type: OS::Neutron::Port + properties: + network: {get_resource: leaf01-spine02-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:03:04" + + leaf01-devstack-br-port: + type: OS::Neutron::Port + properties: + network: {get_resource: devstack-br-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:03:06" + + leaf01-trunk-parent-port: + type: OS::Neutron::Port + properties: + network: {get_resource: leaf01-trunk-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:03:05" + + leaf01-trunk-public-vlan100-port: + type: OS::Neutron::Port + properties: + network: {get_resource: public-vlan100} + port_security_enabled: false + mac_address: "22:57:f8:dd:03:07" + + leaf01-trunk-tenant-vlan103-port: + type: OS::Neutron::Port + properties: + network: {get_resource: tenant-vlan103} + port_security_enabled: false + mac_address: "22:57:f8:dd:03:08" + + leaf01-trunk-tenant-vlan104-port: + type: OS::Neutron::Port + properties: + network: {get_resource: tenant-vlan104} + port_security_enabled: false + mac_address: "22:57:f8:dd:03:09" + + leaf01-trunk-tenant-vlan105-port: + type: OS::Neutron::Port + properties: + network: {get_resource: tenant-vlan105} + port_security_enabled: false + mac_address: "22:57:f8:dd:03:0a" + + leaf01-trunk: + type: OS::Neutron::Trunk + properties: + port: {get_resource: leaf01-trunk-parent-port} + sub_ports: + - port: {get_resource: leaf01-trunk-public-vlan100-port} + segmentation_id: 100 + segmentation_type: vlan + - port: {get_resource: leaf01-trunk-tenant-vlan103-port} + segmentation_id: 103 + segmentation_type: vlan + - port: {get_resource: leaf01-trunk-tenant-vlan104-port} + segmentation_id: 104 + segmentation_type: vlan + - port: {get_resource: leaf01-trunk-tenant-vlan105-port} + segmentation_id: 105 + segmentation_type: vlan + + leaf02-ironic0-br-port: + type: OS::Neutron::Port + properties: + network: {get_resource: ironic0-br-net} + port_security_enabled: 
false + mac_address: "22:57:f8:dd:04:05" + + leaf01: + type: OS::Nova::Server + depends_on: leaf01-trunk + properties: + image: {get_param: [switch_params, image]} + flavor: {get_param: [switch_params, flavor]} + config_drive: false + diskConfig: MANUAL + networks: + - port: {get_resource: leaf01-machine-port} + - port: {get_resource: leaf01-switch-mgmt-port} + - port: {get_resource: leaf01-spine01-port} + - port: {get_resource: leaf01-spine02-port} + - port: {get_attr: [leaf01-trunk, port_id]} + - port: {get_resource: leaf01-devstack-br-port} + user_data_format: RAW + user_data: {get_resource: leaf01-init} + + # Leaf02 Switch + leaf02-init: + type: OS::Heat::CloudConfig + properties: + cloud_config: + hostname: leaf02 + fqdn: leaf02.stack.lab + users: + - default + - name: zuul + gecos: "Zuul user" + sudo: ALL=(ALL) NOPASSWD:ALL + ssh_authorized_keys: + - {get_param: controller_ssh_pub_key} + - {get_param: dataplane_ssh_pub_key} + write_files: + - path: /etc/hotstack-sonic/config + content: | + MGMT_INTERFACE=eth0 + SWITCH_INTERFACE_START=eth1 + SWITCH_INTERFACE_COUNT=5 + SWITCH_HOSTNAME=leaf02 + SONIC_IMAGE=localhost/docker-sonic-vs:latest + owner: root:root + permissions: '0644' + - path: /etc/hotstack-sonic/config_db.json + content: {get_file: leaf02-config_db.json} + owner: root:root + permissions: '0644' + - path: /etc/hotstack-sonic/frr.conf + content: {get_file: leaf02-frr.conf} + owner: root:root + permissions: '0644' + - path: /etc/hotstack-sonic/authorized_keys + content: + str_replace: + template: | + $CONTROLLER_KEY + $DATAPLANE_KEY + params: + $CONTROLLER_KEY: {get_param: controller_ssh_pub_key} + $DATAPLANE_KEY: {get_param: dataplane_ssh_pub_key} + owner: root:root + permissions: '0644' + runcmd: + - systemctl start sonic.service + + leaf02-machine-port: + type: OS::Neutron::Port + properties: + network: {get_resource: machine-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:04:01" + fixed_ips: + - ip_address: 192.168.32.14 + + 
leaf02-switch-mgmt-port: + type: OS::Neutron::Port + properties: + network: {get_resource: machine-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:04:10" + fixed_ips: + - ip_address: 192.168.32.114 + + leaf02-spine01-port: + type: OS::Neutron::Port + properties: + network: {get_resource: leaf02-spine01-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:04:02" + + leaf02-spine02-port: + type: OS::Neutron::Port + properties: + network: {get_resource: leaf02-spine02-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:04:04" + + leaf02-ironic1-br-port: + type: OS::Neutron::Port + properties: + network: {get_resource: ironic1-br-net} + port_security_enabled: false + mac_address: "22:57:f8:dd:04:06" + + leaf02: + type: OS::Nova::Server + properties: + image: {get_param: [switch_params, image]} + flavor: {get_param: [switch_params, flavor]} + config_drive: false + diskConfig: MANUAL + networks: + - port: {get_resource: leaf02-machine-port} + - port: {get_resource: leaf02-switch-mgmt-port} + - port: {get_resource: leaf02-spine01-port} + - port: {get_resource: leaf02-spine02-port} + - port: {get_resource: leaf02-ironic0-br-port} + - port: {get_resource: leaf02-ironic1-br-port} + user_data_format: RAW + user_data: {get_resource: leaf02-init} + + # + # Devstack Instance + # + devstack_users: + type: OS::Heat::CloudConfig + properties: + cloud_config: + users: + - default + - name: stack + gecos: "Stack user" + sudo: ALL=(ALL) NOPASSWD:ALL + homedir: /opt/stack + shell: /bin/bash + ssh_authorized_keys: + - {get_param: controller_ssh_pub_key} + - {get_param: dataplane_ssh_pub_key} + + devstack-network-config: + type: OS::Heat::CloudConfig + properties: + cloud_config: + hostname: devstack + fqdn: devstack.stack.lab + + + devstack-write-files: + type: OS::Heat::CloudConfig + properties: + cloud_config: + write_files: + - path: /etc/hotstack/local.conf.j2 + content: + get_file: local.conf.j2 + owner: root:root + permissions: '0644' + - 
path: /etc/neutron/l2vni_network_nodes.yaml + content: + get_file: l2vni_network_nodes.yaml + owner: root:root + permissions: '0644' + + devstack-init: + type: OS::Heat::MultipartMime + properties: + parts: + - config: {get_resource: devstack_users} + - config: {get_resource: devstack-network-config} + - config: {get_resource: devstack-write-files} + + devstack-machine-port: + type: OS::Neutron::Port + properties: + network: {get_resource: machine-net} + port_security_enabled: false + mac_address: "fa:16:9e:81:f6:20" + fixed_ips: + - ip_address: 192.168.32.20 + + devstack-trunk-parent-port: + type: OS::Neutron::Port + properties: + network: {get_resource: devstack-br-net} + port_security_enabled: false + mac_address: "fa:16:9e:81:f6:21" + + devstack-public-vlan100-port: + type: OS::Neutron::Port + properties: + network: {get_resource: public-vlan100} + port_security_enabled: false + + devstack-tenant-vlan103-port: + type: OS::Neutron::Port + properties: + network: {get_resource: tenant-vlan103} + port_security_enabled: false + + devstack-tenant-vlan104-port: + type: OS::Neutron::Port + properties: + network: {get_resource: tenant-vlan104} + port_security_enabled: false + + devstack-tenant-vlan105-port: + type: OS::Neutron::Port + properties: + network: {get_resource: tenant-vlan105} + port_security_enabled: false + + devstack-trunk: + type: OS::Neutron::Trunk + properties: + port: {get_resource: devstack-trunk-parent-port} + sub_ports: + - port: {get_resource: devstack-public-vlan100-port} + segmentation_id: 100 + segmentation_type: vlan + - port: {get_resource: devstack-tenant-vlan103-port} + segmentation_id: 103 + segmentation_type: vlan + - port: {get_resource: devstack-tenant-vlan104-port} + segmentation_id: 104 + segmentation_type: vlan + - port: {get_resource: devstack-tenant-vlan105-port} + segmentation_id: 105 + segmentation_type: vlan + + devstack: + type: OS::Nova::Server + depends_on: devstack-trunk + properties: + image: {get_param: [devstack_params, 
image]} + flavor: {get_param: [devstack_params, flavor]} + networks: + - port: {get_resource: devstack-machine-port} + - port: {get_attr: [devstack-trunk, port_id]} + user_data_format: RAW + user_data: {get_resource: devstack-init} + + # + # Ironic Nodes + # + ironic0-port: + type: OS::Neutron::Port + properties: + network: {get_resource: ironic0-br-net} + port_security_enabled: false + + ironic0: + type: OS::Nova::Server + properties: + flavor: {get_param: [ironic_params, flavor]} + block_device_mapping_v2: + - device_type: disk + boot_index: 1 + image_id: {get_param: [ironic_params, image]} + volume_size: 40 + delete_on_termination: true + - device_type: cdrom + disk_bus: {get_param: cdrom_disk_bus} + boot_index: 0 + image_id: {get_param: [ironic_params, cd_image]} + volume_size: 5 + delete_on_termination: true + networks: + - port: {get_resource: ironic0-port} + + ironic1-port: + type: OS::Neutron::Port + properties: + network: {get_resource: ironic1-br-net} + port_security_enabled: false + + ironic1: + type: OS::Nova::Server + properties: + flavor: {get_param: [ironic_params, flavor]} + block_device_mapping_v2: + - device_type: disk + boot_index: 1 + image_id: {get_param: [ironic_params, image]} + volume_size: 40 + delete_on_termination: true + - device_type: cdrom + disk_bus: {get_param: cdrom_disk_bus} + boot_index: 0 + image_id: {get_param: [ironic_params, cd_image]} + volume_size: 5 + delete_on_termination: true + networks: + - port: {get_resource: ironic1-port} + +outputs: + controller_floating_ip: + description: Controller Floating IP + value: {get_attr: [controller-floating-ip, floating_ip_address]} + + controller_ansible_host: + description: > + Controller ansible host, this struct can be passed to the ansible.builtin.add_host module + value: + name: controller-0 + ansible_ssh_user: zuul + ansible_host: {get_attr: [controller-floating-ip, floating_ip_address]} + ansible_port: 22 + ansible_ssh_common_args: '-o StrictHostKeyChecking=no' + groups: 
controllers + + devstack_ansible_host: + description: > + Devstack ansible host, this struct can be passed to the ansible.builtin.add_host module. + Uses ProxyJump through the controller for SSH access. + value: + name: devstack + ansible_user: stack + ansible_host: {get_attr: [devstack-machine-port, fixed_ips, 0, ip_address]} + ansible_port: 22 + ansible_ssh_common_args: + str_replace: + template: '-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ProxyJump=zuul@$controller_ip' + params: + $controller_ip: {get_attr: [controller-floating-ip, floating_ip_address]} + ansible_ssh_private_key_file: '~/.ssh/id_rsa' + groups: devstack_nodes + + devstack_netplan_config: + description: > + Complete netplan configuration for devstack node to be written by Ansible + value: + network: + version: 2 + ethernets: + enp3s0: + match: + macaddress: "fa:16:9e:81:f6:20" + dhcp4: true + set-name: "enp3s0" + mtu: 1442 + trunk0: + match: + macaddress: "fa:16:9e:81:f6:21" + dhcp4: false + dhcp6: false + set-name: trunk0 + mtu: 1442 + + sushy_emulator_uuids: + description: UUIDs of instances to manage with sushy-tools - RedFish virtual BMC + value: + ironic0: {get_resource: ironic0} + ironic1: {get_resource: ironic1} + + sushy_tools_vmedia_type: + description: Virtual media implementation type for sushy-tools (rescue or volumeRebuild) + value: rescue + + ironic_nodes: + description: Ironic nodes YAML, used with openstack baremetal create to enroll nodes in Openstack Ironic + value: + nodes: + - name: ironic0 + driver: redfish + bios_interface: no-bios + boot_interface: redfish-virtual-media + network_interface: neutron + driver_info: + redfish_address: http://controller-0.stack.lab:8000 + redfish_system_id: + str_replace: + template: "/redfish/v1/Systems/$SYS_ID" + params: + $SYS_ID: {get_resource: ironic0} + redfish_username: admin + redfish_password: password + properties: + cpu_arch: x86_64 + cpus: 1 + memory_mb: 1024 + local_gb: 15 + capabilities: boot_mode:uefi + ports: 
+ - address: {get_attr: [ironic0-port, mac_address]} + physical_network: public + local_link_connection: + switch_id: "22:57:f8:dd:04:10" + switch_info: leaf02 + port_id: "Ethernet8" + - name: ironic1 + driver: redfish + bios_interface: no-bios + boot_interface: redfish-virtual-media + network_interface: neutron + driver_info: + redfish_address: http://controller-0.stack.lab:8000 + redfish_system_id: + str_replace: + template: "/redfish/v1/Systems/$SYS_ID" + params: + $SYS_ID: {get_resource: ironic1} + redfish_username: admin + redfish_password: password + properties: + cpu_arch: x86_64 + cpus: 1 + memory_mb: 1024 + local_gb: 15 + capabilities: boot_mode:uefi + ports: + - address: {get_attr: [ironic1-port, mac_address]} + physical_network: public + local_link_connection: + switch_id: "22:57:f8:dd:04:10" + switch_info: leaf02 + port_id: "Ethernet12" + + ansible_inventory: + description: Ansible inventory + value: + all: + children: + controllers: + vars: + switches: + vars: + devstack_nodes: + vars: + localhosts: + hosts: + localhost: + ansible_connection: local + controllers: + hosts: + controller0: + ansible_host: {get_attr: [controller-machine-port, fixed_ips, 0, ip_address]} + ansible_user: zuul + ansible_ssh_common_args: '-o StrictHostKeyChecking=no' + ansible_ssh_private_key_file: '~/.ssh/id_rsa' + switches: + hosts: + spine01: + ansible_host: {get_attr: [spine01-machine-port, fixed_ips, 0, ip_address]} + ansible_user: admin + ansible_ssh_common_args: '-o StrictHostKeyChecking=no' + ansible_ssh_private_key_file: '~/.ssh/id_rsa' + spine02: + ansible_host: {get_attr: [spine02-machine-port, fixed_ips, 0, ip_address]} + ansible_user: admin + ansible_ssh_common_args: '-o StrictHostKeyChecking=no' + ansible_ssh_private_key_file: '~/.ssh/id_rsa' + leaf01: + ansible_host: {get_attr: [leaf01-machine-port, fixed_ips, 0, ip_address]} + ansible_user: admin + ansible_ssh_common_args: '-o StrictHostKeyChecking=no' + ansible_ssh_private_key_file: '~/.ssh/id_rsa' + leaf02: + 
ansible_host: {get_attr: [leaf02-machine-port, fixed_ips, 0, ip_address]} + ansible_user: admin + ansible_ssh_common_args: '-o StrictHostKeyChecking=no' + ansible_ssh_private_key_file: '~/.ssh/id_rsa' + devstack_nodes: + hosts: + devstack: + ansible_host: {get_attr: [devstack-machine-port, fixed_ips, 0, ip_address]} + ansible_user: stack + ansible_ssh_common_args: '-o StrictHostKeyChecking=no' + ansible_ssh_private_key_file: '~/.ssh/id_rsa' diff --git a/scenarios/networking-lab/devstack-sonic-vxlan/l2vni_network_nodes.yaml b/scenarios/networking-lab/devstack-sonic-vxlan/l2vni_network_nodes.yaml new file mode 100644 index 00000000..78e2ee92 --- /dev/null +++ b/scenarios/networking-lab/devstack-sonic-vxlan/l2vni_network_nodes.yaml @@ -0,0 +1,19 @@ +--- +# L2VNI Network Nodes Configuration +# This file provides fallback local_link_connection information for network nodes +# when LLDP data is not available from OVN and Ironic. +# +# The system_id must match the OVN chassis system-id (from external-ids:system-id in OVS). 
+# For this devstack deployment, the system-id is the hostname: devstack + +network_nodes: + - hostname: "devstack" + trunks: + - physical_network: public + local_link_information: + - switch_id: "22:57:f8:dd:03:10" + switch_info: "leaf01" + port_id: "Ethernet8" + - switch_id: "22:57:f8:dd:03:10" + switch_info: "leaf01" + port_id: "Ethernet12" diff --git a/scenarios/networking-lab/devstack-sonic-vxlan/leaf01-config_db.json b/scenarios/networking-lab/devstack-sonic-vxlan/leaf01-config_db.json new file mode 100644 index 00000000..a8024956 --- /dev/null +++ b/scenarios/networking-lab/devstack-sonic-vxlan/leaf01-config_db.json @@ -0,0 +1,76 @@ +{ + "DEVICE_METADATA": { + "localhost": { + "hostname": "leaf01", + "type": "ToRRouter", + "bgp_asn": "65001", + "docker_routing_config_mode": "unified" + } + }, + "MGMT_INTERFACE": { + "eth0|192.168.32.113/24": { + "gwaddr": "192.168.32.1" + } + }, + "MGMT_PORT": { + "eth0": { + "alias": "eth0", + "admin_status": "up" + } + }, + "MGMT_VRF_CONFIG": { + "vrf_global": { + "mgmtVrfEnabled": "true" + } + }, + "LOOPBACK_INTERFACE": { + "Loopback0|10.255.255.3/32": {} + }, + "INTERFACE": { + "Ethernet0": { + "mtu": "1442" + }, + "Ethernet0|10.1.1.6/30": {}, + "Ethernet4": { + "mtu": "1442" + }, + "Ethernet4|10.1.1.10/30": {} + }, + "PORT": { + "Ethernet0": { + "alias": "fortyGigE0/0", + "lanes": "25,26,27,28", + "speed": "40000", + "admin_status": "up" + }, + "Ethernet4": { + "alias": "fortyGigE0/4", + "lanes": "29,30,31,32", + "speed": "40000", + "admin_status": "up" + }, + "Ethernet8": { + "alias": "fortyGigE0/8", + "lanes": "33,34,35,36", + "speed": "40000", + "admin_status": "up", + "mtu": "1442" + }, + "Ethernet12": { + "alias": "fortyGigE0/12", + "lanes": "37,38,39,40", + "speed": "40000", + "admin_status": "up", + "mtu": "1442" + } + }, + "VLAN": {}, + "VLAN_INTERFACE": {}, + "VLAN_MEMBER": {}, + "VXLAN_TUNNEL": { + "vtep": { + "src_ip": "10.255.255.3" + } + }, + "VXLAN_TUNNEL_MAP": {} +} diff --git 
a/scenarios/networking-lab/devstack-sonic-vxlan/leaf01-frr.conf b/scenarios/networking-lab/devstack-sonic-vxlan/leaf01-frr.conf new file mode 100644 index 00000000..557d2924 --- /dev/null +++ b/scenarios/networking-lab/devstack-sonic-vxlan/leaf01-frr.conf @@ -0,0 +1,28 @@ +! +hostname leaf01 +! +router ospf + ospf router-id 10.255.255.3 + network 10.1.1.4/30 area 0.0.0.0 + network 10.1.1.8/30 area 0.0.0.0 + network 10.255.255.3/32 area 0.0.0.0 +! +router bgp 65001 + bgp router-id 10.255.255.3 + bgp log-neighbor-changes + no bgp ebgp-requires-policy + no bgp default ipv4-unicast + neighbor 10.255.255.1 remote-as 65001 + neighbor 10.255.255.1 description spine01 + neighbor 10.255.255.1 update-source Loopback0 + neighbor 10.255.255.2 remote-as 65001 + neighbor 10.255.255.2 description spine02 + neighbor 10.255.255.2 update-source Loopback0 + ! + address-family l2vpn evpn + neighbor 10.255.255.1 activate + neighbor 10.255.255.2 activate + advertise-all-vni + exit-address-family +! +end diff --git a/scenarios/networking-lab/devstack-sonic-vxlan/leaf02-config_db.json b/scenarios/networking-lab/devstack-sonic-vxlan/leaf02-config_db.json new file mode 100644 index 00000000..626d9d3f --- /dev/null +++ b/scenarios/networking-lab/devstack-sonic-vxlan/leaf02-config_db.json @@ -0,0 +1,76 @@ +{ + "DEVICE_METADATA": { + "localhost": { + "hostname": "leaf02", + "type": "ToRRouter", + "bgp_asn": "65001", + "docker_routing_config_mode": "unified" + } + }, + "MGMT_INTERFACE": { + "eth0|192.168.32.114/24": { + "gwaddr": "192.168.32.1" + } + }, + "MGMT_PORT": { + "eth0": { + "alias": "eth0", + "admin_status": "up" + } + }, + "MGMT_VRF_CONFIG": { + "vrf_global": { + "mgmtVrfEnabled": "true" + } + }, + "LOOPBACK_INTERFACE": { + "Loopback0|10.255.255.4/32": {} + }, + "INTERFACE": { + "Ethernet0": { + "mtu": "1442" + }, + "Ethernet0|10.1.1.14/30": {}, + "Ethernet4": { + "mtu": "1442" + }, + "Ethernet4|10.1.1.18/30": {} + }, + "PORT": { + "Ethernet0": { + "alias": "fortyGigE0/0", + "lanes": 
"25,26,27,28", + "speed": "40000", + "admin_status": "up" + }, + "Ethernet4": { + "alias": "fortyGigE0/4", + "lanes": "29,30,31,32", + "speed": "40000", + "admin_status": "up" + }, + "Ethernet8": { + "alias": "fortyGigE0/8", + "lanes": "33,34,35,36", + "speed": "40000", + "admin_status": "down", + "mtu": "1442" + }, + "Ethernet12": { + "alias": "fortyGigE0/12", + "lanes": "37,38,39,40", + "speed": "40000", + "admin_status": "down", + "mtu": "1442" + } + }, + "VLAN": {}, + "VLAN_INTERFACE": {}, + "VLAN_MEMBER": {}, + "VXLAN_TUNNEL": { + "vtep": { + "src_ip": "10.255.255.4" + } + }, + "VXLAN_TUNNEL_MAP": {} +} diff --git a/scenarios/networking-lab/devstack-sonic-vxlan/leaf02-frr.conf b/scenarios/networking-lab/devstack-sonic-vxlan/leaf02-frr.conf new file mode 100644 index 00000000..70857a6f --- /dev/null +++ b/scenarios/networking-lab/devstack-sonic-vxlan/leaf02-frr.conf @@ -0,0 +1,28 @@ +! +hostname leaf02 +! +router ospf + ospf router-id 10.255.255.4 + network 10.1.1.12/30 area 0.0.0.0 + network 10.1.1.16/30 area 0.0.0.0 + network 10.255.255.4/32 area 0.0.0.0 +! +router bgp 65001 + bgp router-id 10.255.255.4 + bgp log-neighbor-changes + no bgp ebgp-requires-policy + no bgp default ipv4-unicast + neighbor 10.255.255.1 remote-as 65001 + neighbor 10.255.255.1 description spine01 + neighbor 10.255.255.1 update-source Loopback0 + neighbor 10.255.255.2 remote-as 65001 + neighbor 10.255.255.2 description spine02 + neighbor 10.255.255.2 update-source Loopback0 + ! + address-family l2vpn evpn + neighbor 10.255.255.1 activate + neighbor 10.255.255.2 activate + advertise-all-vni + exit-address-family +! 
+end diff --git a/scenarios/networking-lab/devstack-sonic-vxlan/local.conf.j2 b/scenarios/networking-lab/devstack-sonic-vxlan/local.conf.j2 new file mode 100644 index 00000000..ec784697 --- /dev/null +++ b/scenarios/networking-lab/devstack-sonic-vxlan/local.conf.j2 @@ -0,0 +1,162 @@ +[[local|localrc]] +# Credentials +ADMIN_PASSWORD=secret +DATABASE_PASSWORD=$ADMIN_PASSWORD +RABBIT_PASSWORD=$ADMIN_PASSWORD +SERVICE_PASSWORD=$ADMIN_PASSWORD + +# Service timeouts +SERVICE_TIMEOUT=120 + +# MTU - running inside an encapsulated environment, restrict to 1442 on the +# physical network so VXLAN tenant networks get 1442 - 50 = 1392 effective MTU. +PUBLIC_BRIDGE_MTU=1442 + +# Networking +HOST_IP=192.168.32.20 +SERVICE_HOST=$HOST_IP +MYSQL_HOST=$HOST_IP +RABBIT_HOST=$HOST_IP +GLANCE_HOSTPORT=$HOST_IP:9292 + +# Network ranges (avoiding Heat template allocations) +FIXED_RANGE=172.20.100.0/24 +IPV4_ADDRS_SAFE_TO_USE=172.20.100.0/24 +FLOATING_RANGE=172.20.200.0/24 +PUBLIC_NETWORK_GATEWAY=172.20.200.1 + +# Disable IPv6 - use IPv4 only +IP_VERSION=4 + +# Enable Neutron with OVN +NEUTRON_BRANCH=refs/changes/37/980637/1 +disable_service n-net +enable_service q-svc +# Disable traditional neutron agents +disable_service q-agt +disable_service q-dhcp +disable_service q-l3 +disable_service q-meta +# Enable OVN services +enable_service ovn-northd +enable_service ovn-controller +enable_service q-ovn-metadata-agent +# Enable Neutron trunk service +enable_service neutron-trunk +# Enable Neutron segments service +enable_service neutron-segments + +enable_service ir-api +enable_service ir-cond +enable_service ir-neutronagt + +# Ironic configuration +VIRT_DRIVER=ironic +DEFAULT_INSTANCE_TYPE=baremetal +IRONIC_BAREMETAL_BASIC_OPS=True +IRONIC_IS_HARDWARE=True +IRONIC_VM_COUNT=0 +IRONIC_NETWORK_SIMULATOR=none +IRONIC_BUILD_DEPLOY_RAMDISK=False +IRONIC_DEPLOY_DRIVER=redfish +IRONIC_ENABLED_HARDWARE_TYPES=redfish +IRONIC_ENABLED_BOOT_INTERFACES=ipxe,redfish-virtual-media,http-ipxe 
+IRONIC_ENABLED_POWER_INTERFACES=redfish +IRONIC_ENABLED_MANAGEMENT_INTERFACES=redfish +IRONIC_ENABLED_DEPLOY_INTERFACES=direct,ramdisk +IRONIC_NETWORK_INTERFACE=neutron +IRONIC_ENABLED_NETWORK_INTERFACES=neutron +IRONIC_AUTOMATED_CLEAN_ENABLED=True +FORCE_CONFIG_DRIVE=True + +# Ironic network configuration - use provisioning vxlan network for all operations +IRONIC_PROVISION_NETWORK_NAME=provisioning +IRONIC_PROVISION_PROVIDER_NETWORK_TYPE=vxlan +IRONIC_PROVISION_SUBNET_PREFIX=10.0.5.0/24 +IRONIC_PROVISION_SUBNET_GATEWAY=10.0.5.1 +IRONIC_CLEAN_NET_NAME=provisioning +IRONIC_RESCUE_NET_NAME=provisioning +IRONIC_INSPECTION_NET_NAME=provisioning + +# Networking configuration for ML2 with OVN and Generic Switch +Q_PLUGIN=ml2 +Q_ML2_TENANT_NETWORK_TYPE=vxlan +Q_ML2_PLUGIN_MECHANISM_DRIVERS=ovn,baremetal-l2vni,genericswitch,baremetal +Q_ML2_PLUGIN_TYPE_DRIVERS=vxlan,geneve,vlan,flat +ENABLE_TENANT_VLANS=True +TENANT_VLAN_RANGE=103:105 +PHYSICAL_NETWORK=public + +# Physical interface mapping +# The second interface (trunk port) will be added to br-ex +# trunk0 is matched by MAC address fa:16:9e:81:f6:21 and renamed by netplan +PUBLIC_INTERFACE=trunk0 +OVS_PHYSICAL_BRIDGE=br-ex +PUBLIC_BRIDGE=br-ex + +# OVN Configuration +Q_USE_PROVIDERNET_FOR_PUBLIC=True +OVN_L3_CREATE_PUBLIC_NETWORK=True +OVN_BRIDGE_MAPPINGS=public:br-ex + +# Enable Ironic +enable_plugin ironic https://opendev.org/openstack/ironic + +# Enable networking-generic-switch plugin +enable_plugin networking-generic-switch https://opendev.org/openstack/networking-generic-switch + +# Enable networking-baremetal plugin +enable_plugin networking-baremetal https://opendev.org/openstack/networking-baremetal + + + +# Disable Swift (optional, not needed for this setup) +disable_service s-proxy s-object s-container s-account + +# Disable Horizon dashboard +disable_service horizon + +[[post-config|$NEUTRON_CONF]] +[DEFAULT] +global_physnet_mtu = 1442 + +[baremetal_agent] +enable_ha_chassis_group_alignment = False 
+enable_router_ha_binding_events = True
+
+[baremetal_l2vni]
+# Set to False for pure EVPN deployments where switches handle VXLAN via BGP
+# Set to True if you need OVN localnet ports for overlay-to-physical bridging
+create_localnet_ports = True
+default_physical_network = public
+
+[l2vni]
+# Enable L2VNI trunk reconciliation for network nodes
+enable_l2vni_trunk_reconciliation = True
+l2vni_reconciliation_interval = 300
+l2vni_auto_create_networks = True
+l2vni_subport_anchor_network = l2vni-subport-anchor
+l2vni_subport_anchor_network_type = vxlan
+l2vni_startup_jitter_max = 60
+l2vni_network_nodes_config = /etc/neutron/l2vni_network_nodes.yaml
+
+[[post-config|/etc/neutron/plugins/ml2/ml2_conf_genericswitch.ini]]
+[genericswitch:leaf01]
+device_type = netmiko_sonic
+ip = leaf01.stack.lab
+username = admin
+password = password
+secret = password
+ngs_disable_inactive_ports = true
+ngs_physical_networks = public
+ngs_nve_interface = vtep
+
+[genericswitch:leaf02]
+device_type = netmiko_sonic
+ip = leaf02.stack.lab
+username = admin
+password = password
+secret = password
+ngs_disable_inactive_ports = true
+ngs_physical_networks = public
+ngs_nve_interface = vtep
diff --git a/scenarios/networking-lab/devstack-sonic-vxlan/spine01-config_db.json b/scenarios/networking-lab/devstack-sonic-vxlan/spine01-config_db.json
new file mode 100644
index 00000000..f99582d7
--- /dev/null
+++ b/scenarios/networking-lab/devstack-sonic-vxlan/spine01-config_db.json
@@ -0,0 +1,68 @@
+{
+    "DEVICE_METADATA": {
+        "localhost": {
+            "hostname": "spine01",
+            "type": "SpineRouter",
+            "bgp_asn": "65001",
+            "docker_routing_config_mode": "unified"
+        }
+    },
+    "MGMT_INTERFACE": {
+        "eth0|192.168.32.111/24": {
+            "gwaddr": "192.168.32.1"
+        }
+    },
+    "MGMT_PORT": {
+        "eth0": {
+            "alias": "eth0",
+            "admin_status": "up"
+        }
+    },
+    "MGMT_VRF_CONFIG": {
+        "vrf_global": {
+            "mgmtVrfEnabled": "true"
+        }
+    },
+    "LOOPBACK_INTERFACE": {
+        "Loopback0|10.255.255.1/32": {}
+    },
+    "INTERFACE": {
+        "Ethernet0": {
+ "mtu": "1442" + }, + "Ethernet0|10.1.1.1/30": {}, + "Ethernet4": { + "mtu": "1442" + }, + "Ethernet4|10.1.1.5/30": {}, + "Ethernet8": { + "mtu": "1442" + }, + "Ethernet8|10.1.1.13/30": {} + }, + "PORT": { + "Ethernet0": { + "alias": "fortyGigE0/0", + "lanes": "25,26,27,28", + "speed": "40000", + "admin_status": "up" + }, + "Ethernet4": { + "alias": "fortyGigE0/4", + "lanes": "29,30,31,32", + "speed": "40000", + "admin_status": "up" + }, + "Ethernet8": { + "alias": "fortyGigE0/8", + "lanes": "33,34,35,36", + "speed": "40000", + "admin_status": "up" + } + }, + "VXLAN_TUNNEL": { + "vtep": { + "src_ip": "10.255.255.1" + } + } +} diff --git a/scenarios/networking-lab/devstack-sonic-vxlan/spine01-frr.conf b/scenarios/networking-lab/devstack-sonic-vxlan/spine01-frr.conf new file mode 100644 index 00000000..bc1c992e --- /dev/null +++ b/scenarios/networking-lab/devstack-sonic-vxlan/spine01-frr.conf @@ -0,0 +1,30 @@ +! +hostname spine01 +! +router ospf + ospf router-id 10.255.255.1 + network 10.1.1.0/30 area 0.0.0.0 + network 10.1.1.4/30 area 0.0.0.0 + network 10.1.1.12/30 area 0.0.0.0 + network 10.255.255.1/32 area 0.0.0.0 +! +router bgp 65001 + bgp router-id 10.255.255.1 + bgp log-neighbor-changes + no bgp ebgp-requires-policy + no bgp default ipv4-unicast + neighbor 10.255.255.3 remote-as 65001 + neighbor 10.255.255.3 description leaf01 + neighbor 10.255.255.3 update-source Loopback0 + neighbor 10.255.255.4 remote-as 65001 + neighbor 10.255.255.4 description leaf02 + neighbor 10.255.255.4 update-source Loopback0 + ! + address-family l2vpn evpn + neighbor 10.255.255.3 activate + neighbor 10.255.255.3 route-reflector-client + neighbor 10.255.255.4 activate + neighbor 10.255.255.4 route-reflector-client + exit-address-family +! 
+end
diff --git a/scenarios/networking-lab/devstack-sonic-vxlan/spine02-config_db.json b/scenarios/networking-lab/devstack-sonic-vxlan/spine02-config_db.json
new file mode 100644
index 00000000..0592441c
--- /dev/null
+++ b/scenarios/networking-lab/devstack-sonic-vxlan/spine02-config_db.json
@@ -0,0 +1,68 @@
+{
+    "DEVICE_METADATA": {
+        "localhost": {
+            "hostname": "spine02",
+            "type": "SpineRouter",
+            "bgp_asn": "65001",
+            "docker_routing_config_mode": "unified"
+        }
+    },
+    "MGMT_INTERFACE": {
+        "eth0|192.168.32.112/24": {
+            "gwaddr": "192.168.32.1"
+        }
+    },
+    "MGMT_PORT": {
+        "eth0": {
+            "alias": "eth0",
+            "admin_status": "up"
+        }
+    },
+    "MGMT_VRF_CONFIG": {
+        "vrf_global": {
+            "mgmtVrfEnabled": "true"
+        }
+    },
+    "LOOPBACK_INTERFACE": {
+        "Loopback0|10.255.255.2/32": {}
+    },
+    "INTERFACE": {
+        "Ethernet0": {
+            "mtu": "1442"
+        },
+        "Ethernet0|10.1.1.2/30": {},
+        "Ethernet4": {
+            "mtu": "1442"
+        },
+        "Ethernet4|10.1.1.9/30": {},
+        "Ethernet8": {
+            "mtu": "1442"
+        },
+        "Ethernet8|10.1.1.17/30": {}
+    },
+    "PORT": {
+        "Ethernet0": {
+            "alias": "fortyGigE0/0",
+            "lanes": "25,26,27,28",
+            "speed": "40000",
+            "admin_status": "up"
+        },
+        "Ethernet4": {
+            "alias": "fortyGigE0/4",
+            "lanes": "29,30,31,32",
+            "speed": "40000",
+            "admin_status": "up"
+        },
+        "Ethernet8": {
+            "alias": "fortyGigE0/8",
+            "lanes": "33,34,35,36",
+            "speed": "40000",
+            "admin_status": "up"
+        }
+    },
+    "VXLAN_TUNNEL": {
+        "vtep": {
+            "src_ip": "10.255.255.2"
+        }
+    }
+}
diff --git a/scenarios/networking-lab/devstack-sonic-vxlan/spine02-frr.conf b/scenarios/networking-lab/devstack-sonic-vxlan/spine02-frr.conf
new file mode 100644
index 00000000..df941d73
--- /dev/null
+++ b/scenarios/networking-lab/devstack-sonic-vxlan/spine02-frr.conf
@@ -0,0 +1,30 @@
+!
+hostname spine02
+!
+router ospf
+ ospf router-id 10.255.255.2
+ network 10.1.1.0/30 area 0.0.0.0
+ network 10.1.1.8/30 area 0.0.0.0
+ network 10.1.1.16/30 area 0.0.0.0
+ network 10.255.255.2/32 area 0.0.0.0
+!
+router bgp 65001
+ bgp router-id 10.255.255.2
+ bgp log-neighbor-changes
+ no bgp ebgp-requires-policy
+ no bgp default ipv4-unicast
+ neighbor 10.255.255.3 remote-as 65001
+ neighbor 10.255.255.3 description leaf01
+ neighbor 10.255.255.3 update-source Loopback0
+ neighbor 10.255.255.4 remote-as 65001
+ neighbor 10.255.255.4 description leaf02
+ neighbor 10.255.255.4 update-source Loopback0
+ !
+ address-family l2vpn evpn
+  neighbor 10.255.255.3 activate
+  neighbor 10.255.255.3 route-reflector-client
+  neighbor 10.255.255.4 activate
+  neighbor 10.255.255.4 route-reflector-client
+ exit-address-family
+!
+end
diff --git a/scenarios/networking-lab/devstack-sonic-vxlan/topology-diagram.svg b/scenarios/networking-lab/devstack-sonic-vxlan/topology-diagram.svg
new file mode 100644
index 00000000..669afc51
--- /dev/null
+++ b/scenarios/networking-lab/devstack-sonic-vxlan/topology-diagram.svg
@@ -0,0 +1,132 @@
[SVG markup not reproduced; recoverable labels: Title "Spine-and-Leaf Topology (BGP AS 65001)". Management Network (192.168.32.0/24): controller 192.168.32.254, devstack 192.168.32.20. Switches: spine01 (RR) 10.255.255.1, spine02 (RR) 10.255.255.2, leaf01 (RRC, vtep) 10.255.255.3, leaf02 (RRC, vtep) 10.255.255.4. P2P links: 10.1.1.0/30, 10.1.1.4/30, 10.1.1.8/30, 10.1.1.12/30, 10.1.1.16/30. Baremetal nodes: ironic0, ironic1. Legend: Switch, Server, Controller, P2P Link, Management, BGP EVPN. Notes: "BGP EVPN: AS 65001 iBGP | RR=Route Reflector | RRC=Route Reflector Client | vtep=VXLAN VTEP"; "Leaf01: Ethernet8=Trunk, Ethernet12=Devstack | Leaf02: Ethernet8=Ironic0, Ethernet12=Ironic1".]
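
The fabric addressing above assigns one /30 per point-to-point link, with each switch's frr.conf advertising exactly the subnets its config_db.json interfaces sit in. As a sanity check, the link pairings (read from the frr.conf and config_db.json files in this diff) can be verified with a short Python sketch using only the standard library — this script is illustrative and not part of the scenario itself:

```python
import ipaddress

# Point-to-point /30 links and their endpoints, taken from the
# config_db.json INTERFACE tables in this change.
links = {
    "10.1.1.0/30":  ("spine01", "10.1.1.1",  "spine02", "10.1.1.2"),
    "10.1.1.4/30":  ("spine01", "10.1.1.5",  "leaf01",  "10.1.1.6"),
    "10.1.1.8/30":  ("spine02", "10.1.1.9",  "leaf01",  "10.1.1.10"),
    "10.1.1.12/30": ("spine01", "10.1.1.13", "leaf02",  "10.1.1.14"),
    "10.1.1.16/30": ("spine02", "10.1.1.17", "leaf02",  "10.1.1.18"),
}

for subnet, (a, a_ip, b, b_ip) in links.items():
    net = ipaddress.ip_network(subnet)
    # A /30 has exactly two usable host addresses; both endpoints
    # must land inside the subnet their OSPF "network" statement covers.
    hosts = set(net.hosts())
    assert {ipaddress.ip_address(a_ip), ipaddress.ip_address(b_ip)} == hosts
    print(f"{subnet}: {a} {a_ip} <-> {b} {b_ip}")
```

Each assertion holds for the configs as committed; if an interface address or OSPF network statement is later edited, the corresponding assertion flags the mismatch.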