This project demonstrates the design and implementation of a secure, segmented enterprise network infrastructure using pfSense and CentOS/Enterprise Linux. The primary focus was to simulate a real-world environment requiring strict traffic filtering, proxy management, high-availability web services, and automated disaster recovery.
Key Competencies Demonstrated:
- Network Security: Deep packet inspection, firewall rule creation, and DMZ segmentation.
- Traffic Control: Transparent proxying (Squid), content filtering (SquidGuard), and bandwidth shaping.
- High Availability: Weighted Round-Robin Load Balancing using Nginx.
- Automation: Bash scripting for incremental backups with rotation logic.
The architecture consists of 7 Virtual Machines operating across 4 distinct network zones (WAN, LAN, LAN2, DMZ) to simulate a segmented, secure enterprise infrastructure.
| Node Role | OS | IP Address | Service Function |
|---|---|---|---|
| Perimeter Firewall | pfSense (FreeBSD) | 10.0.0.254 (LAN)<br>192.168.3.254 (LAN2)<br>172.16.0.1 (DMZ) | NAT Gateway, DPI Firewall, Squid Proxy, Traffic Shaping, VLAN Routing |
| Load Balancer | CentOS Stream 9 | 10.0.0.10 | Nginx Reverse Proxy, Weighted Round-Robin Distribution |
| Web Server A | CentOS Stream 9 | 10.0.0.6 | Apache HTTPD, High-Capacity Backend (Weight: 5) |
| Web Server B | CentOS Stream 9 | 10.0.0.8 | Apache HTTPD, Medium-Capacity Backend (Weight: 3) |
| Web Server C | CentOS Stream 9 | 10.0.0.9 | Apache HTTPD, Low-Capacity Backend (Weight: 2) |
| Admin Client | CentOS Stream 9 | 192.168.3.6 | Traffic Generation, Automated Rsync Backups, Cron Scheduling |
| DMZ Host | CentOS Stream 9 | 172.16.0.100 | Isolated Testing, SSH Target |
The following diagram illustrates the logical traffic flow and segmentation enforced by the pfSense firewall.
The security core of this infrastructure is built on pfSense Community Edition, operating as a stateful firewall and unified threat management (UTM) gateway. The system was architected to enforce strict network segmentation, application-layer filtering, and granular access control across four isolated zones.
To simulate a physical enterprise environment within VMware Workstation, dedicated Host-Only networks (VMnets) were created for the internal zones. This ensures complete isolation between zones, forcing all inter-VLAN traffic to pass through the pfSense firewall for inspection.
- Custom VMnets: Created specific virtual switches (VMnet11, VMnet12, VMnet13) with local DHCP disabled to prevent rogue addressing.
- Adapter Binding: The pfSense VM was configured with 5 network adapters to bridge these virtual switches.
- Verification:
  - Evidence 1: 📸 Virtual Network Editor Configuration
  - Evidence 2: 📸 VM Hardware Settings
The firewall interfaces were assigned static IPv4 addresses to act as the default gateway for each subnet. The WAN interface utilizes NAT to provide upstream internet access to internal clients.
| Interface | Physical Port | Zone Name | Subnet | Role |
|---|---|---|---|---|
| WAN | le0 | WAN | DHCP | Upstream Internet / NAT |
| LAN | le1 | LAN | 10.0.0.0/24 | Server Farm: high-security zone for web servers. |
| OPT1 | le2 | LAN2 | 192.168.3.0/24 | Client Zone: restricted network for end-users. |
| OPT2 | le3 | DMZ | 172.16.0.0/24 | Perimeter: isolated zone for exposed services. |
- Verification:
  - Evidence 1: 📸 pfSense Console Interface Assignment
  - Evidence 2: 📸 pfSense Dashboard Status
A Zero Trust policy was applied, creating specific "Allow" rules only for necessary business traffic. All other traffic is implicitly denied.
The Server Farm is protected from unauthorized access.
- Egress: Allowed outbound HTTP/HTTPS to the internet (for package updates).
- Ingress Protection: A specific block rule was implemented to deny traffic originating from the `LAN2` client subnet, preventing lateral movement attacks from compromised workstations.
- Verification:
  - Evidence 1: 📸 LAN Firewall Rules
The Client network is highly restricted to prevent data exfiltration and limit attack surface.
- Web Traffic: HTTP (80), HTTPS (443), and FTP (21) are permitted to the WAN.
- Administration: SSH (Port 22) is strictly permitted only to the DMZ subnet for management tasks.
- Isolation: A strict rule blocks all traffic destined for the `LAN` subnet (Server Farm).
- Verification:
  - Evidence 1: 📸 LAN2 Firewall Rules
The DMZ is designed as a semi-trusted buffer zone.
- Outbound: Unrestricted outbound access is permitted to simulate a public-facing network segment.
- Inbound: Accepts administrative SSH connections originating from the trusted `LAN2` network.
- Verification:
  - Evidence 1: 📸 DMZ Firewall Rules
To enforce Acceptable Use Policies (AUP) and monitor user behavior, a transparent proxy stack was deployed.
Installed Packages:
- Squid: Caching proxy server.
- SquidGuard: URL redirector and content filter.
- LightSquid: Log analyzer and reporting tool.
- Verification:
  - Evidence 1: 📸 Package Manager
The proxy was configured to intercept HTTP traffic transparently. SquidGuard was utilized to define Access Control Lists (ACLs).
- Policy: `BadSites` category set to Deny.
- Default Access: Allowed.
- Outcome: Users attempting to access forbidden categories are blocked immediately.
- Verification:
  - Evidence 1: 📸 SquidGuard Common ACL Settings
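As a rough illustration only (pfSense generates its own Squid configuration, and the binary/config paths below are assumptions), the transparent interception and SquidGuard hand-off amount to squid.conf directives of this shape:

```
# Intercept HTTP transparently on the proxy port (port is illustrative)
http_port 3128 intercept

# Pass each requested URL to SquidGuard for ACL evaluation (paths assumed)
url_rewrite_program /usr/local/bin/squidGuard -c /usr/local/etc/squidGuard/squidGuard.conf
```

In the pfSense GUI these settings are applied through the Squid and SquidGuard package pages rather than by editing squid.conf directly.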
LightSquid was configured to parse Squid access logs and generate detailed daily reports. This provides visibility into:
- Top visited websites.
- Bandwidth usage per user/IP.
- Blocked access attempts.
- Verification:
  - Evidence 1: 📸 LightSquid User Access Report
To prevent network congestion, Quality of Service (QoS) policies were implemented using pfSense Limiters.
- Objective: Restrict bandwidth usage for the `LAN2` Client network.
- Configuration:
  - Download Limiter: Capped at 160 Kbit/s.
  - Upload Limiter: Capped at 160 Kbit/s.
- Implementation: These limiters were attached to the LAN2 firewall pass rules via the `In/Out Pipe` advanced settings.
- Verification: Validated using `wget` to confirm download speeds stabilized at ~20 KB/s (approx. 160 Kbit/s).
  - Evidence 1: 📸 Traffic Shaper (Limiter) Configuration
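The observed throughput follows directly from the cap: dividing 160 Kbit/s by 8 bits per byte gives the ~20 KB/s that `wget` reported:

```shell
# 160 Kbit/s cap -> bytes per second: divide by 8 bits/byte
echo "$((160 / 8)) KB/s"   # → 20 KB/s
```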
To ensure service reliability and optimized resource utilization, a Layer 7 Load Balancer was deployed using Nginx. This component sits in the trusted LAN zone, acting as the single entry point for all internal web traffic, abstracting the complexity of the backend server farm from the client.
Three distinct backend servers (Server A, Server B, Server C) were provisioned with Apache Web Server. To simulate a heterogeneous environment with varying hardware capabilities, custom index pages were deployed to visually identify which node processed the request during testing.
- Server A: High Capacity Node
- Server B: Medium Capacity Node
- Server C: Low Capacity Node
- Verification:
  - Evidence 1: 📸 Backend Server Status Verification
The Nginx load balancer was configured with a Weighted Round-Robin strategy. This algorithm was chosen to distribute traffic proportionally based on the compute capacity of each backend node, rather than a simple equal-distribution model.
Configuration Logic (nginx.conf):
An upstream block was defined with specific integer weights corresponding to the target traffic distribution ratios:
- Server A (`10.0.0.6`): Weight = 5 (handles ~50% of traffic)
- Server B (`10.0.0.8`): Weight = 3 (handles ~30% of traffic)
- Server C (`10.0.0.9`): Weight = 2 (handles ~20% of traffic)

The reverse proxy listens on port 80 and forwards requests via `proxy_pass http://backend_servers;`.
- Verification:
  - Evidence 1: 📸 Nginx Upstream Configuration
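A minimal sketch of this configuration, using the weights and IPs stated above (the repository's nginx.conf may contain additional directives such as proxy headers):

```nginx
upstream backend_servers {
    server 10.0.0.6 weight=5;   # Server A – high capacity (~50%)
    server 10.0.0.8 weight=3;   # Server B – medium capacity (~30%)
    server 10.0.0.9 weight=2;   # Server C – low capacity (~20%)
}

server {
    listen 80;
    location / {
        proxy_pass http://backend_servers;
    }
}
```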
To validate the load balancing logic, a stress test was executed from the Client Machine (LAN2 Zone). A bash loop was used to generate 10 sequential HTTP requests to the Load Balancer's Virtual IP (10.0.0.10).
Test Command:
`for i in {1..10}; do curl http://10.0.0.10; echo; done`
- Verification:
  - Evidence 1: 📸 Round-Robin Response Test
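Because each backend serves a distinct index page, the saved responses can be tallied to confirm the 5/3/2 split. A small post-processing sketch (the five sample lines here are illustrative, standing in for the curl loop's saved output):

```shell
# Tally which backend answered each request, one response line per request
printf 'Server A\nServer B\nServer A\nServer C\nServer A\n' \
  | sort | uniq -c | sort -rn
```

Over a larger run (e.g. 100 requests), the counts should approach the 50/30/20 ratio.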
To ensure business continuity and data resilience, a custom Bash automation suite was developed. This system performs daily, incremental backups of critical data from the LAN Server Farm (Server A) to a secure archive on the LAN2 Admin Client, utilizing SSH keys for secure, passwordless authentication.
A robust shell script (backup_script.sh) was engineered to handle the backup lifecycle. The script implements a "Rotation-First" strategy to preserve historical data versions before synchronizing new changes.
Key Script Functions:
- Version Control (Rotation): Before starting a new backup, the script detects whether a `backup` directory already exists. If so, it renames the existing folder with a precise timestamp (e.g., `backup_2026-01-07_21-36-42`), effectively creating a restore point.
- Incremental Synchronization: Uses `rsync` with the `-avz` flags (archive mode, verbose, compression) to securely transfer data from `Server A:/data/` to the local machine.
- Retention Policy: A `find` command automatically scans for backup directories older than 7 days and purges them to optimize storage usage.
- Error Handling: Checks exit codes (`$?`) to verify the success of the transfer and log the outcome.
- Verification:
  - Evidence 1: 📸 Backup Script Source Code
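The rotation-first lifecycle above can be sketched as a single function. The SSH user, source host, and destination path are assumptions for illustration; the actual implementation is in scripts/automated_backup.sh:

```shell
#!/usr/bin/env bash
# Sketch of the rotation-first backup logic. Host, user, and paths are
# illustrative assumptions, not copied from the real script.

backup_run() {
    local src="$1"    # e.g. root@10.0.0.6:/data/  (Server A; user assumed)
    local dest="$2"   # e.g. /root/backups on the LAN2 admin client
    local ts
    ts=$(date +%F_%H-%M-%S)

    # 1) Rotation-first: preserve the previous run as a timestamped restore point
    if [ -d "$dest/backup" ]; then
        mv "$dest/backup" "$dest/backup_$ts"
    fi
    mkdir -p "$dest/backup"

    # 2) Incremental sync: archive mode, verbose, compression (-avz) over SSH
    if rsync -avz "$src" "$dest/backup/"; then
        echo "$(date): backup succeeded"
    else
        echo "$(date): backup FAILED"
        return 1
    fi

    # 3) Retention: purge rotated snapshots older than 7 days
    find "$dest" -maxdepth 1 -type d -name 'backup_*' -mtime +7 -exec rm -rf {} +
}
```

Invoked as, say, `backup_run root@10.0.0.6:/data/ /root/backups`, with all output appended to the log by the cron entry.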
To eliminate manual intervention, the backup process was automated using the Cron daemon. The job is scheduled to execute daily during off-peak hours to minimize network impact.
Schedule Configuration:
- Time:
16:30(4:30 PM) daily. - Command:
/root/backup_script.sh - Logging:
Standard OutputandStandard Errorare redirected to/var/log/backup.logfor audit trails (>> /var/log/backup.log 2>&1). - Verification:
- Evidence 1: πΈ Crontab Schedule
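Putting the schedule, command, and log redirection together, the crontab entry takes this form:

```
# min hour dom mon dow  command
30 16 * * * /root/backup_script.sh >> /var/log/backup.log 2>&1
```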
The disaster recovery workflow was validated by manually triggering the script and inspecting the filesystem structure.
Validation Results:
- Successful Transfer: Critical files (`file1.txt`, `file2.txt`, `important.doc`) were successfully replicated from Server A.
- Directory Rotation: The filesystem shows distinct directories for the current backup (`backup`) and the previous archive (`backup_2026-01-07...`), confirming the rotation logic is active.
- Verification:
  - Evidence 1: 📸 Backup Verification & Directory Structure
This repository is organized by service role. Click the file names below to view the sanitized configuration code used in this project.
| Component | File Name | Description |
|---|---|---|
| Load Balancer | nginx.conf | Nginx configuration defining the upstream block and Weighted Round-Robin logic (50/30/20 split). |
| Automation | automated_backup.sh | Bash script executing incremental rsync transfers with directory rotation and 7-day retention policy. |
Secure-Enterprise-Network-Architecture/
│
├── README.md                 # Main project documentation
│
├── scripts/                  # Automation scripts
│   └── automated_backup.sh
│
├── configs/                  # Configuration files
│   └── nginx.conf
│
├── docs/                     # Evidence screenshots
│   ├── 01_vmware_network.png
│   ├── 02_pfsense_interfaces.png
│   ├── 03_firewall_rules_lan2.png
│   ├── 04_squid_proxy_report.png
│   ├── 05_bandwidth_limiter.png
│   ├── 06_nginx_wrr_test.png
│   └── ...
│
└── diagrams/                 # Topology diagrams
    └── network_topology.png