Ensure Access & Identity in Google Cloud: Challenge Lab (2022 Updated)
We are going to do this in 5 steps, so let's get started.
Please note: before we start, remember to replace [Project ID] with the Project ID of your assigned lab, and swap the highlighted placeholder values (role name, service account name, cluster name, and so on) with the values given in your lab.
Step 1: Create a custom security role. First, create the role definition file:
nano role-definition.yaml
title: "orca_something_XXX"
description: "Permissions"
stage: "ALPHA"
includedPermissions:
- storage.buckets.get
- storage.objects.get
- storage.objects.list
- storage.objects.update
- storage.objects.create
Save and exit nano (Ctrl + X, then Y, then Enter), then create the role from the file:
gcloud iam roles create orca_something_XXX --project $DEVSHELL_PROJECT_ID \
--file role-definition.yaml
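If you want a quick sanity check before moving on, you can describe the role you just created (optional; this assumes the same role ID as above):
gcloud iam roles describe orca_something_XXX --project $DEVSHELL_PROJECT_ID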
Step 2: Create the service account:
gcloud iam service-accounts create orca-private-cluster-120-sa \
--display-name "orca private cluster 120 sa"
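To confirm the service account exists (optional check, using the same name as above):
gcloud iam service-accounts describe orca-private-cluster-120-sa@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com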
Step 3: Bind the custom role and the monitoring/logging roles to the service account. The custom role in the first binding must be the one you created in Step 1 (here it is orca_storage_editor_979; use the role name given in your lab):
gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID \
--member serviceAccount:orca-private-cluster-120-sa@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com \
--role projects/$DEVSHELL_PROJECT_ID/roles/orca_storage_editor_979
gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID \
--member serviceAccount:orca-private-cluster-120-sa@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com \
--role roles/monitoring.viewer
gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID \
--member serviceAccount:orca-private-cluster-120-sa@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com \
--role roles/monitoring.metricWriter
gcloud projects add-iam-policy-binding $DEVSHELL_PROJECT_ID \
--member serviceAccount:orca-private-cluster-120-sa@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com \
--role roles/logging.logWriter
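To verify the bindings landed on the service account (optional; this is the standard way to list the roles granted to one member), run:
gcloud projects get-iam-policy $DEVSHELL_PROJECT_ID \
--flatten="bindings[].members" \
--filter="bindings.members:serviceAccount:orca-private-cluster-120-sa@$DEVSHELL_PROJECT_ID.iam.gserviceaccount.com" \
--format="table(bindings.role)"
You should see the custom role plus the three predefined roles in the output.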
Step 4: Create and configure a new private Kubernetes Engine cluster:
gcloud container clusters create orca-cluster-316 \
--num-nodes 1 --master-ipv4-cidr=172.16.0.64/28 --network orca-build-vpc --subnetwork orca-build-subnet --enable-master-authorized-networks \
--master-authorized-networks 192.168.10.2/32 --enable-ip-alias --enable-private-nodes --enable-private-endpoint \
--service-account orca-private-cluster-120-sa@<Project ID>.iam.gserviceaccount.com --zone us-east1-b
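Cluster creation takes a few minutes. When it finishes, you can confirm the cluster is running and has private nodes enabled (optional check, assuming the same cluster name and zone):
gcloud container clusters describe orca-cluster-316 --zone us-east1-b \
--format="value(status, privateClusterConfig.enablePrivateNodes)"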
Step 5: Deploy an application to the private cluster.
Navigate to Compute Engine in the Cloud Console.
Click on the SSH button for the orca-jumphost instance.
In the SSH window, connect to the private cluster by running the following:
gcloud container clusters get-credentials orca-cluster-316 --internal-ip --zone us-east1-b --project <Project ID>
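A quick way to confirm kubectl is now pointed at the private cluster (and if kubectl complains about a missing gke-gcloud-auth-plugin, install it on the jumphost first; the exact package name depends on the image):
kubectl config current-context
Then deploy the sample application and expose it with the service name your lab specifies: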
kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
kubectl expose deployment hello-server --name orca-cluster-316 \
--type LoadBalancer --port 80 --target-port 8080
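Finally, check that the deployment and the LoadBalancer service are up (the external IP can stay in a pending state for a minute or two):
kubectl get deployment hello-server
kubectl get service orca-cluster-316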
If this helped even a little, then please consider subscribing if possible. - TECH_ED
Note: the video below is a nice explanation of all the tasks, so you can follow along, and believe me, it's very short. :)