
Guide: Provisioning a new environment in a new AWS account

This guide covers starting from a fresh AWS account and bringing up OneConnect on it, including provisioning all the infrastructure.

Assumptions

  1. Dev environment setup has been followed to set up tools that will be used.
  2. A fresh AWS account is available. If there is existing infrastructure provisioned on the account, parts of this guide may not be relevant, or may conflict with the existing infrastructure; this guide will not cover resolving those conflicts.

Brief step-by-step overview

  1. Clone the oc-infrastructure git repo.
  2. Get an initial AWS account admin user.
  3. Create an access key for the AWS account admin user.
  4. Configure aws-vault for AWS account admin user.
  5. Define Terraform variables for the new AWS account.
  6. Create Terraform state bucket.
  7. Provision initial AWS account resources.
  8. Configure aws-vault for AWS automation user.
  9. Create SOPS key.
  10. Create ECR mirror.
  11. Provision OneConnect infrastructure.

Clone the oc-infrastructure git repo

The git repo oc-infrastructure contains Terraform scripts to provision OneConnect infrastructure. Clone it:

git clone git@github.com:alliedtelesis-labs-nz/oc-infrastructure.git

Get an initial AWS account admin user

AWS admin credentials are required to bootstrap the initial AWS account setup; they will be used by the CLI tools aws-vault and the AWS CLI.

These admin credentials may be for either an IAM user (most common) or an AWS IAM Identity Center user (rare).

If you don't have an IAM user or an IAM Identity Center user, ask your team lead for an IAM user with admin credentials. General instructions for creating an IAM user are here; the user will need the AdministratorAccess policy.
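
A minimal sketch of what a team lead might run with their own admin credentials to create such a user (the username is illustrative):

aws iam create-user --user-name bob-admin
aws iam attach-user-policy --user-name bob-admin --policy-arn arn:aws:iam::aws:policy/AdministratorAccess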

Create an access key for the AWS account admin user

An access key for the admin user is needed to run AWS CLI commands as that admin.

For an IAM user

This consists of an access key ID and an access key secret.

There are two options:

Option A: If you have an IAM user and AWS console access (the AWS website):

  1. Log in as your IAM user.
  2. Click the top-right drop-down menu.
  3. Click "security credentials".
  4. Under "Access keys", click "create access key".
  5. Select "command line interface", check the confirmation box, and click "next".
  6. The description tag can be left blank; click "create access key".
  7. Copy the access key ID (called "access key") and the access key secret (called "secret access key") somewhere safe, ideally 1Password.
  8. Click "done".

Option B: Ask your team lead to create access keys for your IAM user and store them in 1Password (and give you access to the vault containing them).
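
In either case the key can also be created from the CLI; a sketch of what the team lead (or you, as an admin) might run, with an illustrative username:

aws iam create-access-key --user-name bob-admin
# The JSON response contains AccessKeyId and SecretAccessKey; store both in 1Password.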

Configure aws-vault for AWS account admin user

aws-vault needs to be configured with credentials for the admin user. Profiles will be used to make switching between different users' credentials easy.

General instructions for using aws-vault are on the project's GitHub. Examples for using it are below.

For an IAM user

Set up some bash variables to be used in the commands (so they are less hard-coded to any specific environment):

AWS_ENV="sandbox" # generally the suffix in the AWS account name (dev, demo, prod, ...)
AWS_USERNAME="bob-admin" # something to identify which user it is for
PROFILE_NAME="${AWS_ENV}-${AWS_USERNAME}" # this is arbitrary, but identifies the credentials associated with it

Check the new profile name:

echo $PROFILE_NAME
sandbox-bob-admin

Add the new profile:

cd ~/oc-infrastructure/infrastructure/bootstrap/aws-account-init/

aws-vault add "${PROFILE_NAME}"

Fill in the requested information to add the profile. The MFA device should be left blank.

Enter Access Key ID: some_access_key_id
Enter Secret Access Key: some_access_key_secret
Enter MFA Device ARN (If MFA is not enabled, leave this blank):
Added credentials to profile "sandbox-bob-admin" in vault

For an IAM identity center user

// TODO

Test aws-vault access to AWS

Double-check the admin user has been configured correctly with aws-vault.

This command executes the aws sts get-caller-identity AWS CLI command using the aws-vault sandbox-bob-admin credentials (aws-vault exec sandbox-bob-admin). It uses the aws-vault flag --no-session, which seems to fix cache-related problems when switching between multiple credentials.

aws-vault exec sandbox-bob-admin --no-session -- aws sts get-caller-identity

Verify the output below contains the expected values:

{
    "UserId": "[user id]",
    "Account": "[AWS account id]",
    "Arn": "arn:aws:iam::[AWS account id]:user/[should match $AWS_USERNAME]"
}

Define Terraform variables for the new AWS account

To make the bootstrap/aws-account-init Terraform applicable to multiple AWS accounts, the parts that change have been extracted into Terraform variables. Check bootstrap/aws-account-init/variables.tf to see what variables must be defined for each account. Note: this is just a variable declarations file, not where the actual variable values are defined; those are in bootstrap/aws-account-init/vars.

See what files are in bootstrap/aws-account-init/vars:

ls ~/oc-infrastructure/infrastructure/bootstrap/aws-account-init/vars/
common.tfvars  env-demo-tfstate.tfvars  env-demo.tfvars  env-dev-tfstate.tfvars  env-dev.tfvars

common.tfvars contains variables shared between all environments; generally this does not need changing.

As an example for an existing environment, demo: env-demo.tfvars is where the variables for the demo environment are stored. env-demo-tfstate.tfvars is a little different; it defines variables that specify where the Terraform state is stored, externally, in an AWS S3 bucket (so that the state is not lost and can be accessed from new locations).

Define a .tfvars for the new AWS account

bootstrap/aws-account-init/variables.tf specifies all the variables that must be defined somewhere. Anything not in bootstrap/aws-account-init/vars/common.tfvars should be defined in the new .tfvars file.

The easiest thing to do is to copy an existing environment's .tfvars file and change variable values for the new account:

AWS_REGION="us-west-2"
AWS_ENV="sandbox"
BUDGET_FOR_NEW_ACCOUNT="50.00"

cat > ~/oc-infrastructure/infrastructure/bootstrap/aws-account-init/vars/env-${AWS_ENV}.tfvars <<EOF
# Global
region    = "${AWS_REGION}"
env       = "${AWS_ENV}"
app_owner = "one-connect-infra-alerts@alliedtelesis.co.nz" # Contact person for this infrastructure

budget_notifications_email = "one-connect-infra-alerts@alliedtelesis.co.nz"
budget_amount_usd          = "${BUDGET_FOR_NEW_ACCOUNT}"

cost_anomaly_threshold_value_usd = "20.00"
EOF

Verify the new environment's .tfvars file:

cat ~/oc-infrastructure/infrastructure/bootstrap/aws-account-init/vars/env-${AWS_ENV}.tfvars
# Global
region    = "us-west-2"
env       = "sandbox"
app_owner = "one-connect-infra-alerts@alliedtelesis.co.nz" # Contact person for this infrastructure

budget_notifications_email = "one-connect-infra-alerts@alliedtelesis.co.nz"
budget_amount_usd          = "50.00"

cost_anomaly_threshold_value_usd = "20.00"

Define a -tfstate.tfvars file for the new AWS account

bootstrap/aws-account-init/vars/env-${INFRA_AWS_ENV}-tfstate.tfvars defines variables that specify where the Terraform state is stored, externally, in an AWS S3 bucket.

//TODO: instructions for manually creating this S3 bucket on AWS.

To create a new -tfstate.tfvars file:

INFRA_AWS_ENV="sandbox"
INFRA_AWS_USERNAME="bob-admin"
PROFILE_NAME="${INFRA_AWS_ENV}-${INFRA_AWS_USERNAME}"

PROJECT_APP_NAME="oneconnect"
INFRA_AWS_REGION="us-west-2"
INFRA_AWS_ACCOUNT_ID=`aws-vault exec ${PROFILE_NAME} --no-session -- aws sts get-caller-identity | jq '.Account' | tr -d '"'`

cat > ~/oc-infrastructure/infrastructure/bootstrap/aws-account-init/vars/env-${INFRA_AWS_ENV}-tfstate.tfvars <<EOF
key    = "${PROJECT_APP_NAME}-account-init"
region = "${INFRA_AWS_REGION}"
bucket = "terraform-state-${INFRA_AWS_ACCOUNT_ID}"
EOF

Verify the new environment's -tfstate.tfvars file:

cat ~/oc-infrastructure/infrastructure/bootstrap/aws-account-init/vars/env-${INFRA_AWS_ENV}-tfstate.tfvars
key    = "oneconnect-account-init"
region = "us-west-2"
bucket = "terraform-state-123456789123"

Create Terraform state bucket

Retrieve the account ID and create an S3 bucket for Terraform state. This bucket will hold the Terraform state file (terraform.tfstate) for this deployment, and is the source of truth for this environment's infrastructure.

INFRA_AWS_ENV="sandbox"
INFRA_AWS_REGION="us-west-2"

AWS_USERNAME="bob-admin"
PROFILE_NAME="${INFRA_AWS_ENV}-${AWS_USERNAME}"

# State buckets are named terraform-state-<account-id>
INFRA_AWS_ACCOUNT_ID=`aws-vault exec ${PROFILE_NAME} --no-session -- aws sts get-caller-identity | jq '.Account' | tr -d '"'`
aws-vault exec ${PROFILE_NAME} -- aws s3 mb s3://terraform-state-${INFRA_AWS_ACCOUNT_ID} --region ${INFRA_AWS_REGION}

The account ID we use here must match the account ID in each of the vars/env-${INFRA_AWS_ENV}-tfstate.tfvars files associated with this environment. When running terraform init -backend-config=vars/env-${INFRA_AWS_ENV}-tfstate.tfvars, we are telling Terraform which state file we want to use.
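
To double-check the bucket exists and is reachable before running terraform init, something like the following should work:

aws-vault exec ${PROFILE_NAME} --no-session -- aws s3api head-bucket --bucket terraform-state-${INFRA_AWS_ACCOUNT_ID}
# head-bucket exits 0 if the bucket exists and you have access to it.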

Provision initial AWS account resources

This covers applying the oc-infrastructure infrastructure/bootstrap/aws-account-init terraform.

The AWS account init project is used to initialize a new AWS account, including creating the IAM user for further automation.

# AWS account IAM user profile for aws-vault
INFRA_AWS_ENV="sandbox"
INFRA_AWS_USERNAME="bob-admin"
PROFILE_NAME="${INFRA_AWS_ENV}-${INFRA_AWS_USERNAME}"

cd ~/oc-infrastructure/infrastructure/bootstrap/aws-account-init

# Initialize state
aws-vault exec ${PROFILE_NAME} --no-session -- terraform init -backend-config=vars/env-${INFRA_AWS_ENV}-tfstate.tfvars

# Plan infrastructure
aws-vault exec ${PROFILE_NAME} --no-session -- terraform plan -var-file=vars/common.tfvars -var-file=vars/env-${INFRA_AWS_ENV}.tfvars

# Apply infrastructure
aws-vault exec ${PROFILE_NAME} --no-session -- terraform apply -var-file=vars/common.tfvars -var-file=vars/env-${INFRA_AWS_ENV}.tfvars

# Destroy infrastructure
# aws-vault exec ${PROFILE_NAME} --no-session -- terraform destroy -var-file=vars/common.tfvars -var-file=vars/env-${INFRA_AWS_ENV}.tfvars

Once the user is created, export IAM credentials for the IAM user arn:aws:iam::<account-id>:user/oc-${INFRA_AWS_ENV}-automation and configure them locally as the oc-${INFRA_AWS_ENV}-automation aws-vault profile, and as the appropriate environment in GitHub Actions. This is covered in the next section.

Configure aws-vault for AWS automation user

Applying the oc-infrastructure infrastructure/bootstrap/aws-account-init terraform, amongst other things, created a new IAM user named oc-${INFRA_AWS_ENV}-automation. This automation user has more limited permissions and is safer to use for the remaining AWS CLI operations.

As in "Create an access key for the AWS account admin user" and "Configure aws-vault for AWS account admin user" above, create an access key for the new IAM user and add the new credentials to aws-vault under a new profile (oc-${INFRA_AWS_ENV}-automation).
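
For example (this sketch assumes you have already created and copied the new user's access key):

INFRA_AWS_ENV="sandbox"
AUTOMATION_PROFILE="oc-${INFRA_AWS_ENV}-automation"

aws-vault add "${AUTOMATION_PROFILE}"

# Sanity check: the Arn should end in user/oc-${INFRA_AWS_ENV}-automation
aws-vault exec "${AUTOMATION_PROFILE}" --no-session -- aws sts get-caller-identity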

A region must be configured for some AWS CLI commands, so add the region to the credentials profile in ~/.aws/config:

[profile oc-${INFRA_AWS_ENV}-automation]
region=${INFRA_AWS_REGION}
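
Note that ~/.aws/config does not expand shell variables itself, so the literal values must be written; a heredoc (which expands the variables before writing) keeps this scriptable:

INFRA_AWS_ENV="sandbox"
INFRA_AWS_REGION="us-west-2"

cat >> ~/.aws/config <<EOF

[profile oc-${INFRA_AWS_ENV}-automation]
region=${INFRA_AWS_REGION}
EOF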

On GitHub Actions, the access and secret key should be configured in GitHub Environments under the repo's Settings.
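
This can be done through the web UI, or with the GitHub CLI's gh secret set; a sketch, where the secret names and environment name are assumptions (match whatever the workflows expect):

INFRA_AWS_ENV="sandbox"

gh secret set AWS_ACCESS_KEY_ID --env ${INFRA_AWS_ENV} --repo alliedtelesis-labs-nz/oc-infrastructure
gh secret set AWS_SECRET_ACCESS_KEY --env ${INFRA_AWS_ENV} --repo alliedtelesis-labs-nz/oc-infrastructure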

Create SOPS key

Some background information on sops is in the sops guide.

We will need to add new env-${INFRA_AWS_ENV}-tfstate.tfvars and env-${INFRA_AWS_ENV}.tfvars files for our new environment. The information in these files will depend on the deployment's goals. However, we must re-use the state bucket we have just created, terraform-state-${INFRA_AWS_ACCOUNT_ID}, in env-${INFRA_AWS_ENV}-tfstate.tfvars.

Once these env-specific files have been created, we can provision the SOPS keys. Note: these env-specific files will need to be created for each of the pipeline steps that run Terraform (bootstrap/sops, apps/one-connect, bootstrap/provision-k8s).

INFRA_AWS_ENV="sandbox"
PROFILE_NAME="oc-${INFRA_AWS_ENV}-automation"

PROJECT_APP_NAME="oneconnect"

INFRA_AWS_REGION="us-west-2"
INFRA_AWS_ACCOUNT_ID=`aws-vault exec ${PROFILE_NAME} --no-session -- aws sts get-caller-identity | jq '.Account' | tr -d '"'`

cd ~/oc-infrastructure/infrastructure/bootstrap/sops

cat > ~/oc-infrastructure/infrastructure/bootstrap/sops/vars/env-${INFRA_AWS_ENV}-tfstate.tfvars <<EOF
key    = "${PROJECT_APP_NAME}-sops"
region = "${INFRA_AWS_REGION}"
bucket = "terraform-state-${INFRA_AWS_ACCOUNT_ID}"
EOF

cat > ~/oc-infrastructure/infrastructure/bootstrap/sops/vars/env-${INFRA_AWS_ENV}.tfvars <<EOF
env    = "${INFRA_AWS_ENV}"
region = "${INFRA_AWS_REGION}"
EOF

# Initialize state
aws-vault exec ${PROFILE_NAME} --no-session -- terraform init -backend-config=vars/env-${INFRA_AWS_ENV}-tfstate.tfvars

# Plan infrastructure
aws-vault exec ${PROFILE_NAME} --no-session -- terraform plan -var-file=vars/common.tfvars -var-file=vars/env-${INFRA_AWS_ENV}.tfvars

# Apply infrastructure
aws-vault exec ${PROFILE_NAME} --no-session -- terraform apply -var-file=vars/common.tfvars -var-file=vars/env-${INFRA_AWS_ENV}.tfvars

# Destroy infrastructure
# aws-vault exec ${PROFILE_NAME} --no-session -- terraform destroy -var-file=vars/common.tfvars -var-file=vars/env-${INFRA_AWS_ENV}.tfvars

The apply finishes with outputs like:

Outputs:
region = "us-west-2"
sops_kms_arn = "arn:aws:kms:us-west-2:123456789012:key/b0f7a8e3-571e-4a9a-91f8-5ed16138a02a"
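
The sops_kms_arn output is what encrypts the environment's secrets. A minimal sketch of using it to create or edit an encrypted secrets file (the ARN and file path here are illustrative):

SOPS_KMS_ARN="arn:aws:kms:us-west-2:123456789012:key/b0f7a8e3-571e-4a9a-91f8-5ed16138a02a" # from terraform output

# Opens $EDITOR; the file is encrypted with the KMS key on save.
aws-vault exec ${PROFILE_NAME} --no-session -- sops --kms "${SOPS_KMS_ARN}" vars/secrets-${INFRA_AWS_ENV}.yaml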

Create ECR mirror

Every environment in OneConnect requires its own ECR mirror for application Docker containers.

INFRA_AWS_ENV="sandbox"
PROFILE_NAME="oc-${INFRA_AWS_ENV}-automation"

INFRA_AWS_REGION="us-west-2"
INFRA_AWS_ACCOUNT_ID=`aws-vault exec ${PROFILE_NAME} --no-session -- aws sts get-caller-identity | jq '.Account' | tr -d '"'`

cd ~/oc-infrastructure/infrastructure/bootstrap/ecr-mirror/

cat > ~/oc-infrastructure/infrastructure/bootstrap/ecr-mirror/vars/env-${INFRA_AWS_ENV}-tfstate.tfvars <<EOF
key    = "atlnz-harbor-mirror"
region = "${INFRA_AWS_REGION}"
bucket = "terraform-state-${INFRA_AWS_ACCOUNT_ID}"
EOF

cat > ~/oc-infrastructure/infrastructure/bootstrap/ecr-mirror/vars/env-${INFRA_AWS_ENV}.tfvars <<EOF
# Global
region    = "${INFRA_AWS_REGION}"
env       = "${INFRA_AWS_ENV}"
app_owner = "one-connect-infra-alerts@alliedtelesis.co.nz"

# ECR Repository Configuration
ecr_repositories = [
  "portal",
  "feature-user-backend",
  "feature-flag-backend",
  "feature-asset-management-backend",
  "feature-organisation-backend",
  "feature-device-gui-backend",
  "partner-conference-portal",
]

# ECR Settings
ecr_image_tag_mutability     = "MUTABLE"
ecr_scan_on_push             = false
ecr_lifecycle_policy_enabled = true
ecr_max_image_count          = 100

# Harbor IAM User Configuration
harbor_iam_user_name = "harbor-replication-${INFRA_AWS_ENV}"
EOF

# Initialize state
aws-vault exec ${PROFILE_NAME} --no-session -- terraform init -backend-config=vars/env-${INFRA_AWS_ENV}-tfstate.tfvars

# Plan infrastructure
aws-vault exec ${PROFILE_NAME} --no-session -- terraform plan -var-file=vars/common.tfvars -var-file=vars/env-${INFRA_AWS_ENV}.tfvars

# Apply infrastructure
aws-vault exec ${PROFILE_NAME} --no-session -- terraform apply -var-file=vars/common.tfvars -var-file=vars/env-${INFRA_AWS_ENV}.tfvars

# Destroy infrastructure
# aws-vault exec ${PROFILE_NAME} --no-session -- terraform destroy -var-file=vars/common.tfvars -var-file=vars/env-${INFRA_AWS_ENV}.tfvars

The ECR mirror will create ECR repos for OneConnect applications, as well as an IAM user with the right permissions to push Docker images to these mirrors.
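
To confirm the repos exist after the apply, list them (the names should match the ecr_repositories list above):

aws-vault exec ${PROFILE_NAME} --no-session -- aws ecr describe-repositories --region ${INFRA_AWS_REGION} --query 'repositories[].repositoryName' --output table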

Once the Terraform apply is complete, generate an access key pair for the harbor-replication-${INFRA_AWS_ENV} IAM user. With these credentials, configure a Registry and a Replication job in Harbor.
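
The access key pair can be generated in the console as before, or with a command along these lines:

aws-vault exec ${PROFILE_NAME} --no-session -- aws iam create-access-key --user-name harbor-replication-${INFRA_AWS_ENV}
# Copy AccessKeyId and SecretAccessKey into the Harbor registry endpoint configuration.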

Provision OneConnect infrastructure

Deploy OneConnect Infrastructure

  • cd ~/oc-infrastructure/infrastructure/apps/one-connect
  • Update env and secrets file name in env-${INFRA_AWS_ENV}.tfvars.
  • Update key name in env-${INFRA_AWS_ENV}-tfstate.tfvars.
  • Update secrets-${INFRA_AWS_ENV}.yaml with the new SOPS key. (SOPS Instructions; a re-encryption sketch follows this list.)
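
If secrets-${INFRA_AWS_ENV}.yaml already exists but was encrypted for another key, one way to re-encrypt it (assuming the repo's .sops.yaml creation rules have been updated with the new KMS ARN) is sops updatekeys:

INFRA_AWS_ENV="sandbox"
PROFILE_NAME="oc-${INFRA_AWS_ENV}-automation"

aws-vault exec ${PROFILE_NAME} --no-session -- sops updatekeys vars/secrets-${INFRA_AWS_ENV}.yaml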

Create infrastructure in Terraform:

INFRA_AWS_ENV="sandbox"
PROFILE_NAME="oc-${INFRA_AWS_ENV}-automation"

PROJECT_APP_NAME="oneconnect"

INFRA_AWS_REGION="us-west-2"
INFRA_AWS_ACCOUNT_ID=`aws-vault exec ${PROFILE_NAME} --no-session -- aws sts get-caller-identity | jq '.Account' | tr -d '"'`

cd ~/oc-infrastructure/infrastructure/apps/one-connect

cat > ~/oc-infrastructure/infrastructure/apps/one-connect/vars/env-${INFRA_AWS_ENV}-tfstate.tfvars <<EOF
key    = "${PROJECT_APP_NAME}-${INFRA_AWS_ENV}"
region = "${INFRA_AWS_REGION}"
bucket = "terraform-state-${INFRA_AWS_ACCOUNT_ID}"
EOF

cat > ~/oc-infrastructure/infrastructure/apps/one-connect/vars/env-${INFRA_AWS_ENV}.tfvars <<EOF
# Global
region                        = "${INFRA_AWS_REGION}"
env                           = "${INFRA_AWS_ENV}"
app_owner                     = "one-connect-infra-alerts@alliedtelesis.co.nz" # Contact person for this infrastructure
secrets-file                  = "vars/secrets-${INFRA_AWS_ENV}.yaml"
sops_key_alias                = "alias/oc-${INFRA_AWS_REGION}-${INFRA_AWS_ENV}-sops"
devops_automation_policy_name = "oc-${INFRA_AWS_ENV}-automation"

# Networking
region_az_count            = 3
vpc_cidr                   = "10.20.0.0/16"
vpc_create_db_subnet_group = false
vpc_has_nat_gateway        = true
vpc_private_subnets        = ["10.20.0.0/18", "10.20.128.0/19", "10.20.160.0/19"]
vpc_public_subnets         = ["10.20.64.0/18", "10.20.192.0/19", "10.20.224.0/19"]
vpc_single_nat_gateway     = true
vpc_disable_observability  = true

# Bastion
bastion_instance_type     = "t3.medium"
bastion_disk_size         = 20
delete_ebs_on_termination = true

# EKS
eks_desired_nodes       = 1
eks_max_nodes           = 5
eks_min_nodes           = 1
eks_node_instance_types = ["t3.medium"]
kubernetes_version      = 1.34
endpoint_public_access  = false
eks_volume_size         = 20
EOF

# Initialize state
aws-vault exec ${PROFILE_NAME} --no-session -- terraform init -backend-config=vars/env-${INFRA_AWS_ENV}-tfstate.tfvars

# Plan infrastructure
aws-vault exec ${PROFILE_NAME} --no-session -- terraform plan -var-file=vars/common.tfvars -var-file=vars/env-${INFRA_AWS_ENV}.tfvars

# Apply infrastructure
aws-vault exec ${PROFILE_NAME} --no-session -- terraform apply -var-file=vars/common.tfvars -var-file=vars/env-${INFRA_AWS_ENV}.tfvars

Add Nameserver records to main domain

The above step will create a hosted zone for ${INFRA_AWS_ENV}.alliedtelesistest.com. The NS records for this hosted zone will need to be added to alliedtelesistest.com in the dev account.

INFRA_AWS_ENV="sandbox"
PROFILE_NAME="oc-${INFRA_AWS_ENV}-automation"

DEV_PROFILE_NAME="oc-dev-automation"

INFRA_PARENT_DOMAIN="alliedtelesistest.com"
INFRA_NEW_SUBDOMAIN="${INFRA_AWS_ENV}.${INFRA_PARENT_DOMAIN}"

# Get env hosted zone ID
hosted_zone_id=`aws-vault exec ${PROFILE_NAME} --no-session -- aws route53 list-hosted-zones | jq '.HostedZones | .[] | select(.Name == "'${INFRA_NEW_SUBDOMAIN}'.") | .Id' | tr -d '"' |  sed 's#^/hostedzone/##'`

# Fetch nameservers
nameservers=`aws-vault exec ${PROFILE_NAME} --no-session -- aws route53 get-hosted-zone --id $hosted_zone_id | jq '.DelegationSet.NameServers | join(" ")' | tr -d '"'`

nameservers_r53=""
for name in $nameservers; do nameservers_r53=$nameservers_r53'{"Value":"'$name'."},'; done
nameservers_r53=`echo $nameservers_r53 | sed 's/,$//'`

# Get parent hosted zone ID
parent_domain_zone_id=`aws-vault exec ${DEV_PROFILE_NAME} --no-session -- aws route53 list-hosted-zones | jq '.HostedZones | .[] | select(.Name == "'${INFRA_PARENT_DOMAIN}'.") | .Id' | tr -d '"' |  sed 's#^/hostedzone/##'`

# Update NS records
aws-vault exec ${DEV_PROFILE_NAME} --no-session -- aws route53 change-resource-record-sets \
  --hosted-zone-id $parent_domain_zone_id \
  --change-batch '{
      "Changes": [
          {
              "Action": "UPSERT",
              "ResourceRecordSet": {
                  "Name": "'${INFRA_NEW_SUBDOMAIN}'",
                  "Type": "NS",
                  "TTL": 172800,
                  "ResourceRecords": [
                      '$nameservers_r53'
                  ]
              }
          }
      ]
  }'

Provision EKS cluster

Since our EKS cluster is in a private subnet with private API access, provisioning the cluster itself must be done from a bastion host with private endpoint access to the cluster.

  • Update env and secrets file name in env-${INFRA_AWS_ENV}.tfvars.
  • Update key name in env-${INFRA_AWS_ENV}-tfstate.tfvars.
  • Update secrets-${INFRA_AWS_ENV}.yaml with the new SOPS key. (SOPS Instructions)

Get the SSH private key for the jump (also called bastion) server from 1Password (or create a new one, but documentation for that is lacking):

  1. In our 1Password, access the OC-Dev vault (talk to a team lead if you do not have access).
  2. Go to the "OneConnect EC2 Key Pair" item.
  3. Copy the "private key". This can be hard to do with 1Password's interface. One way is to click "reveal" in the drop-down, then inspect the HTML element and copy the private key from dev tools.
  4. Paste the "private key" into a file in your local machine's ~/.ssh/ folder, named jump.${INFRA_NEW_SUBDOMAIN}.
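
If you have the 1Password CLI (op) set up, fetching the key from the terminal avoids the copy/paste difficulties; a sketch assuming the vault, item, and field names above:

op item get "OneConnect EC2 Key Pair" --vault OC-Dev --fields "private key" > ~/.ssh/jump.${INFRA_NEW_SUBDOMAIN}
# Inspect the file afterwards; multi-line fields are sometimes wrapped in quotes that need removing.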

Restrict the private key file permissions:

chmod 600 ~/.ssh/jump.${INFRA_NEW_SUBDOMAIN}

For convenience, the ssh configuration for the jump server can be added to ~/.ssh/config:

INFRA_AWS_ENV="sandbox"
INFRA_PARENT_DOMAIN="alliedtelesistest.com"
INFRA_NEW_SUBDOMAIN="${INFRA_AWS_ENV}.${INFRA_PARENT_DOMAIN}"

cat >> ~/.ssh/config <<EOF

Host jump.${INFRA_NEW_SUBDOMAIN}
    HostName jump.${INFRA_NEW_SUBDOMAIN}
    User ubuntu
    IdentityFile /home/$USER/.ssh/jump.${INFRA_NEW_SUBDOMAIN}
    IdentitiesOnly=yes
EOF

Copy provisioning files over to new jump server:

INFRA_AWS_ENV="sandbox"
INFRA_PARENT_DOMAIN="alliedtelesistest.com"
INFRA_NEW_SUBDOMAIN="${INFRA_AWS_ENV}.${INFRA_PARENT_DOMAIN}"

cd ~/oc-infrastructure/infrastructure/bootstrap
rsync -chavzP --stats --exclude='.git' provision-k8s ubuntu@jump.${INFRA_NEW_SUBDOMAIN}:/home/ubuntu/

# //TODO fix the line below, it is missing a lot of information
# scp ~/oc-infrastructure/infrastructure/bootstrap/provision-k8s/vars/secrets-${INFRA_AWS_ENV}.yaml ubuntu@jump.${INFRA_NEW_SUBDOMAIN}:/home/ubuntu/provision-k8s/vars/secrets-${INFRA_AWS_ENV}.yaml

Log in to jump server:

ssh ubuntu@jump.${INFRA_NEW_SUBDOMAIN}

Then, on the jump server:

INFRA_AWS_ENV="sandbox"
INFRA_AWS_REGION="us-west-2"

# The jump server uses its own AWS credentials (instance role), so aws-vault is not needed here.
INFRA_AWS_ACCOUNT_ID=`aws sts get-caller-identity | jq '.Account' | tr -d '"'`

aws eks update-kubeconfig --region ${INFRA_AWS_REGION} --name oc-${INFRA_AWS_ENV}-eks
cd /home/ubuntu/provision-k8s
sudo tfenv use 1.7.0

# Create the `vars/env-${INFRA_AWS_ENV}.tfvars` and `vars/env-${INFRA_AWS_ENV}-tfstate.tfvars` files
cat > /home/ubuntu/provision-k8s/vars/env-${INFRA_AWS_ENV}-tfstate.tfvars <<EOF
key    = "oc-provision-k8s-${INFRA_AWS_ENV}"
region = "${INFRA_AWS_REGION}"
bucket = "terraform-state-${INFRA_AWS_ACCOUNT_ID}"
EOF

cat > /home/ubuntu/provision-k8s/vars/env-${INFRA_AWS_ENV}.tfvars <<EOF
# Global
region                          = "${INFRA_AWS_REGION}"
env                             = "${INFRA_AWS_ENV}"
app_owner                       = "one-connect-infra-alerts@alliedtelesis.co.nz" # Contact person for this infrastructure
secrets-file                    = "vars/secrets-${INFRA_AWS_ENV}.yaml"
oc_eks_cluster_name             = "oc-${INFRA_AWS_ENV}-eks"
oc_eks_alb_role_name            = "oc-${INFRA_AWS_ENV}-eks-alb-controller-role"
oc_eks_external_dns_role_name   = "oc-${INFRA_AWS_ENV}-eks-external-dns-role"
oc_eks_alb_sg_name              = "oc-${INFRA_AWS_ENV}-eks-alb-controller-sg"
oc_eks_asg_autoscaler_role_name = "oc-${INFRA_AWS_ENV}-eks-autoscaler-role"
oc_hosted_zone                  = "${INFRA_AWS_ENV}.alliedtelesistest.com"
oc_acm_domain                   = "*.${INFRA_AWS_ENV}.alliedtelesistest.com"
argocd_ha_mode                  = false
argocd_host                     = "argo.${INFRA_AWS_ENV}.alliedtelesistest.com"
argocd_system_apps_path         = "deployments/oc-system/env-aws-${INFRA_AWS_ENV}"
argocd_oc_apps_path             = "deployments/oc-apps/env-aws-${INFRA_AWS_ENV}"
oc_host                         = "oneconnect.${INFRA_AWS_ENV}.alliedtelesistest.com"
EOF

# Initialize state
terraform init -backend-config=vars/env-${INFRA_AWS_ENV}-tfstate.tfvars

# Plan infrastructure
terraform plan -var-file=vars/common.tfvars -var-file=vars/env-${INFRA_AWS_ENV}.tfvars

# Apply infrastructure
terraform apply -var-file=vars/common.tfvars -var-file=vars/env-${INFRA_AWS_ENV}.tfvars

# Destroy infrastructure
# terraform destroy -var-file=vars/common.tfvars -var-file=vars/env-${INFRA_AWS_ENV}.tfvars
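
Once the apply completes, a quick sanity check from the jump server (the kubeconfig was already set up by aws eks update-kubeconfig above):

kubectl get nodes    # should list the EKS worker node(s) as Ready
kubectl get pods -A  # system and ArgoCD pods should come up over the next few minutes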