Network Checklist
Before deploying today’s version of Nimbus, please acknowledge the following network checklist:
- The EKS cluster will be deployed behind a publicly accessible load balancer (not protected by a VPN)
- Workspaces will be created in the public subnets of the chosen VPCs, with port 22 (SSH) open to the public (protected by SSH keys)
- Any additional ports engineers open in a workspace will also be public facing
- For future deployment versions, Nimbus is fully extensible to adapt to any VPN and VPC/subnet setup
Deployment Steps
1. CloudFormation (15~20 min)
Note: the following steps require the AWS policies listed here:
nimbus-deployment-policy.json
Applying the CloudFormation template below is straightforward, though AWS will take quite a long time to provision all the resources needed
nimbus.cfn.yaml
- Download the CloudFormation template above. Log in to your AWS Account. Navigate to CloudFormation > Stacks > Create stack > With new resources
- Select “Template is ready” > “Upload a template file”, and upload the template you downloaded
- Input a stack name
- Input pClusterName, pDBPassword, and pDomainName
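If you prefer the CLI to the console, the same stack can be created with the AWS CLI. This is a sketch, not the tested flow; the stack name, password, and domain below are placeholder assumptions to replace with your own values:

```shell
# Sketch: create the Nimbus stack from the CLI instead of the console.
# All values below are placeholders -- substitute your own.
STACK_NAME="nimbus"
CLUSTER_NAME="nimbus"             # pClusterName
DB_PASSWORD="change-me"           # pDBPassword
DOMAIN_NAME="nimbus.company.dev"  # pDomainName

# The command is echoed for review; remove the leading 'echo' to run it.
# CAPABILITY_NAMED_IAM is required because the template creates IAM roles.
echo aws cloudformation create-stack \
  --stack-name "$STACK_NAME" \
  --template-body file://nimbus.cfn.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameters \
    ParameterKey=pClusterName,ParameterValue="$CLUSTER_NAME" \
    ParameterKey=pDBPassword,ParameterValue="$DB_PASSWORD" \
    ParameterKey=pDomainName,ParameterValue="$DOMAIN_NAME"
```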
What Will Be Created?
- A new VPC with public and private subnets
- A Route53 HostedZone, for example nimbus.company.dev; the web app will be accessible via nimbus.company.dev, and the workspace will be accessible via abcde.company.dropbox.dev
- A new EKS cluster
- Two IAM Roles
- One for the EKS cluster, which contains a fairly standard policy
- The other one is for the EKS node group, which contains all the permissions that Nimbus needs to operate (policy nimbus), and permissions to create a load balancer via the helm chart (policy eks, which we will cover later)
How do I know everything went well?
CloudFormation takes about 15 minutes to finish. Once it completes, you will see a screen like this:
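You can also poll the stack status from the CLI instead of watching the console; a small sketch (the stack name is a placeholder):

```shell
# Sketch: check the stack status from the CLI.
STACK_NAME="nimbus"  # placeholder -- use your own stack name
# Echoed for review; remove the leading 'echo' to run it.
# Expect "CREATE_COMPLETE" once CloudFormation is done.
echo aws cloudformation describe-stacks \
  --stack-name "$STACK_NAME" \
  --query "Stacks[0].StackStatus" \
  --output text
```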
2. Domain Certificate
Eventually, we’d love engineers to access the Nimbus app via an HTTPS URL in the browser, so in this step we will create an HTTPS certificate for our Webapp application
AWS Documentation: https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html
- In the AWS console, navigate to Certificate Manager
- Click on the Request button
- Select “Request a public certificate”
- “Fully qualified domain name” ⇒ the pDomainName from the CloudFormation step
- “Select validation method” ⇒ DNS
- “Tags” ⇒ Input any tags you use to track
- Click on the Request button
- On the certificate details page, find the “Domains” section. Once the “CNAME name” and “CNAME value” become non-empty, click on “Create records in Route53” > Create records
- While the status is pending validation, you can move to the next step
Why not include the certificate creation in CloudFormation? It’s definitely feasible. However, the domain validation was a bit error-prone when we tested it on our end: if the validation got stuck or failed, the whole CloudFormation stack would be rolled back, and with a few failures in between it could take a very long time just waiting for the resources to be ready
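For reference, the same certificate request can also be made from the AWS CLI (the console flow above is what we tested; the domain below is a placeholder):

```shell
# Sketch: request a DNS-validated public certificate from the CLI.
DOMAIN_NAME="nimbus.company.dev"  # placeholder -- your pDomainName
# Echoed for review; remove the leading 'echo' to run. The first command
# returns a CertificateArn; the second shows the CNAME record you need to
# create in Route53 for DNS validation.
echo aws acm request-certificate \
  --domain-name "$DOMAIN_NAME" \
  --validation-method DNS
echo aws acm describe-certificate \
  --certificate-arn "<certificate-arn-from-above>" \
  --query "Certificate.DomainValidationOptions[0].ResourceRecord"
```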
3. EKS
Once the CloudFormation stack creation is complete, we can move to set up the cluster.
- In your terminal (with the correct AWS credentials), run
aws eks update-kubeconfig --region {region} --name {cluster-name}
The cluster-name can be found in the AWS console, or it is {pClusterName}-eks, where pClusterName is the input you used in the last step. This will update the kubeconfig on your machine so that you have access to the cluster (as the creator);
- Make sure you have helm installed, otherwise, install helm following https://helm.sh/. Once helm is installed, run
helm repo add eks https://aws.github.io/eks-charts
helm repo add nimbus http://helm.usenimbus.com/
helm upgrade --install nimbus nimbus/nimbus -n nimbus --create-namespace --set Host=<domain> --set aws-load-balancer-controller.clusterName=<cluster-name> --set ingress.aws.enabled=true
The cluster-name is the same one as above, and domain is the pDomainName that you put in as a parameter at the CloudFormation step. This will create an application load balancer for the EKS cluster, and install the helm charts for Nimbus.
- Go to Route 53 > Hosted zones > {your domain} > Create record
  - Record name ⇒ Leave empty
  - Record type ⇒ “A”
  - Alias ⇒ Toggle on
  - Route traffic to ⇒ “Alias to Application and Classic Load Balancer”
  - Choose Region ⇒ Your region
  - Choose load balancer ⇒ The load balancer created in the last step
  To find the load balancer: EC2 > Load Balancing > Load Balancers > find the one created recently, with the tag “elbv2.k8s.aws/cluster”: “{cluster_name}”
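If the console search is fiddly, a rough CLI sketch for finding the load balancer by its tag (the cluster name and ARN below are placeholders):

```shell
# Sketch: list load balancers, then inspect tags to find the one the
# controller created (tagged "elbv2.k8s.aws/cluster" = your cluster name).
CLUSTER_NAME="nimbus-eks"  # placeholder -- {pClusterName}-eks
# Echoed for review; remove the leading 'echo' to run.
echo aws elbv2 describe-load-balancers \
  --query "LoadBalancers[].[LoadBalancerName,DNSName,LoadBalancerArn]" \
  --output table
# Check each candidate ARN for the cluster tag:
echo aws elbv2 describe-tags \
  --resource-arns "<load-balancer-arn-from-above>"
```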
4. Create Secrets and DB Schema Migration
We will walk through this part in the live deployment session
kubectl -n nimbus get pods
kubectl exec -it nimbus-alpine-56f5f9cd64-rvrmv -n nimbus -- ./db_migrate
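The pod hash in the exec command above changes on every rollout, so don’t copy it literally; a small sketch for looking the pod name up at run time (the nimbus-alpine prefix matches the example above, but verify it in your cluster):

```shell
# Sketch: find the migration pod by name prefix instead of hard-coding the hash.
NAMESPACE="nimbus"
PREFIX="nimbus-alpine"  # verify with: kubectl -n nimbus get pods
# Echoed for review; remove the leading 'echo' to run.
echo kubectl -n "$NAMESPACE" get pods -o name
# Against a live cluster, the lookup and migration would be:
#   POD=$(kubectl -n "$NAMESPACE" get pods -o name | grep "$PREFIX" | head -n 1)
#   kubectl -n "$NAMESPACE" exec -it "$POD" -- ./db_migrate
```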
5. User Authentication
As we mentioned in the last meeting, at the moment we will use Auth0 to authenticate user logins. We have created an Auth0 tenant and configured it in the last step, so no action is needed for this part
6. Connect Hosted Zone Name Servers with Your Domain Provider
If your team is using a domain provider other than Route53 (such as Cloudflare), you will need to create NS records in your domain provider to make the Route53 hosted zone resolvable. After this is done, the deployment is finished! Congrats and welcome to the Self-Hosted Nimbus!
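To confirm the delegation worked, you can query the NS records directly (the domain below is a placeholder):

```shell
# Sketch: verify that the NS delegation resolves.
DOMAIN="nimbus.company.dev"  # placeholder -- your pDomainName
# Echoed for review; remove the leading 'echo' to run. The answer should
# list the name servers from your Route53 hosted zone (typically four
# awsdns-* hosts).
echo dig +short NS "$DOMAIN"
```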