
Project Reclass Infrastructure

Welcome to the Project Reclass infrastructure documentation! This is the go-to resource for all things related to Reclass infrastructure.

Sections: Cloud Basics, Docker Basics, Github, Golang, HashiCorp Vault, Linux Basics, Terraform Basics

Cloud Basics

The Basics of getting started with AWS

What is the cloud?

There are two types of clouds: those made mostly of water, and those made mostly of Linux servers. We'll be using the latter. In general, "the cloud" is just a physical server hosted, paid for, and managed by another entity. Virtualization is used to split the resources of a physical server for maximum use, and allows multiple entities to build their products on the same physical devices.

Giants such as Google, Microsoft, and Amazon have massive data centers which allow them to promise a higher average uptime than most smaller companies could achieve themselves, and let even individuals start utilizing their resources for little to no cost.

Why the cloud?

The cloud has little to no upfront costs, automates a lot of otherwise difficult work, and offers an all-in-one approach. This makes it easy to pivot to new technology, or simply pay for the services or management of something your team may not be prepared to implement themselves.

I firmly believe the cloud is the future and that most organizations should move to a hybrid-cloud configuration. The cloud offers more efficiency, reliability, and optimization for a comparable price.

How do I get started with AWS?

You can create an account and start creating virtual machines by following the official AWS docs.

How to use the aws cli?

The awscli is one of the most powerful tools that AWS has to offer, with many options that may be unavailable or non-existent within the graphical environment. After you've created an account you can create a user and assign it the permissions you need to perform operations.

After which you'll need to create and grab the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. You will add these credentials to your machine by running aws configure:

You'll also have the option to enter the region and default preferred output.
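For reference, aws configure stores what you enter in two plain-text files in your home directory. A sketch of the result, using AWS's documented example key values rather than real credentials:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[default]
region = us-east-2
output = json
```

Both the awscli and Terraform read credentials from these files, which is why running aws configure once is enough for both tools.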

Frequently Asked Questions

Common questions about the Reclass infrastructure:

What does the Reclass Infrastructure look like?

The infrastructure for Project Reclass is hosted on AWS. It hosts the main website, projectreclass.org, as well as the main product at toynet.projectreclass.org

All of the instances, networking, and services are built out with Terraform. The only resources not built via Terraform are the Route53 DNS entries, the majority of which must be managed manually.

While the main website is a simple Wordpress site hosted on an EC2 instance and named with an A record on Route53, the toynet infrastructure is a bit more complex. This diagram presents a graphical representation of the toynet infrastructure.

The above is a basic representation of the toynet infrastructure; it does not include smaller details such as subnets, availability zones, how access is managed, etc.

What is toynet built on?

Toynet is built on top of the mininet technology. The toynet application lives in a Docker container and is deployed to an EC2 instance via docker-compose.
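For illustration only, a compose file for a setup like this might look as follows. The service name, image, and port mapping here are assumptions, not the real toynet configuration (which lives in the infrastructure repo):

```yaml
# docker-compose.yml -- illustrative sketch, not the actual toynet file
version: "3"
services:
  toynet:
    image: projectreclass/toynet:latest   # hypothetical image name
    ports:
      - "80:80"
    restart: unless-stopped
```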

How can I push my Terraform code to AWS?

In order to create resources on AWS you will need a valid access key on your machine. Terraform will automatically read these keys for you as long as you enter them when you run aws configure

The above illustrates the defaults for aws configure: our default region is Ohio (us-east-2) and our default output format is json.

Additionally, you'll see the last 4 characters correspond to an AWS Access Key ID and an AWS Secret Access Key; yours will be different.

How do I obtain an AWS Access Key?

In order to obtain an AWS access key you'll need an AWS account; access keys can be managed in the IAM section of AWS. Ensure you save the file or store your access keys in a safe place, as they will only be viewable once.

Need help logging into a server?

SSH credentials are managed via AWS OpsWorks. This allows each user to specify their public key, and utilize their own private key to access each server in a stack!

Ensure the server you're attempting to access is in the stack, your user has the public key uploaded, and that the target operating system is a supported distro.

Further, ensure your user has proper Instance Access

Accessing these docs

This is the official documentation page for Project Reclass. These docs can be accessed via docs.projectreclass.org or via the GitBook Slack integration. First ensure that the desired space has the integration enabled; this is in the documents section for each space and correlates to the overall space, such as infrastructure or the home page.

Once the space has been enabled and added to a primary channel (this is for GitBook admin alerts) you can access these docs from any channel in the Slack organization.

To do this simply run /gitbook [search query]. This will send a message to everyone in the channel with the results. You can also do this in private messages, even with yourself.

[Image: Basic Representation of Toynet Infra]
[Image: Ensure User has Proper Instance access]
[Image: Enable Slack Integration for the space]
[Image: Results of a `/gitbook` search]

How to migrate to a new AWS Account

This guide will cover the steps I took to migrate from one AWS account to another, specifically for Project Reclass infrastructure.

Migrating the projectreclass.org domain

Beginning April 2021, AWS no longer allows domain transfers between AWS accounts via the GUI, so the AWS CLI must be utilized. Utilize this guide to install the AWS CLI.

Configure access to AWS. Ensure the permissions allow you to make changes to Route53.

Initiate the domain transfer:

Replace the above variables with the domain you wish to transfer (should be owned by your original AWS account) and the account ID of the new account (the account you want to take ownership of the domain)

The above should output something similar to the following:

{
    "OperationId": "string",
    "Password": "string"
}

First you'll need to run aws configure again in order to switch to the accepting account.

Next you must accept the transfer:

Your password may include a quote character; it is recommended to wrap the password string in single quotes to properly pass it to AWS.

After the domain transfer has been accepted, you'll need to create a new hosted zone or import the old hosted zone into the new account.

It is important to note that transferring a domain does not break any existing DNS records; this is because the nameservers are still owned and operated by the original hosted zone, and by extension Route53. AWS explicitly tells us that the domain and the DNS records do not need to be owned by the same account for routing to occur. While importing the hosted zone is likely the easiest way to migrate the domain, you may want to migrate and create records at your own pace. If this is the case, you'll need to create a new hosted zone for the domain, and remember to update the NS records for the domain under Route53 -> Domains -> ${DOMAIN-NAME} in the graphical environment.

Migrating the projectreclass.org website

In order to migrate the website in its current state, create an AMI of the EC2 instance on which it is hosted, and make this AMI private.

To create the AMI, go to the EC2 list, select the server you'd like to make an image from, and go to Actions -> Images and templates -> Create image.

Fill out the AMI image name and description, enable "no reboot" if uptime is a must, and then click "Create image":

You'll be able to find a list of all AMI images in the sidebar under Images -> AMI.

Finally, select your AMI, go to the permissions, and allow access to the new AWS account by entering the new account ID. Then you'll be able to access and launch the image. It'll be under the same sidebar Images -> AMI, under the "Private" filter.

If your image requires a subscription (in our case bitnami/wordpress), it must be accepted before the image can be launched.

Once the domain and image have been properly transferred, add an A record pointing the domain (projectreclass.org -> ${ip.ip.ip.ip}) to the new IP address of the newly launched server.

It is recommended to first attach an Elastic IP (with reassociation enabled) to the server and point the domain at that. This will allow you to quickly reassociate the IP without having to change records in the event you want to change the endpoint of projectreclass.org. Refer to this guide to learn how to attach an EIP.
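As a rough sketch of what reassociation looks like from the CLI (the allocation ID and instance ID below are placeholders; the --allow-reassociation flag lets the EIP move even if it is currently associated elsewhere):

```
$ aws ec2 associate-address \
    --allocation-id eipalloc-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --allow-reassociation
```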

Migrating the Toynet Infrastructure

Assuming everything has gone well thus far, migrating toynet should be simple. Toynet is deployed automatically utilizing Terraform code.

Ensure your aws configure is set to the new account

Terraform code will also create the DNS records

It is advised that you check the cost of this infrastructure prior to terraform apply by running: terraform plan -out=plan.tfplan && terraform show -json plan.tfplan > plan.json

And uploading the plan.json to the terraform cost estimator.

$ aws route53domains transfer-domain-to-another-aws-account \
    --domain-name projectreclass.org \
    --account-id ${YOUR-NEW-ACCOUNT-ID}
# You'll need the password from the output above to accept the domain transfer.

$ aws route53domains accept-domain-transfer-from-another-aws-account \
    --domain-name projectreclass.org \
    --password ${password}

$ git clone https://github.com/Project-Reclass/infrastructure.git
$ cd ./infrastructure/terraform/toynet/production
$ terraform init
$ terraform apply

[Image: Creating the AMI image from an existing server]
[Image: Configuring details for the AMI]
[Image: Where to find your new AMI]
[Image: Allowing another account to access your private AMI]

AWS OpsWorks

How to configure AWS OpsWorks

Ensure any instance you create is on the approved list for AWS OpsWorks

Register an Instance

Ensure your IAM user has the proper policy configuration to register an instance. This can be a preexisting instance. I performed the registration via the command line, so I needed the permission AWSOpsWorksRegisterCLI_EC2.

Next, I ran the registration command:

When you register an instance through the AWS OpsWorks console, a command will be created for you that will require some editing.

I actually removed the use-instance-profile option to allow AWS OpsWorks to create a new user for registration.

Once your instance is registered ensure your IAM user has been uploaded to the stack in the user section:

From this page you can also edit your user, change the permissions to allow SSH access, as well as sudo permissions if necessary. Finally, you can upload your own public key to this user!

You can change your public key as you wish; this makes it easy to quickly regain access and create new keys if necessary.

Administrators should ensure proper offboarding, including deleting users who no longer require access to AWS OpsWorks. Access keys should also have lifetimes, and offboarded users should have their access keys and accounts deleted.

Using an Instance Profile

If you want to register your instance with the use-instance-profile argument in the command above, you must create an instance profile as an IAM role and assign it to both the instance you are registering and the stack. (You can add it to a stack that already exists by editing its settings, or include this information in your new stack under "Default IAM Instance Profile".)

Ensure that the IAM role has permission to register an instance in OpsWorks, and is the same for the instance and the stack. AWS OpsWorks will use this profile for registration instead of creating a new one.

Running your first shell script

We'll break down the components of our first script. If you do not have an editor, please refer to the Getting started with vim guide below.

Our first script

First open your favorite text editor and create a file named hello.sh

Save the file and exit your text editor.

Terraform Basics

This will be a brief overview on how to create resources in AWS with Terraform

Set Access Credentials for AWS

Configure your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with the following: aws configure

Intro to Docker

What is docker? For that matter what is a container? Find out here.

What is Docker?

Docker is a container runtime, ultimately running on containerd. Docker packages it into a more user-friendly suite with both a graphical interface and a command line interface. Although it has many challengers and potential usurpers, Docker remains the go-to tool for containers.

What is a command?

Basic description of command structures

What is a command?

A command is a tool in the terminal that typically accepts options and arguments. The syntax is generally command [options] (arguments). Many commands can be run with or without arguments, and most can be run without options. In this instance we'll utilize ls as our example.

ls 
# without any arguments this lists all files in the current directory
# output will look like the following: 
directory1 directory2 file1 file2 

# ls can be run with options one example is the all option
# all can be run as such 

`ls --all` OR `ls -a`
# Options typically have long and short forms. The short form is called 
# with a single `-` while the long form is called with a double `--`
# without an argument this will still list items in the current directory
# however this option will also show hidden files
# the output will be similar to the following: 
.bashrc .hiddendirectory .hiddenfile .ssh directory1 directory2 file1 file2

# you can also call ls with both options and an argument; let's look
# into our directory1 directory

ls -a directory1
# this is an empty directory so the only results will be the directory
# itself `.` and the reference to the directory before it `..`
./ ../ 

ls lists files and directories within the current directory. It has many options and is a good practice command, as it won't break anything. If you want to list the contents of a directory that you are not in, use an argument that refers to the desired directory. For example, from the /home/theo directory I can list the contents of Desktop by running ls Desktop/

To find out more about this command or any other, utilize man. From your terminal run man ls

Refer to the notes on how to access AWS for getting an Access Key and Secret.

This example Terraform code should give you an idea of how to create resources; in this case we'll be making an EC2 instance.

main.tf is the main file Terraform runs by default. Create the main.tf file and fill it with your Terraform code. Afterwards run terraform init followed by terraform apply to create your resources.

aws configure
main.tf
# Set your provider and region, in this case AWS and Ohio
provider "aws" {
    region = "us-east-2"
}

resource "aws_instance" "my-first-instance" { # resource type first, then a custom name
    ami           = "ami-123gd4df5678a" # the image ID tells AWS what to build; use a real AMI ID
    instance_type = "t2.nano" # this is the size of the VM. This size is free tier
    key_name      = "My-first-key" # this tells AWS which key to use, it must already exist
}

# Aligning the equals signs in a resource is a readability convention;
# `terraform fmt` will do it for you, and misalignment will not break the code.

# Save and exit this file and run `terraform init` followed by `terraform apply`
    
What is a container?

An oversimplification is that a container can be thought of as a much smaller VM. In general, VMs require virtualization at the hardware level and host an entire OS complete with its own kernel. A container, on the other hand, shares resources with its host, leverages user space a bit better, and can often run only what is necessary for the application. Ultimately, a container is more portable and consistent than a VM, and helps to eliminate the "it worked on my machine" issue.

How do I learn to use Docker containers?

First you'll need to install Docker. To ensure Docker has installed properly, you can run your first container with:

This will take you through the official guide for Docker, and truthfully it's a great tool! It should be accessible via localhost in your browser.

docker run -d -p 80:80 docker/getting-started
What is a command?
In order to run scripts they need to be executable; this means the permissions of the file need to be changed. Every file can have read, write, and execute permissions. By default, files are not executable on creation.

The following commands can be copied and pasted, but will only work if you are in the directory in which the hello.sh script exists. Utilize the ls command to check if the script is in your current directory and the cd command to change to a new one.

We'll update our file permission to make the file executable:

Once the script has been made executable it can be run

Expected Output

If you do not receive the expected output, double check the directory and the permissions of the file; the permissions for the file should be as follows.

[Image: Permissions of hello.sh script]

What we're looking for in this case is the x on the left side of the permissions. For more info on permissions refer to the permissions page below.

hello.sh
#!/bin/bash
# Every bash script needs to start with a shebang line like the one above;
# it tells the OS what shell to use
echo "Hello World" # The echo command prints to stdout 
# Meaning it'll print whatever follows it to the terminal
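The permissions screenshot didn't survive this export, so here is a self-contained version of the whole flow you can run in a scratch directory; it recreates hello.sh, makes it executable, and runs it:

```shell
# create the script (same contents as above)
printf '#!/bin/bash\necho "Hello World"\n' > hello.sh

# add execute permission, then inspect it
chmod +x hello.sh
ls -l hello.sh    # look for the x bits, e.g. -rwxr-xr-x

# run it
./hello.sh        # prints: Hello World
```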
Getting started with vim
Understanding Linux Permissions
[Image: Import IAM User]

Hello, Project Reclass

How to create your first hello world program in Go!

Why Go?

Go is a powerful language that powers the infrastructure that Project Reclass runs on. Docker, Kubernetes, Terraform, Vault, and even our chatbots are written in Go. Having even a basic understanding of the language can assist in making important infrastructure decisions, and interacting with the tools we utilize. Go also has a very large, friendly and diverse community.

chmod +x hello.sh
# chmod stands for change mode the +x option adds execute permission
# and our argument is the script we created "hello.sh"
./hello.sh
# scripts are run by referring to the file 
# the `./` means from the current directory that is the starting point
# if the hello.sh script is in the current directory it will run
# the output of the script should be "Hello World"
$ aws opsworks register --use-instance-profile \
    --infrastructure-class ec2 \
    --region us-east-2 \
    --stack-id a08f26f4-4362-4f34-9d57-71492e210e43 \
    --ssh-username [username] \
    --ssh-private-key [private-key-path] \
    i-0aa4b421d6fe86cb8
Installation

First you'll need to install Golang; you can do so here. Go is also found in most package managers, so you can install it using the default manager for your system.

Follow the below instructions to properly finish setup on your Linux System

Package managers may handle the path setting for you. Check this with echo $PATH

Writing the Code

Create a file named main.go and add the following:

Save your code and run go run main.go

More Resources

There are several solid resources for Go, the docs page being a great go-to.

Other tools I like are gobyexample and Learn Go with Tests

Finally, check out the Go Playground and Play with Go

rm -rf /usr/local/go && tar -C /usr/local -xzf go1.16.5.linux-amd64.tar.gz
# Remove any previous versions of Go and unpack the new version

export PATH=$PATH:/usr/local/go/bin
# Add Go to your $PATH

go version
# You should be able to run this from anywhere and get the most recent version
// this is a comment

package main // You'll always need a package name; this is usually main

import (
    "fmt" // "fmt" is the format package and contains the Println func
)

func main() { // You'll need a main function to run your code
    fmt.Println("Hello, Project Reclass") //Println prints to stdout and formats
}

Setting up SSL with Bitnami

SSL can be configured with Bitnami. This is necessary to have secured traffic on the main website projectreclass.org

Whether you've created a new Wordpress site from scratch via the guide from our friend Hiro, who runs awsnewbies, or you've just made a new AMI from our existing projectreclass.org, SSL is a must. It is necessary to securely send data over HTTPS rather than HTTP; not only is it best practice, but it improves our visibility and our reputation.

If you are migrating or standing up a new image, and are utilizing an Elastic IP the certificate will have already been created and you won't need to recreate it.

Bitnami allows us to configure SSL utilizing Let's Encrypt with their built-in tool bncert-tool. Configuring SSL is as easy as running this tool.

Step 1: Open your terminal and SSH into your server

Ensure you're in the same directory as your "my-key.pem". Replace "my.server's.ip.address" with the server's public IP address.

Step 2: Run the bncert-tool

Access the tool by utilizing the absolute path

Step 3: Configure the root domain and the subdomain "www."

When prompted enter the following

Step 4: HTTP redirection to HTTPS

The next step will ask if you'd like to forward HTTP requests to HTTPS. The answer is yes: type y and press Enter.

Step 5: Domain vs Subdomain redirect

This next portion will ask if you'd like non-www to redirect to www and vice versa. You can only say yes to one of these options; the decision is yours. I like to redirect www requests to the non-www domain. This will result in the following behavior:

If a user types www.projectreclass.org the domain will redirect to projectreclass.org. If you'd like, you can enable the opposite behavior during this step.

Step 6: Just say Yes!

Bitnami will then tell you it needs to restart the server for the changes to take effect; it should only bring the site down for a minute or so. Once it's brought back up you'll see the SSL confirmation you're seeking. Bitnami will also ask for an email address to reference, in this case the address is: [email protected]

Super User

A brief description of the super user known as root

Who or what is a super user?

In Linux the super user, or administrator account, is called root. By default, when configuring a system, the default user will have the ability to become root.

In the Linux terminal the dollar sign $ demonstrates a regular user, whereas the hash # illustrates the root user.

Getting started with vim

Vim is an in-terminal text editor, the improved version of vi. One of these variants is often preinstalled on every Linux host.

How to use vim

Vim allows you to edit files; it'll also create any files that don't exist and open them for editing. Typing vim [filename] is the same as typing touch [filename] followed by vim [filename]: either way you open an empty file for editing. Let's start by creating and editing a file.

sudo ssh -i my-key.pem bitnami@my.server's.ip.address

The # also denotes a comment in shell scripting, so try not to be confused.

How do I become root?

There are multiple ways to become or imitate root.

You can become root by running sudo su - or sudo -i

However, because the root account has no limitations, critical issues can occur from simple typos. As such, it is preferred to imitate or impersonate root.

The preferred method is to utilize sudo before your commands to run them with root privileges as needed. Avoid becoming root whenever possible; instead do something like:

Why can't I run sudo?

If you're attempting to utilize sudo but are running into an error along the lines of user is not in the sudoers file. This incident will be reported, this indicates that your user is not in the proper group to have sudo permissions. Please note that you may alter the sudoers file utilizing visudo to add your user, but that is not advised.

Instead you'll need to become root by typing su - and entering root's password (not your user's password).

Once you have successfully authenticated and become root you'll need to add your user to the proper group:

sudo /opt/bitnami/bncert-tool
projectreclass.org www.projectreclass.org
$ sudo touch files.txt # preferred method

# please avoid doing the below:
$ sudo su -
# <- the prompt changes to # for the root user
# touch files.txt
# ^ the above is an unnecessary use of the root account.
# To add your user to the proper group become root and run the following:
# For Redhat based distros
usermod -aG wheel $USER

# For Debian based distros
usermod -aG sudo $USER

# where "$USER" is your username. Afterwards type exit to return to $USER

We'll be writing a simple script using vim. Vim has two basic modes: a command mode and an insert mode. Insert mode allows us to write code and make changes to a file, while command mode allows us to change things about our vim session, issue Unix and Linux commands, and save and/or quit the vim session.

There's also a visual mode but we won't be using that here.

Now that we're in our file we need to enter insert mode. This is simple within vim but not intuitive: simply press i. This will change us from command mode to insert mode. We'll type two things: our shebang line to inform our OS how to execute our script, #!/bin/bash, and our code underneath it, echo "Hello World"

So now that we've written our first script, how do we save it? You'll first need to enter command mode; to do this press the [escape] key on your keyboard. If you do not have one for some reason, you can hold the [control] + [ keys at the same time.

Once you've entered command mode you won't be able to edit text in the same way. There are several keybindings, so don't start pressing random keys.

Once you've entered command mode you'll need to press the colon key. A : should show up at the bottom of the editor.

[Image: Ensure the colon is present]

If you do not see the colon, continue attempting to enter command mode and type the colon until it appears.

Once the colon is present you can type wq to save your changes and exit vim.

vim hello.sh

How to move around the filesystem

Basic commands necessary for utilizing a linux terminal

The filesystem is the configuration of your system. In Linux everything is a file; this makes it easier to interact with all resources on your system, since everything will have permissions and can be interacted with in the same basic ways. In order to successfully utilize the terminal you'll need to understand how to traverse the filesystem.

Begin by opening a terminal. The application should already be installed on your machine. The first thing you'll see is the following:

$ # The dollar sign indicates that you are a regular user

Normally you'll be a regular user; this is indicated by the $. Occasionally you'll need to become a super user (called root); you can think of this as the system's admin account. This user has no limits, so you must be careful when you become root.

The # sign in front of text such as this indicates a comment.

Who are you?

How to see where you are:

What is in here?

How to change locations

How to create a file:

How to remove a file:

How to go back a directory:

rm may or may not prompt you for confirmation to ensure you want to permanently delete the file. This is a nice feature but isn't the default everywhere, so ensure you're in the right directory with pwd and use ls to ensure it's the proper file.
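The example blocks for removing a file and going back a directory appear to be missing from this export; here is a minimal reconstruction that follows on from the touch example:

```shell
touch myfile.txt    # (the file created earlier in the touch example)
rm myfile.txt
# rm removes the file; pass -i if you want a prompt before deleting
ls                  # myfile.txt is gone from the listing

cd ..
# `..` refers to the parent directory, so from /home/theo/Desktop
# this takes you back to /home/theo; verify with pwd:
pwd
```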

How to SSH

SSH is secure shell; it is an industry standard for accessing remote systems. SSH works by the server checking that the requestor holds the private key matching a public key the server already knows, and it is preferred over username/password auth.
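You can see both halves of such a key pair by generating one locally; the filename demo_key and the ed25519 key type are just example choices:

```shell
# generate a key pair with no passphrase (-N '') into ./demo_key
ssh-keygen -t ed25519 -f ./demo_key -N '' -C 'demo key'

ls demo_key demo_key.pub   # private key and public key
# the server stores the .pub half; you keep demo_key secret
```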

Using a Terminal

First open a terminal. You'll need to know or possess the credentials for a user on the remote machine you're attempting to reach.

Use the IP address to avoid any potential DNS issues. Replace the 127.0.0.1 with the IP address of the target remote machine.

The default user on various AWS instances are as follows:

Ubuntu Image default user - ubuntu

Amazon-Linux Image default user - ec2-user

Wordpress with Bitnami (Website server) - bitnami

Within AWS you'll need an SSH key. You can create and download this via the AWS GUI when launching an instance. When you download your .pem file it'll show up in your Downloads folder; it is recommended you move this to the .ssh directory and update the permissions. If the permissions aren't updated you won't be able to utilize the private key.

SSH will not use keys that are too permissive. It must be read/write by the owner only at a maximum.

Replace the key.pem file with the name of the .pem file you downloaded.

Replace the 127.0.0.1 with the IP of the target machine.
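Putting the move and the permission change together (the key name key.pem is illustrative; a stand-in file is created here so the snippet is self-contained):

```shell
mkdir -p ~/.ssh
touch ~/.ssh/key.pem       # stand-in for the key you downloaded
chmod 400 ~/.ssh/key.pem   # read-only for the owner; SSH rejects looser permissions
ls -l ~/.ssh/key.pem       # should show -r--------

# then connect (replace 127.0.0.1 and the user as described above):
#   ssh -i ~/.ssh/key.pem ec2-user@127.0.0.1
```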

How to SSH with Putty

You should only need to utilize PuTTy in the absence of a terminal. Terminals are built in to all operating systems except Windows; PuTTy is specifically for SSH from a Windows client. All other OSes (i.e. Linux, BSD, macOS) should follow the instructions above.


You'll also want to grab PuTTYgen. AWS will provide you with a .pem file; in order to utilize PuTTy for SSH you'll need to convert this file to a private key.

Open PuTTYgen, upload your .pem file from AWS, and split the key into a public/private key pair. Store this somewhere easy to remember but secure, as you'll add the file path to PuTTy.

Once you've downloaded and launched PuTTy, go to the sidebar and expand the SSH and Auth options. You'll then click Browse and navigate to the location of your newly created privatekey.ppk file.

Afterwards navigate back to the Session tab. You'll be able to enter the IP address of the server under the "Host Name (or IP address)" section as well as the SSH port under the "Port" section. For us it'll be the default of 22. Ensure your connection type is SSH.

It is recommended that before clicking Open you press Save to save these settings; you'll also have the option to enter a name for these settings inside of the "Saved Sessions" box.

Click Open to connect to the server and enter the username associated with your key.


Buddy Bot

Instructions on using our most important tool

What is Buddy Bot?

Buddy bot is a lightweight tool written in TypeScript that reminds Reclass members to clock in and out with style.

How to use buddy bot

Like all our greatest tools, buddy bot runs in a docker container and can be pulled from our repo:

Running Buddy Bot

First you'll need to mount a config file into buddy bot. You can use our default config by running wget on the following:

From here you can mount the file to the image:

The above runs a new container with docker run

The -d runs the container in the background

The -v flag sets the volume to be mounted. The mount can be a directory or a file; if the host path doesn't exist, Docker creates a directory there.

Afterwards we specify the file location to be mounted, and a place to mount it within the container: $(pwd)/default-config.json:/app/dist/default-config.json

hello.sh
#!/bin/bash
# This is a shebang; it tells our OS how to execute our script
echo "Hello World" # echo outputs data to stdout aka prints to the shell
ssh [username]@127.0.0.1
whoami
# Now that we know we're a regular user we should find out who that is
# In order to understand who you are simply type: `whoami`
# The output should be the user you're logged in as in my case its:
theo
pwd
# this shows you your current location 
# `pwd` stands for "print working directory"
# The output should be your home directory and should look like:
/home/theo/

Then, we add our Slack token so the bot can talk to the specified channels. The -e SLACK_TOKEN flag passes the SLACK_TOKEN environment variable from your shell into the container

Next, we name the container buddy-bot with --name buddy-bot

Finally, this command requires an image as an argument; it can be the image ID or the image name and tag. We use the image name and tag for simplicity: projectreclass/buddy-bot:latest

https://raw.githubusercontent.com/Project-Reclass/buddy-bot-slack/master/default-config.json
ls
# ls stands for list and shows you everything in the current directory
# the output will look something like this:
Desktop Documents Downloads Music Pictures Public Templates Videos
# By default ls sorts things in alphabetical order
cd Desktop/
# Desktop is a directory; in order to move into it we type `cd`
# cd stands for change directory - in this case you are
# literally changing directories from /home/theo/ to 
# /home/theo/Desktop
# there is no output for this command you can verify its success
# by typing `pwd` the output of that should look like: 
/home/theo/Desktop # note if you are not theo, it'll be different
touch myfile.txt
# touch creates empty files; you can utilize this to quickly make files
# linux does not assume file types - if you want an extension like .txt
# you'll need to specify it yourself
# you can verify the file was created by running: 
ls
# the output of ls will print all contents of the directory,
# including our new file; there is nothing else on the Desktop by default
# The output should be:

myfile.txt
# with touch you can create multiple files separating them with a space

touch yourfile.txt theirfile.sh
# again to verify simply list the contents of the directory with: 
ls
# the output of this should now be 
myfile.txt theirfile.sh yourfile.txt
# remember ls sorts by alphabetical order by default
rm myfile.txt
# rm is the base command for removing files; be careful, as there is no
# trash for the rm command. Anything removed with rm is gone forever 
# There is no output for this command 
# In order to verify this worked correctly we run ls
ls
# The ls output should be: 
theirfile.sh yourfile.txt
cd ../
# to go back a directory issue the above command 
# there is no output for this command, you can run `pwd` to verify
# The output should be your home directory and should look like:
/home/theo/
docker pull projectreclass/buddy-bot:latest
docker run -d -v $(pwd)/default-config.json:/app/dist/default-config.json \
-e SLACK_TOKEN --name buddy-bot projectreclass/buddy-bot:latest

Understanding Linux Permissions

A basic description of linux file permissions

The 3 Permission groups

By now you've been running ls -l or other commands, writing scripts, and wondering what exactly Linux permissions are. Everything in Linux is considered a file, which allows everything to be managed by the same permission system. There are 3 basic groups for permissions.

People

Permissions

So there are three main entities: the Owner of the file, typically the original creator; the Group, typically the same primary group as the owner; and Other (all), which is every user on the system that is neither the owner nor in the approved group. Owner and user will be used interchangeably here.

Further, there are three main levels of permission: Read, which only allows a user/group to read a file; Write, which allows a user/group to make changes to a file; and Execute, which is required for users/groups to run scripts and enter directories.

Each of these permission levels also has a corresponding number. A user/group with all three permissions has a value of 7 - the highest. A user/group with only execute has a value of 1 - the lowest. This may be confusing as there are only 3 permission levels, but the breakdown is as follows.

So you see, if a user/group has read, write, and execute permissions for a file, the values add up to 7 (4+2+1). If they only have read and write, the value is 6 (4+2). Despite having a value of 6, a user would still not be able to execute a script or enter a directory without the execute permission.
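The octal arithmetic can be checked directly in a shell - assuming GNU stat (standard on most Linux distributions), which can print the numeric and symbolic permissions side by side:

```shell
touch myfile             # create an empty file
chmod 640 myfile         # owner: 4+2 (rw-), group: 4 (r--), other: 0 (---)
stat -c '%a %A' myfile   # prints: 640 -rw-r-----
```

The leading `-` in the symbolic output indicates a regular file; directories show `d` in that position.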

Changing the permission level: if the numbers confuse you, you can also set file permissions using the corresponding letters.

You can also utilize numbers to change file permissions. The digits correspond to three places: the first is owner (user), then group, then other. So a 100 permission would enable execute permission for the user, but nothing for the group or anyone else. Setting permissions this way overrides whatever was previously configured, so if you forget to give a user/group permission you'll have to change them again. By default, root always has access to everything and supersedes any and all permission settings, as root is the system owner.

Setting permissions with numbers: truthfully, I believe this to be simpler. Please utilize the above table to follow along. Try to predict the output!

To learn more about Linux permissions, ask Red Hat! After all, they taught me. And as always, refer to the man pages - they are your best friend in the terminal. Type man chmod to get the full list of possible uses and configurations.

Getting Started with Vault

How I configured vault, and the things I learned along the way

Attach An Elastic IP

First, you'll want to create an Elastic IP address and associate it with the Vault server. You'll utilize this IP address to interact with the Vault server as well as the GUI.

Create an A record in Route53 connecting the new Elastic IP to your new domain name - in my case it was vault.projectreclass.org

Creating and managing users

A brief description on user creation and management

Regular user creation

There are many types of users on a Linux system: regular, system, and the all-powerful super user. However, this guide is about making regular users and giving them permissions to do things.

In order to create and manage users you'll need to use sudo or be root

To create a user named bob:

mv ~/Downloads/*.pem ~/.ssh/ # mv is move, it moves a file from one 
# location to another. In this case from Downloads to .ssh
cd ~/.ssh
chmod 600 key.pem # change the permissions to user read and write only
ssh -i key.pem [email protected] # execute the command to login

People                                         Permissions
Owner (u, for user) - left-most set of 3       Read
Group (g, for the group) - middle set of 3     Write
Other (o, for everyone else) - right-most 3    Execute

Permission level    Numeric Value
Read (r)            4
Write (w)           2
Execute (x)         1

Install Vault

There will be many conflicting solutions to this, especially since most guides are older and prompt you to install the zip file - this is not necessary. Simply follow the official guide for your OS/distribution.

Vault makes installation easy as long as you have a valid network connection - no need to complicate it further than this.

Configure Vault

A default configuration should be auto-created for you in /etc/vault.d/vault.hcl. I like to start by copying this into a config.hcl

This will maintain the original config which you can utilize as a reference later

Next, you'll update the config.hcl to have the following:

The above is the configuration for the server you'll start. Be sure to make the appropriate edits to the region, KMS KEY, and backend.

The Vault server will need access to both the Key Management Service and the S3 backend we've configured. You'll grant this by attaching an IAM role with the appropriate policy to the server.

This guide may be useful if you've never created a role
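As a sketch, the IAM policy attached to that role might look like the following. The bucket name matches our config; the KMS key ARN and account number are placeholders - scope them to your own resources:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VaultS3Backend",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::projectreclass-vault",
        "arn:aws:s3:::projectreclass-vault/*"
      ]
    },
    {
      "Sid": "VaultKMSUnseal",
      "Effect": "Allow",
      "Action": ["kms:Encrypt", "kms:Decrypt", "kms:DescribeKey"],
      "Resource": "arn:aws:kms:us-east-2:123456789012:key/YOUR_KMS_KEY_ID"
    }
  ]
}
```

Attach this policy to a role, then attach the role to the EC2 instance running Vault.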

Create A Let's Encrypt Certificate

We need a certificate to enable SSL/TLS - all Vault communication should happen over HTTPS, not HTTP, and this is how we accomplish that. To start you'll need the CLI tool certbot. The following is for an Ubuntu image; follow the official guide to install certbot for your distribution.

Next we'll create the actual certificate

sudo certbot certonly --standalone -d vault.example.com

And that's it - the certificate and keys you'll need will be in the following locations

As you can see these are the same locations as configured in the config.hcl file

In order to create valid certs, Let's Encrypt needs to spin up and reach a temporary webserver on the host, so you'll need to enable at least port 80 in the security group for the server. In my case I enabled ports 80, 443, 8200, and 22 for this configuration and removed ports 80 and 443 post-setup.

Make Vault A Service

Before we start and initialize vault, we'll do some future planning by making it a service.

To do so, create a file named vault.service in /etc/systemd/system/. It should have the following configuration

After this file is created you'll enable and start the service

If vault.service isn't running, the status option on systemctl should provide more information. If that doesn't help, run vault server -config=/etc/vault.d/config.hcl to get more info on the error. In addition, you can check the logs set in the vault.service file.

At this point Vault is likely sealed. To unseal it you'll need to enter 3 of the 5 keys it generates; this is the default behavior for Vault. To obtain these keys run:

This will generate all 5 keys as well as a root token. Safeguard these, as Vault never stores the root token, nor does it track the 5 keys you'll need to unseal it. Once generated, the key material used to unseal Vault is protected by the AWS Key Management Service (KMS) we configured.

Keep the root token for initial login!

There are two main ways to enter the keys to unseal Vault: via the GUI, or via vault operator unseal on the CLI. You should be able to access the GUI by visiting the public IP address of the host machine on port 8200 (e.g. https://127.0.0.1:8200 )

Once Vault is unsealed, it should unseal automatically every time you start it with the S3 backend config; this is true even when new Vault servers are created, as long as they refer to the same backend.

You can login with the root token you generated previously. You should now be able to create, manage, and utilize secrets.

chmod +rwx myfile # chmod or change mode is the basic command to 
# edit file permissions "+rwx" adds read, write, and execute
# But let's say we don't need the execute bit how would we change this?
chmod -x myfile # the "-" subtracts permissions from the file. 
# You can also specify the user level 
# Let's remove write permission for the group
chmod g-w myfile # the "g" refers to the group, the "-w" removes write
# permission for the aforementioned group
# by default without arguments chmod makes changes to the owner (user)
chmod 777 myfile # Gives all permissions to all user levels
# However, doing this especially on all files is poor security
# let's make our permissions more restrictive. 
# output is: rwxrwxrwx <- this is the output that `ls -l` displays

chmod 700 myfile # that's better now only the user can do things. 
# But does the user need to execute the file? Is it a script? 
# If not this is unnecessary. Let's follow the rule of least privilege
# output is: rwx------ dashes mean no permission allowed
# remember this is split in threes: user, group, and other
# you can look at these permissions as: rwx,---,---

chmod 600 myfile # Perfect!
# However, what if "myfile" is actually a directory? And at that, a
# directory we want everyone to be able to access
# output rw-------

chmod 711 myfile # this allows everyone to access the directory 
# but with only the execute bit no one can read the contents
# of the directory. Let's change that
# output rwx--x--x

chmod 755 myfile # Now everyone can read and enter the directory 
# But what if we want the user and group to be able to enter, read,
# and change files, but no one else? Let's see:
# output rwxr-xr-x

chmod 770 myfile
# output rwxrwx---
sudo cp /etc/vault.d/vault.hcl /etc/vault.d/config.hcl
config.hcl
# Full configuration options can be found at https://www.vaultproject.io/docs/configuration

ui = true # Enables the web interface

disable_mlock = true

storage "s3" { # Preferred backend is S3
  bucket = "projectreclass-vault" # Bucket must already exist this is the name
  region = "us-east-2" # Preferred region for production
}

#storage "consul" {
#  address = "127.0.0.1:8500"
#  path    = "vault"
#}

# HTTP listener
#listener "tcp" {
#  address = "127.0.0.1:8200"
#  tls_disable = 1
#}

# HTTPS listener
listener "tcp" { # Always utilize https
  address       = "0.0.0.0:8200" # Listens on any IP on default vault port 8200
  tls_cert_file = "/etc/letsencrypt/live/vault.projectreclass.org/fullchain.pem" #We'll get into how to create these certs later
  tls_key_file  = "/etc/letsencrypt/live/vault.projectreclass.org/privkey.pem"
}

# Example AWS KMS auto unseal
seal "awskms" { # unseals vault on start up
  region = "us-east-2" # Our preferred production region
  kms_key_id = "${YOUR_KMS_KEY_ID_GOES_HERE}"
}

# Example HSM auto unseal
#seal "pkcs11" {
#  lib            = "/usr/vault/lib/libCryptoki2_64.so"
#  slot           = "0"
#  pin            = "AAAA-BBBB-CCCC-DDDD"
#  key_label      = "vault-hsm-key"
#  hmac_key_label = "vault-hsm-hmac-key"
#}
sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot
Cert: /etc/letsencrypt/live/vault.example.com/fullchain.pem
PrivKey: /etc/letsencrypt/live/vault.example.com/privkey.pem
[Unit]
# Name of the service
Description=vault service
# Requires a network connection
Requires=network-online.target
After=network-online.target
# Requires the config.hcl file in /etc/vault.d/
ConditionFileNotEmpty=/etc/vault.d/config.hcl

[Service]
EnvironmentFile=-/etc/sysconfig/vault
Environment=GOMAXPROCS=2
Restart=on-failure
# Actual vault command executed (systemd requires an absolute path)
ExecStart=/usr/bin/vault server -config=/etc/vault.d/config.hcl
StandardOutput=append:/var/log/vault-output.log
StandardError=append:/var/log/vault-error.log
LimitMEMLOCK=infinity
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGTERM

[Install]
WantedBy=multi-user.target
sudo systemctl enable vault.service
sudo systemctl start vault.service
# To check if the service is running properly
sudo systemctl status vault.service
vault operator init

It is best practice to utilize adduser; this will create the user, their group, and their home directory. useradd on Debian-based systems is a low-level tool designed for making system-level users. Red Hat-based distros do not have a separate adduser (it is an alias for useradd), which obfuscates the difference, and they make regular users by default. TL;DR: adduser, not useradd.

The user is now alive, but they don't have a password! Therefore, they won't be able to login.

To give the user a password:

passwd is the actual command, not a misspelling of "password" - do not try to correct this. There are many "typos" that are built-in Linux commands.

Once the user is created and a password is set you'll be able to login as bob

User bob has logged in but doesn't show up in the sudoers file. Let's troubleshoot:

Since we are utilizing a Redhat Distribution we need to add bob to the wheel group.

If you are unsure what distribution type you're on, you can run cat /etc/os-release - the ID_LIKE field will tell you the closest relatives of the actual distribution.

While the actual distro below is Amazon Linux 2, we see the ID_LIKE is "centos rhel fedora"

centos is an open source copy of Red Hat, rhel is an abbreviation for Red Hat Enterprise Linux, and Fedora is the development OS from which Red Hat inherits its updates and changes.

Examples of Debian-based distributions would be Ubuntu, Pop!_OS, and Raspbian.

`cat /etc/os-release`
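Rather than reading the whole file, you can pull just the relevant fields. A quick sketch, assuming a Linux system that ships /etc/os-release:

```shell
# Print only the ID and ID_LIKE lines from os-release
grep -E '^ID(_LIKE)?=' /etc/os-release
# e.g. on Ubuntu this prints:
#   ID=ubuntu
#   ID_LIKE=debian
```

Some distributions (Debian itself, for instance) set only ID and omit ID_LIKE entirely.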

Enter `usermod`

usermod is a command that allows you to change the attributes of a user. Generally this is utilized to change the primary group or to add groups to the user to allow them certain permissions. The most popular use is adding people to the pre-verified sudo group.

The syntax for usermod is like any other built-in command: command [options] (arguments). In the case of usermod, the order of arguments is the groups to be added followed by the user.

Now that we know our distribution, sudo group, and user to be changed we can give bob the permissions he needs.

Deleting a user

We've given bob a lot of power - the same as us. If bob were to run sudo -i or sudo su - and actually become root, he could do whatever he likes. Even more concerning: bob has quit, and now his user on the system needs to be removed. Leaving around a user like bob who can utilize sudo is a threat to our security posture.

To clean up bob's home directory, mail directory, groups, and permissions:

Please note that not all options mean the same thing in all commands - this is even more true for non-built-in commands. Always refer to the man pages before utilizing any options, and check the official documentation for any command line tools you've installed.

Super User

Project-Reclass/toynet-react

How to get started with toynet-react

Check out your new networking buddy https://www.toynet.projectreclass.org

Getting Started

  • Running in Development

Because toynet uses multiple services, docker-compose was introduced to help start each service and connect them on local machines. Docker Compose port-maps each running service (e.g. frontend and backend). The frontend application, when used in a Docker container, normally runs on port 80; docker-compose maps port 3000 of the local machine to port 80 on the container. The backend normally exposes port 8000 in the container; because port 8000 is not a system port, docker-compose just maps port 8000 to port 8000 on the local machine.

This port mapping is represented in the docker-compose as

backend -> 8000 frontend -> 3000

Dependencies

The below software is required in order to get set up to run toynet-react.

  • Git

  • Docker

  • Docker-Compose

  • Node.js and NPM

Running in Development

Because there are two parts to toynet, there is a docker-compose file that is useful to get started developing as fast as possible.

To start

The docker-compose file can then be run in the background using

Before starting the frontend of toynet you will need to install all the dependencies. This can be done using

After the docker-compose starts up you can start toynet-react for development using

and navigate to http://localhost:3000.

Testing Pull Requests with Docker Compose

Testing pull requests can be done without cloning or checking out the pull request on the local machine. Because toynet-react uses docker-compose, pull requests can be previewed using just the docker-compose file.

For Linux run

On Windows (Powershell) run

Edit the docker-compose.yml file to include the GitHub PR id.

And then run

The PR app can then be previewed at http://localhost:3000

Testing the Master Branch

The master or default branch can be tested in much the same way that PR previews can be tested.

Edit the docker-compose.yml file use master or the default branch instead of a PR id.

And then run

The app can then be accessed at http://localhost:3000.

IDE Plugins

Some plugins that you might find helpful are

  • ESLint

  • React

  • VSCode Styled Components

Available Scripts

In the project directory, you can run:

  • npm run start - Runs the app in the development mode. Open http://localhost:3000 to view it in the browser. The page will reload if you make edits. You will also see any lint errors in the console.

  • npm test - Launches the test runner in the interactive watch mode. See the section about running tests for more information.

  • npm run style:fix - Fixes any automatically fixable ESLint errors.

See the section about deployment for more information.

Learn More

You can learn more in the Create React App documentation. To learn React, check out the React documentation.

Contributors

  • Sammy Tran

  • Yi Yang

  • Scott Richardson

sudo adduser bob
sudo passwd bob
# you'll then be prompted to enter a password 
# don't panic when nothing appears when you type 
# this is a security function of linux
sudo id bob
# this will return the following:
# uid=1001(bob) gid=1001(bob) groups=1001(bob)

# We know that we have sudo permissions so let's compare our
# permissions to bob's 

id
# this should return:
# uid=1000(ec2-user) gid=1000(ec2-user) groups=1000(ec2-user),4(adm),10(wheel),190(systemd-journal)

# Which group allows us sudo permissions? And how do we add bob? 
sudo usermod -aG wheel bob # there are two options specified here 
# option `a` stands for append, this will allow us to add a group 
# without changing bob's primary group. This way bob gets to keep 
# being bob. 
# the `G` option actually stands for group this tells `usermod` that
# we want to edit the groups of the user at the end 
# we follow the options with our arguments - first the groups to add
# and lastly the user to add the groups to
# This command has no output and can be verified by the following: 

sudo id bob
# The output should be: 
# uid=1001(bob) gid=1001(bob) groups=1001(bob),10(wheel)
# Now bob should be able to run sudo in the same way as us
sudo userdel -r bob # here we use `-r` as an option; it removes
# bob's home directory and mail spool along with the account itself,
# cleaning up all the resources tagged for bob.
  • npm run style:check - Checks and displays any ESLint errors.

  • npm run check-types - Checks all typescript types to ensure correct typing.

  • npm run build - Builds the app for production to the build folder. It correctly bundles React in production mode and optimizes the build for the best performance. The build is minified and the filenames include the hashes. Your app is ready to be deployed!

    services:
      backend:
        ...
        ports:
          - "8000:8000"
    # or
      frontend:
        ...
        ports:
          - "3000:80"
    $ git clone https://github.com/Project-Reclass/toynet-react.git
    $ cd toynet-react
    $ docker-compose -f docker-compose.dev.yml up -d --build
    $ wget https://raw.githubusercontent.com/Project-Reclass/toynet-react/master/docker-compose.yml
    $ wget https://raw.githubusercontent.com/Project-Reclass/toynet-react/master/docker-compose.yml -OutFile docker-compose.yml
    # or
    $ Invoke-WebRequest https://raw.githubusercontent.com/Project-Reclass/toynet-react/master/docker-compose.yml -OutFile docker-compose.yml
    services:
      ...
      frontend:
        build: https://github.com/Project-Reclass/toynet-react.git#pull/{pull-request-number}/head
        # e.g. https://github.com/Project-Reclass/toynet-react.git#pull/14/head
    $ docker-compose up --build
    $ wget https://raw.githubusercontent.com/Project-Reclass/toynet-react/master/docker-compose.yml
    services:
      ...
      frontend:
        build: https://github.com/Project-Reclass/toynet-react.git#master
        # e.g. instead of https://github.com/Project-Reclass/toynet-react.git#pull/14/head
    $ docker-compose up --build