Welcome to the Project Reclass infrastructure documentation page! This will be the go-to resource for all things regarding Reclass infrastructure.
The Basics of getting started with AWS
There are two types of clouds: those made mostly of water, and those made mostly of Linux servers; we'll be using the latter. In general, "the cloud" is just a physical server hosted, paid for, and managed by another entity. Virtualization is used to split the resources of a physical server for maximum use, and allows multiple entities to build their products on the same physical devices.
Giants such as Google, Microsoft, and Amazon have massive data centers that allow them to promise a higher average uptime than most smaller companies, and they let even individuals start utilizing their resources for little to no cost.
The cloud has little to no upfront cost, automates a lot of otherwise difficult work, and offers an all-in-one approach, making it easy to pivot to new technology or simply pay for the services or management of something your team may not be prepared to implement themselves.
I firmly believe the cloud is the future and most organizations should move to a hybrid-cloud configuration. The cloud offers more efficiency, reliability, and optimization at a comparable price.
You can create an account and start creating virtual machines by following the official AWS getting-started guide.
The awscli is one of the most powerful tools that AWS has to offer, with many options that may be unavailable or non-existent within the graphical environment. After you've created an account you can create the credentials you need to perform operations.
After which you'll need to create and grab the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. You will add these credentials to your machine by running:
aws configure
You'll also have the option to enter the region and default preferred output.
Frequently asked questions:
The infrastructure for Project Reclass is hosted on AWS. This infrastructure hosts the main website, projectreclass.org, as well as the main product at toynet.projectreclass.org.
All of the instances, networking, and services are built out with Terraform. The only resources not built via Terraform are the Route53 DNS entries, the majority of which are (and must be) managed manually.
While the main website is a simple WordPress site hosted on an EC2 instance and named with an A record on Route53, the toynet infrastructure is a bit more complex; this diagram presents a graphical representation of the toynet infrastructure.
The above is a basic representation of the toynet infrastructure; it does not include smaller details such as subnets, availability zones, how access is managed, etc.
Toynet is built on top of the Mininet technology. The toynet application lives in a Docker container and is deployed on an EC2 instance via docker-compose.
In order to create resources on AWS you will need a valid access key on your machine. Terraform will automatically read these keys for you as long as you enter them when you run aws configure.
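Below is an illustrative aws configure session; the masked key endings, region, and output values are placeholders and yours will differ:
$ aws configure
AWS Access Key ID [****************MPLE]:
AWS Secret Access Key [****************EKEY]:
Default region name [us-east-2]:
Default output format [json]: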
The above illustrates the defaults for an aws configure run; our default region is Ohio, or us-east-2, and our default output format is json.
Additionally, you'll see the last 4 characters of an AWS Access Key ID and an AWS Secret Access Key; yours will be different.
In order to obtain an AWS access key you'll need an AWS account; access keys can be managed in the IAM section of AWS. Ensure you save the file or store your access keys in a safe place, as they are only viewable once, at creation time.
SSH credentials are managed via AWS OpsWorks. This allows each user to specify their public key, and utilize their own private key to access each server in a stack!
Ensure the server you're attempting to access is in the stack, your user has the public key uploaded, and that the target operating system is a supported one.
Further, ensure your user has proper instance access.
This is the official documentation page for Project Reclass. These docs can be accessed on the web or via the GitBook Slack integration. First ensure that the desired space has the integration enabled; this is in the documents section for each space and correlates to the overall space.
Once the space has been enabled and added to a primary channel - this is for GitBook admin alerts - you can access these docs from any channel in the Slack organization.
To do this simply run /gitbook [search query]. This will send a message to everyone in the channel with the results. You can also do this in private messages, even with yourself.
This guide will cover the steps I took to migrate from one AWS account to another, specifically for Project Reclass infrastructure.
Beginning April 2021, AWS no longer allows domain transfer between AWS accounts via the GUI; therefore, the aws CLI must be utilized. Utilize the official guide to install the AWS CLI.
Configure access to AWS. Ensure the permissions allow you to make changes to Route53.
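If your IAM user lacks those permissions, one option is attaching the AWS managed Route53 policies; a minimal sketch, where migration-admin is a hypothetical user name:
aws iam attach-user-policy --user-name migration-admin --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess
aws iam attach-user-policy --user-name migration-admin --policy-arn arn:aws:iam::aws:policy/AmazonRoute53DomainsFullAccess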
Initiate the domain transfer:
First you'll need to run aws configure again in order to switch to the accepting account.
Next you must accept the transfer:
After the domain transfer has been accepted, you'll need to create a new hosted zone or import the existing one.
It is important to note that transferring a domain does not break any existing DNS records; this is because the nameservers are still owned and operated by the original hosted zone, and by extension Route53. AWS explicitly tells us that the domain and the DNS records do not need to be owned by the same account for routing to occur. While importing the hosted zone is likely the easiest way to migrate the domain, you may want to migrate and create records at your own pace. If this is the case, remember you'll need to create a new hosted zone for the domain, and remember to update the NS for the domain under Route53 -> Domains -> ${DOMAIN-NAME} in the graphical environment.
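Creating the new hosted zone and grabbing its nameservers can also be done from the CLI; a sketch, where the caller reference just needs to be a unique string and ${ZONE_ID} comes from the first command's output:
aws route53 create-hosted-zone --name projectreclass.org --caller-reference "migration-$(date +%s)"
aws route53 get-hosted-zone --id ${ZONE_ID} # lists the zone's NS records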
In order to migrate the website in its current state, create an AMI of the EC2 instance on which it is hosted, and make this AMI private.
To create the AMI go to the EC2 list, select the server you'd like to make an image from, and go to Actions -> Images and templates -> Create image.
Fill out the AMI image name and description, enable "no reboot" if uptime is a must, and then click "create image":
You'll be able to find a list of all AMI images on the sidebar under Images -> AMI.
Finally, select your AMI, go to the permissions, and allow access to the new AWS account by entering the new account ID. Then you'll be able to access and launch the image. It'll be under the same sidebar Images -> AMI and under the "Private" filter.
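The same AMI steps can be sketched from the CLI; the instance ID, AMI ID, and account ID below are placeholders:
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "projectreclass-wordpress" --no-reboot
# share the resulting AMI with the new account:
aws ec2 modify-image-attribute --image-id ami-0123456789abcdef0 --launch-permission "Add=[{UserId=111122223333}]"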
Once the domain and image have been properly transferred, add an A record pointing the domain projectreclass.org to the IP address of the newly launched server.
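If you'd rather script the A record, it can be upserted from the CLI; a sketch using a placeholder zone ID and the documentation IP 203.0.113.10:
aws route53 change-resource-record-sets --hosted-zone-id ${ZONE_ID} \
    --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"projectreclass.org","Type":"A","TTL":300,"ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'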
Assuming everything has gone well thus far, migrating toynet should be simple. Toynet is deployed automatically utilizing Terraform code.
$ aws route53domains transfer-domain-to-another-aws-account \
    --domain-name projectreclass.org \
    --account-id ${YOUR-NEW-ACCOUNT-ID}
$ aws route53domains accept-domain-transfer-from-another-aws-account \
    --domain-name projectreclass.org \
    --password ${password}
git clone https://github.com/Project-Reclass/infrastructure.git
cd ./infrastructure/terraform/toynet/production
terraform init
terraform apply # applies the toynet-deployment-production configuration in this directory
How to configure AWS Ops Works
Ensure your IAM user has the proper policy configuration to register an instance; this can be a preexisting instance. I performed the registration via the command line, so I needed the permission AWSOpsWorksRegisterCLI_EC2.
Next, I ran the registration command:
Once your instance is registered ensure your IAM user has been uploaded to the stack in the user section:
From this page you can also edit your user, change the permissions to allow SSH access, as well as sudo permissions if necessary. Finally, you can upload your own public key to this user!
If you want to register your instance with the use-instance-profile argument in the command above, you must create an instance profile as an IAM role and assign it to both the instance you are registering and the stack. (You can add it to a stack that already exists by editing its settings, or include this information in your new stack under "Default IAM Instance Profile".)
Ensure that the IAM role has permission to register an instance in Ops Works, and is the same for the instance and the stack. AWS Ops Works will use this profile for registration instead of creating a new one.
What is Docker? For that matter, what is a container? Find out here.
Docker is a container runtime, ultimately built on containerd. Docker packages containerd into a more user-friendly product, with a graphical interface as well as a command line interface suite. Although it has many challengers and potential usurpers, Docker remains the go-to tool for containers.
Basic description of command structures
A command is a tool in the terminal that typically accepts options and arguments. The syntax is generally command [options] (arguments). Many commands can be run with or without arguments, and most can be run without options. In this instance we'll utilize ls as our example.
ls
# without any arguments this lists all files in the current directory
# output will look like the following:
directory1 directory2 file1 file2
# ls can be run with options one example is the all option
# all can be run as such
`ls --all` OR `ls -a`
# Options typically have long and short forms. The short form is called
# with a single `-` while the long form is called with a double `--`
# without an argument this will still list items in the current directory
# however this option will also show hidden files
# the output will be similar to the following:
.bashrc .hiddendirectory .hiddenfile .ssh directory1 directory2 file1 file2
# you can also call ls with both options and an argument let's look into
# our directory1 directory
ls -a directory1
# this is an empty directory so the only results will be the directory
# itself `.` and the reference to the directory before it `..`
./ ../
This example Terraform code should give you an idea of how to create resources; in this case we'll be making an EC2 instance.
# Set your provider and region, in this case AWS and Ohio
provider "aws" {
region = "us-east-2"
}
resource "aws_instance" "my-first-instance" { # resource first then custom name
ami = "123gd4df5678a" # image ID is necessary to tell AWS what to build
instance_type = "t2.nano" # this is the size of the VM. This size is free tier
key_name = "My-first-key" # this will tell AWS which key to use, it must already exist
}
# Aligning the equal signs in a resource improves readability
# Misalignment won't break anything; running `terraform fmt` will align it for you
# Save and exit this file and run `terraform init` followed by `terraform apply`
An oversimplification is that a container can be thought of as a much smaller VM. In general, VMs require virtualization at the hardware level and host an entire OS, complete with its own kernel. A container, on the other hand, shares resources with its host, leverages user space a bit better, and can often run only what is necessary for the application. Ultimately, a container is more portable and consistent than a VM and helps to eliminate the "it worked on my machine" issue.
First you'll need to install Docker. To ensure Docker has installed properly, you can run your first container with the command below.
This will take you through the official guide for docker, and truthfully it's a great tool! It should be accessible via localhost in your browser.
docker run -d -p 80:80 docker/getting-started
The following commands can be copied and pasted but will only work if you are in the directory in which the hello.sh script exists. Utilize the ls command to check if the script is in your current directory and the cd command to change to a new one.
We'll update our file permission to make the file executable:
Once the script has been made executable it can be run
If you do not receive the expected output, double check the directory and the permissions of the file; the permissions for the file should look like the listing below.
What we're looking for in this case is the x on the left side of the permissions. For more info on permissions refer to the permissions page below.
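For reference, an illustrative listing; the owner, size, and date shown are placeholders:
ls -l hello.sh
-rwxrwxr-x 1 theo theo 64 Jun 1 12:00 hello.sh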
#!/bin/bash
# Every bash script needs to start with a shebang line
# like the one above; this tells the OS what shell to use
echo "Hello World" # The echo command prints to stdout
# Meaning it'll print whatever follows it to the terminal
How to create your first hello world program in Go!
Go is a powerful language that powers the infrastructure that Project Reclass runs on. Docker, Kubernetes, Terraform, Vault, and even our chatbots are written in Go. Having even a basic understanding of the language can assist in making important infrastructure decisions, and interacting with the tools we utilize. Go also has a very large, friendly and diverse community.
chmod +x hello.sh
# chmod stands for change mode the +x option adds execute permission
# and our argument is the script we created, "hello.sh"
./hello.sh
# scripts are run by referring to the file
# the `./` means from the current directory that is the starting point
# if the hello.sh script is in the current directory it will run
# the output of the script should be "Hello World"
$ aws opsworks register --use-instance-profile \
    --infrastructure-class ec2 \
    --region us-east-2 \
    --stack-id a08f26f4-4362-4f34-9d57-71492e210e43 \
    --ssh-username [username] \
    --ssh-private-key [private-key-path] \
    i-0aa4b421d6fe86cb8
First you'll need to install Golang; you can do so here. Go is also often found in most package managers, so you can install it using the default manager for your system.
Follow the below instructions to properly finish setup on your Linux System
Create a file named main.go and add the following:
Save your code and run go run main.go
There are several solid resources for Go, the docs page being a great go-to.
Other tools I like are Go by Example and Learn Go with Tests.
Finally, check out the Go Playground and Play with Go
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.16.5.linux-amd64.tar.gz
# Remove any previous versions of go and unzip the new version
export PATH=$PATH:/usr/local/go/bin
# Add Go to your $PATH
go version
# You should be able to run this from anywhere and get the most recent version
// this is a comment
package main // You'll always need a package name; this is usually main
import (
"fmt" // "fmt" is the format package and contains the Println func
)
func main() { // You'll need a main function to run your code
fmt.Println("Hello, Project Reclass") //Println prints to stdout and formats
}


SSL can be configured with Bitnami. This is necessary to have secured traffic on the main website, projectreclass.org.
Whether you've created a new WordPress site from scratch via the guide from our friend Hiro, who runs awsnewbies, or you've just made a new AMI from our existing projectreclass.org, SSL is a must. It is necessary to securely send data over HTTPS rather than HTTP; not only is it best practice, but it improves our visibility and our reputation.
Bitnami allows us to configure SSL utilizing Let's Encrypt with their built-in tool bncert-tool. Configuring SSL is as easy as running this tool.
Step 1: Open your terminal and SSH into your server
Step 2: Run the bncert-tool
Access the tool by utilizing the absolute path
Step 3: Configure the root domain and the subdomain "www."
When prompted enter the following
Step 4: HTTP redirection to HTTPS
The next step will ask if you'd like to forward HTTP requests to HTTPS. The answer is yes: type y and press Enter.
Step 5: Domain vs Subdomain redirect
This next portion will ask if you'd like non-www to redirect to www and vice versa. You can only say yes to one of these options; the decision is yours. I like to redirect www requests to the non-www domain, which results in the following behavior:
If a user types www.projectreclass.org the domain will redirect to projectreclass.org. If you'd like, you can enable the opposite behavior during this step.
Step 6: Just say Yes!
Bitnami will then tell you it needs to restart the server for the changes to take effect; it should only bring the site down for a minute or so. Once it's brought back up you'll see the SSL confirmation you're seeking. Bitnami will also ask for an email address to reference; in this case the address is: [email protected]
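Once the site is back up, you can sanity-check the redirects from any terminal with curl; the exact status codes your server returns may vary:
curl -I http://www.projectreclass.org
# expect a 301 redirect whose Location header points at https://projectreclass.org/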
Vim is an in-terminal text editor, the improved version of vi. One of these variants is often preinstalled on every Linux host.
sudo ssh -i my-key.pem user@the.server's.ip.address
Whereas the hash (#) indicates the root user. The # also denotes a comment in shell scripting, so try not to be confused.
There are multiple ways to become or imitate root.
You can become root by running sudo su - or sudo -i
However, because the root account has no limitations, critical issues can occur from simple typos; as such it is preferred to imitate or impersonate root.
The preferred method is to utilize sudo before your commands to run them with root privileges as needed. Avoid becoming root whenever possible; instead do something like:
If you're attempting to utilize sudo but are running into an error that says something along the lines of "user is not in the sudoers file. This incident will be reported", this indicates that your user is not in the proper group to have sudo permissions. Please note that you may alter the sudoers file utilizing visudo to add your user, but that is not advised.
Instead you'll need to become root by typing su - and entering root's password - not your user's password.
Once you have successfully authenticated and become root you'll need to add your user to the proper group:
sudo /opt/bitnami/bncert-tool
projectreclass.org www.projectreclass.org
$ sudo touch files.txt # preferred method
# please avoid doing the below:
$ sudo su -
# <- root user icon
# touch files.txt
^ the above is an unnecessary use of the root account.
# To add your user to the proper group become root and run the following:
# For Redhat based distros
usermod -aG wheel $USER
# For Debian based distros
usermod -aG sudo $USER
# where "$USER" is your username. Afterwards type exit to return to $USERWe'll be writing a simple script using vim. Vim has two basic modes - a command mode and an Insert mode. Insert mode allows us to write code and make changes to a file. While command mode allows us to change things about our vim session, issue Unix and Linux commands as well as save and or quit the vim session.
There's also a visual mode but we won't be using that here.
Now that we're in our file we need to enter insert mode. This is simple within vim but not intuitive: simply press i. This will change us from command mode to insert mode, and we'll type two things: our shebang line to inform our OS how to execute our script, #!/bin/bash, and our code underneath it, echo "Hello World".
So now that we've written our first script, how do we save it? You'll first need to enter command mode; to do this press the [escape] key on your keyboard. If you do not have one for some reason, you can hold [control] + [ at the same time.
Once you've entered command mode you won't be able to edit text in the same way. There are several keybindings, so don't start pressing random keys.
Once you've entered command mode you'll need to type the colon key : and it should show up at the bottom of the editor.
Once the colon is present you can type wq to save your changes and exit vim.
vim hello.sh
Basic commands necessary for utilizing a linux terminal
The filesystem is the configuration of your system. In Linux everything is a file; this makes it easier to interact with all resources on your system, since everything has permissions and can be interacted with in the same basic ways. In order to successfully utilize the terminal you'll need to understand how to traverse the filesystem.
Begin by opening a terminal. The application should already be installed on your machine. The first thing you'll see is the following:
$ # The dollar sign indicates that you are a regular user
Who are you?
How to see where you are:
What is in here?
How to change locations
How to create a file:
How to remove a file:
How to go back a directory:
SSH is Secure Shell, an industry standard for accessing remote systems. SSH key authentication works by the client proving it holds the private key that matches a public key the server already knows, and it is preferred over user/password auth.
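As a general illustration of key-based auth on a plain Linux host (AWS instead generates a key pair for you at launch, covered below); the username and hostname here are placeholders:
ssh-keygen -t ed25519          # generate a key pair under ~/.ssh/
ssh-copy-id user@your-server   # install the public key on the server
ssh user@your-server           # later logins authenticate with the key pair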
First open a terminal. You'll need to know or possess the credentials for a user on the remote machine you're attempting to reach.
Within AWS you'll need an SSH key. You can create and download this via the AWS GUI when launching an instance. When you download your .pem file it'll show up in your Downloads folder; it is recommended you move this to the .ssh directory and update the permissions. If the permissions aren't updated you won't be able to utilize the private key.
You should only need to utilize PuTTY in the absence of a terminal. Terminals are built in to all operating systems except Windows; PuTTY is specifically for SSH from a Windows client. All other OSes (i.e. Linux, BSD, macOS) should follow the instructions above.
You'll also want to grab PuTTYgen; AWS will provide you with a .pem file, and in order to utilize PuTTY for SSH you'll need to convert this file to a private key.
Open PuTTYgen, upload your .pem file from AWS, and split the key into a public/private key pair. Store this somewhere easy to remember but secure, as you'll add the file path to PuTTY.
Once you've downloaded and launched PuTTY, go to the sidebar and expand the SSH and Auth options. You'll then click browse and navigate to the location of your newly created privatekey.ppk file.
Afterwards navigate back to the Session tab. You'll be able to enter the IP address of the server under the "Host Name (or IP address)" section as well as the SSH port under the "Port" section. For us it'll be the default of 22. Ensure your connection type is SSH.
Click Open to launch the session and enter the username associated with your key.
Instructions on using our most important tool
Buddy bot is a lightweight tool written in TypeScript that reminds Reclass members to clock in and out with style.
Like all our greatest tools, buddy bot runs in a docker container and can be pulled from our repo:
First you'll need to mount a file to buddy bot. You can use our default config by running wget on the following:
From here you can mount the file to the image:
#!/bin/bash
# This is a shebang; it tells our OS how to execute our script
echo "Hello World" # echo outputs data to stdout aka prints to the shell
# Now that we know we're a regular user we should find out who that is
# In order to understand who you are simply type: `whoami`
# The output should be the user you're logged in as in my case its:
theo
# this shows you your current location
# `pwd` stands for "print working directory"
# The output should be your home directory and should look like:
/home/theo/
Then, we add our Slack token so the bot can talk to the channels specified; the -e $SLACK_TOKEN creates an environment variable within the container.
Next, we name the container buddy-bot with --name buddy-bot
Finally, this command requires an image as an argument; in this case it can be the image ID or the image and tag. We use the image name and tag for simplicity: projectreclass/buddy-bot:latest
# ls stands for list and shows you everything in the current directory
# the output will look something like this:
Desktop Documents Downloads Music Pictures Public Templates Videos
# By default ls sorts things in alphabetical order
cd Desktop
# Desktop is a directory; in order to move into it we type `cd`
# cd stands for change directory in this case you are
# literally changing directories from /home/theo/ to
# /home/theo/Desktop
# there is no output for this command you can verify its success
# by typing `pwd` the output of that should look like:
/home/theo/Desktop # note if you are not theo, it'll be different
touch myfile.txt
# touch creates empty files; you can utilize this to quickly make files
# linux does not assume file types, so specify an extension like .txt yourself
# you can verify the file was created by running:
ls
# the output of ls will print all contents of the directory
# to include our new file with nothing on the Desktop by default
# The output should be:
myfile.txt
# with touch you can create multiple files separating them with a space
touch yourfile.txt theirfile.sh
# again to verify simply list the contents of the directory with:
ls
# the output of this should now be
myfile.txt theirfile.sh yourfile.txt
# remember ls sorts by alphabetical order by default
rm myfile.txt
# rm is the base command for removing files, be careful as there is not
# a trash for the rm command. Anything removed with rm is gone forever
# There is no output for this command
# In order to verify this worked correctly we run ls
ls
# The ls output should be:
theirfile.sh yourfile.txt
cd ..
# to go back a directory issue the above command
# there is no output for this command, you can run `pwd` to verify
# The output should be your home directory and should look like:
/home/theo/
docker pull projectreclass/buddy-bot:latest
docker run -d -v $(pwd)/default-config.json:/app/dist/default-config.json \
-e $SLACK_TOKEN --name buddy-bot projectreclass/buddy-bot:latest


A basic description of linux file permissions
By now you've been running ls -l or other commands and writing scripts, and are wondering what exactly Linux permissions are. Everything in Linux is considered a file; this allows everything to be managed by the same file permissions. There are 3 basic groups for permissions.
People
Permissions
So there are three main entities: the owner of the file, typically the original creator; the group, typically the same primary group as the owner; and other (or all), which is everyone and everything on the system that is not the owner or in the approved group. Owner and user will be used interchangeably here.
Further, there are three main levels of permissions: read, which only allows a user/group to read a file; write, which allows a user/group to make changes to a file; and execute, which is required for users/groups to run scripts and enter directories.
Each of these permission levels also has a corresponding number. A user/group with all three permissions would have a value of 7 - the highest - and a user/group with only the lowest permission would have a value of 1. This may be confusing as there are only 3 permission levels, but the breakdown is as follows.
So you see, if a user/group has read, write, and execute permissions for a file they have a value of 7 (4+2+1). If they only have read and write for a file they would have a value of 6 (4+2). Despite having a value of 6, a user would still not be able to execute a script or enter a directory without the execute permission level.
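As a quick worked example of the arithmetic (myfile is a placeholder name):
# owner: rwx = 4+2+1 = 7
# group: r-x = 4+0+1 = 5
# other: r-- = 4+0+0 = 4
chmod 754 myfile # shows up as rwxr-xr-- in `ls -l`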
Changing the permission level: if the numbers confuse you, you can also set a file's permissions using the corresponding letters.
Setting permissions with numbers: truthfully, I believe this to be simpler. Please utilize the permission value table to follow along, and try to predict the output!
To learn more about Linux permissions, check out the resources that taught me. And as always, refer to the man pages; they are your best friend in the terminal. Type man chmod to get the full list of possible uses and configuration.
How I configured vault, and the things I learned along the way
First, you'll want to create and associate an Elastic IP address to the Vault server. You'll utilize this IP address to interact with the Vault server as well as the GUI.
Create an A record in Route53 connecting the new Elastic IP to your new domain name; in my case it was vault.projectreclass.org.
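A hedged CLI sketch of the Elastic IP step; the instance and allocation IDs below are placeholders:
aws ec2 allocate-address --domain vpc
# note the AllocationId in the output, then attach the address to the Vault server:
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0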
A brief description on user creation and management
There are many types of users on a Linux system: regular, system, and the all-powerful super user. However, this guide is about making regular users and giving them permissions to do things.
In order to create and manage users you'll need to use sudo or be root.
To create a user named bob:
mv ~/Downloads/*.pem ~/.ssh/ # mv is move, it moves a file from one
# location to another. In this case from Downloads to .ssh
cd ~/.ssh
chmod 600 key.pem # change the permissions to user read and write only
ssh -i key.pem user@the.server.ip # execute the command to login


People                                               Permissions
Owner (u, for user) - left-most set of 3             Read
Group (g, for the group) - middle set of 3           Write
Other (o, for everyone else) - right-most set of 3   Execute
Permission level   Numeric Value
Read (r)           4
Write (w)          2
Execute (x)        1
There will be many conflicting solutions to this, especially since most guides are older and prompt you to install the zip file; this is not necessary. Simply follow the official guide for your OS/distribution.
Vault makes installation easy as long as you have a valid network connection; no need to complicate it further than this.
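For reference, the official Ubuntu/Debian steps at the time of writing looked roughly like the following; check HashiCorp's current install guide before copying:
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install vault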
A default configuration should be autocreated for you in /etc/vault.d/vault.hcl. I like to start by copying this into a config.hcl.
Next, you'll update the config.hcl to have the following:
We need a certificate to enable SSL/TLS; all Vault communication should happen over HTTPS, not HTTP, and this is how we accomplish that. To start you'll need the CLI tool certbot. The following is for an Ubuntu image; follow the official guide to install certbot for your distribution.
Next we'll create the actual certificate
sudo certbot certonly --standalone -d vault.example.com
And that's it; the certificate and keys you'll need will be in the following locations:
As you can see these are the same locations as configured in the config.hcl file
Before we start and initialize vault, we'll do some future planning by making it a service.
To do so, create a file named vault.service in /etc/systemd/system/. It should have the following configuration:
After this file is created you'll enable and start the service
At this point Vault is likely sealed. To unseal it you'll need to enter 3 of the 5 keys it generates; this is the default behavior for Vault. To obtain these keys run:
There are two main ways to enter the keys to unseal Vault. The first is via the GUI: you should be able to access this by visiting the public IP address of the host machine on port 8200 (e.g. https://127.0.0.1:8200). The second is via the CLI, sketched below.
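A minimal CLI sketch, assuming the vault.projectreclass.org address from the config above; vault operator unseal is run once per key:
export VAULT_ADDR='https://vault.projectreclass.org:8200'
vault operator unseal # prompts for one unseal key; repeat until 3 of the 5 keys have been entered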
Once Vault has been unsealed, it should come up unsealed every time you start it with this config and the S3 backend - the awskms seal stanza handles auto-unsealing - and this is true even when new Vault servers are created, as long as they refer to the same backend.
You can login with the root token you generated previously. You should now be able to create, manage, and utilize secrets.
chmod +rwx myfile # chmod or change mode is the basic command to
# edit file permissions "+rwx" adds read, write, and execute
# But let's say we don't need the execute bit how would we change this?
chmod -x myfile # the "-" subtracts permissions from the file.
# You can also specify the user level
# Let's remove write permission for the group
chmod g-w myfile # the "g" refers to the group the "-w" removes write
# permission for the aforementioned group
# without a user-level prefix, chmod applies the change to all levels
# (filtered by your umask)
chmod 777 myfile # Gives all permissions to all user levels
# However, doing this especially on all files is poor security
# let's make our permissions more restrictive.
# output is: rwxrwxrwx <- this is the output that `ls -l` displays
chmod 700 myfile # that's better now only the user can do things.
# But does the user need to execute the file? Is it a script?
# If not this is unnecessary. Let's follow the rule of least privilege
# output is: rwx------ dashes mean no permission allowed
# remember this is split in threes, user, group, and other
# you can look at these permissions as: rwx,---,---
chmod 600 myfile # Perfect!
# However, what if "myfile" is actually a directory? And at that, a
# directory we want everyone to be able to access
# output rw-------
chmod 711 myfile # this allows everyone to access the directory
# but with only the executable bit no one can read the contents
# of the directory. Let's change that
# output rwx--x--x
chmod 755 myfile # Now everyone can read and enter the directory
# But what if we want the user and group to be able to enter, read,
# and change files, but no one else? Let's see:
# output rwxr-xr-x
chmod 770 myfile
# output rwxrwx---
sudo cp /etc/vault.d/vault.hcl /etc/vault.d/config.hcl
# Full configuration options can be found at https://www.vaultproject.io/docs/configuration
ui = true # Enables the web interface
disable_mlock = true # disables the mlock syscall; there is no separate "mlock" config key
storage "s3" { # Preferred backend is S3
bucket = "projectreclass-vault" # Bucket must already exist this is the name
region = "us-east-2" # Preferred region for production
}
#storage "consul" {
# address = "127.0.0.1:8500"
# path = "vault"
#}
# HTTP listener
#listener "tcp" {
# address = "127.0.0.1:8200"
# tls_disable = 1
#}
# HTTPS listener
listener "tcp" { # Always utilize https
address = "0.0.0.0:8200" # Listens on any IP on default vault port 8200
tls_cert_file = "/etc/letsencrypt/live/vault.projectreclass.org/fullchain.pem" #We'll get into how to create these certs later
tls_key_file = "/etc/letsencrypt/live/vault.projectreclass.org/privkey.pem"
}
# Example AWS KMS auto unseal
seal "awskms" { # unseals vault on start up
region = "us-east-2" # Our preferred production region
kms_key_id = "${YOUR_KMS_KEY_ID_GOES_HERE}"
}
# Example HSM auto unseal
#seal "pkcs11" {
# lib = "/usr/vault/lib/libCryptoki2_64.so"
# slot = "0"
# pin = "AAAA-BBBB-CCCC-DDDD"
# key_label = "vault-hsm-key"
# hmac_key_label = "vault-hsm-hmac-key"
#}sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot
Cert: /etc/letsencrypt/live/vault.example.com/fullchain.pem
PrivKey: /etc/letsencrypt/live/vault.example.com/privkey.pem
[Unit]
# systemd does not allow inline comments after directives, so they live on their own lines
# Name of the service
Description=vault service
# requires a network connection
Requires=network-online.target
After=network-online.target
# requires config.hcl file in /etc/vault.d/
ConditionFileNotEmpty=/etc/vault.d/config.hcl
[Service]
EnvironmentFile=-/etc/sysconfig/vault
Environment=GOMAXPROCS=2
Restart=on-failure
# Actual vault command executed; use the absolute path to the vault binary (check `which vault`)
ExecStart=/usr/bin/vault server -config=/etc/vault.d/config.hcl
StandardOutput=file:/var/log/vault-output.log
StandardError=file:/var/log/vault-error.log
LimitMEMLOCK=infinity
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGTERM
[Install]
WantedBy=multi-user.target
sudo systemctl enable vault.service
sudo systemctl start vault.service
# To check if the service is running properly
sudo systemctl status vault.service
vault operator init
It is best practice to utilize adduser; this will create the user, their group, and their home directory. The useradd command on Debian-based systems is a low-level tool designed for making system-level users. Redhat-based distros do not have this distinction and will obfuscate the difference, making regular users by default. TLDR; adduser, not useradd.
The user is now alive, but they don't have a password! Therefore, they won't be able to log in.
To give the user a password:
Once the user is created and a password is set you'll be able to login as bob
User bob has logged in but doesn't show up in the sudoers file; let's troubleshoot:
Since we are utilizing a Redhat Distribution we need to add bob to the wheel group.
Usermod is a command that allows you to change the attributes of a user; generally this is utilized to change the primary group or to add groups to the user to allow them certain permissions. The most popular use is adding people to the pre-verified sudo group.
The syntax for usermod is like any other built-in command: command [options] (arguments). In the case of usermod the order of arguments is the groups to be added followed by the user.
Now that we know our distribution, sudo group, and user to be changed we can give bob the permissions he needs.
We've given bob a lot of power, the same as us. If bob were to run sudo -i or sudo su - and actually become root, he could do whatever he likes. Even more concerning: suppose bob has quit and now his user on the system needs to be removed. Leaving a user like bob who can utilize sudo is a threat to our security posture.
To clean up bob's home directory, mail directory, groups, and permissions:
How to get started with toynet-react
Check out your new networking buddy https://www.toynet.projectreclass.org
Because toynet uses multiple services, docker-compose was introduced to help start each service and connect them on local machines. Docker-compose port maps each running service (e.g. frontend and backend). The frontend application, when run in a docker container, normally listens on port 80; however, docker-compose maps port 3000 of the local machine to port 80 on the container. The backend normally exposes port 8000 in the container; because port 8000 is not a system port, docker-compose just maps port 8000 to port 8000 on the local machine.
This port mapping is represented in the docker-compose as:
backend -> 8000
frontend -> 3000
The below software is required in order to get set up to run toynet-react.
Git
Docker
Docker-Compose
Node.js and NPM
Because there are two parts to toynet, there is a docker-compose file that is useful to get started developing as fast as possible.
To start
The docker-compose file can then be run in the background using
Before starting the frontend of toynet you will need to install all the dependencies. This can be done using
After the docker-compose starts up you can start toynet-react for development using
and navigate to http://localhost:3000.
Testing pull requests can be done without cloning or checking out the pull request on the local machine. Because toynet-react uses docker-compose, pull requests can be previewed by just using the docker-compose file.
For Linux run
On Windows (Powershell) run
Edit the docker-compose.yml file to include the GitHub PR id.
And then run
The PR app can then be previewed at http://localhost:3000.
The master or default branch can be tested in much the same way that PR previews can be tested.
Edit the docker-compose.yml file to use master or the default branch instead of a PR id.
And then run
The app can then be accessed at http://localhost:3000.
Some plugins that you might find helpful are
ESLint
React
VSCode Styled Components
In the project directory, you can run:
npm run start - Runs the app in the development mode. Open http://localhost:3000 to view it in the browser. The page will reload if you make edits. You will also see any lint errors in the console.
npm test - Launches the test runner in the interactive watch mode. See the section about running tests for more information.
npm run style:fix - Fixes any automatically fixable ESLint errors.
See the section about linting for more information.
You can learn more in the Create React App documentation. To learn React, check out the React documentation.
Sammy Tran
Yi Yang
Scott Richardson
sudo adduser bob
sudo passwd bob
# you'll then be prompted to enter a password
# don't panic when nothing appears when you type
# this is a security function of linuxsudo id bob
# this will return the following:
# uid=1001(bob) gid=1001(bob) groups=1001(bob)
# We know that we have sudo permissions so let's compare our
# permissions to bob's
id
# this should return:
# uid=1000(ec2-user) gid=1000(ec2-user) groups=1000(ec2-user),4(adm),10(wheel),190(systemd-journal)
# Which group allows us sudo permissions? And how do we add bob?
sudo usermod -aG wheel bob # there are two options specified here
# option `a` stands for append, this will allow us to add a group
# without changing bob's primary group. This way bob gets to keep
# being bob.
# the `G` option actually stands for group this tells `usermod` that
# we want to edit the groups of the user at the end
# we follow the options with our arguments - first the groups to add
# and lastly the user to add the groups to
# This command has no output and can be verified by the following:
sudo id bob
# The output should be:
# uid=1001(bob) gid=1001(bob) groups=1001(bob),10(wheel)
# Now bob should be able to run sudo in the same way as us
sudo userdel -r bob # the `-r` option removes bob's home and mail
# directories along with the user (on Redhat-based distros the
# removal tool is userdel rather than deluser)
npm run style:check - Checks and displays any ESLint errors.
npm run check-types - Checks all typescript types to ensure correct typing.
npm run build - Builds the app for production to the build folder. It correctly bundles React in production mode and optimizes the build for the best performance. The build is minified and the filenames include the hashes. Your app is ready to be deployed!
services:
backend:
...
ports:
- "8000:8000"
# or
frontend:
...
ports:
- "3000:80"$ git clone https://github.com/Project-Reclass/toynet-react.git
$ cd toynet-react
$ docker-compose -f docker-compose.dev.yml up -d --build
$ wget https://raw.githubusercontent.com/Project-Reclass/toynet-react/master/docker-compose.yml
$ wget https://raw.githubusercontent.com/Project-Reclass/toynet-react/master/docker-compose.yml -Outfile docker-compose.yml
# or
$ Invoke-WebRequest https://raw.githubusercontent.com/Project-Reclass/toynet-react/master/docker-compose.yml -Outfile docker-compose.yml
services:
...
frontend:
build: https://github.com/Project-Reclass/toynet-react.git#pull/{pull-request-number}/head
# e.g. https://github.com/Project-Reclass/toynet-react.git#pull/14/head
$ docker-compose up --build
$ wget https://raw.githubusercontent.com/Project-Reclass/toynet-react/master/docker-compose.yml
services:
...
frontend:
build: https://github.com/Project-Reclass/toynet-react.git#master
# e.g. instead of https://github.com/Project-Reclass/toynet-react.git#pull/14/head
$ docker-compose up --build



