Welcome to the Project Reclass infrastructure documentation page! This will be the go-to resource for all things regarding Reclass infrastructure.
SSL can be configured with Bitnami. This is necessary to have secured traffic on the main website, projectreclass.org.
Whether you've created a new WordPress site from scratch via the guide from our friend Hiro (who runs awsnewbies), or you've just made a new AMI from our existing projectreclass.org, SSL is a must. It is necessary to securely send data over HTTPS rather than HTTP; not only is it best practice, but it improves our visibility and our reputation.
If you are migrating or standing up a new image and are utilizing an Elastic IP, the certificate will have already been created and you won't need to recreate it.
Bitnami allows us to configure SSL utilizing Let's Encrypt with their built-in tool, bncert-tool. Configuring SSL is as easy as running this tool.
Step 1: Open your terminal and SSH into your server
Ensure you're in the same directory as your "my-key.pem". Replace "my.server's.ip.address" with the server's public IP address.
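A sketch of that connection, assuming the Bitnami default user (the IP placeholder is illustrative):

```bash
ssh -i my-key.pem bitnami@<server-public-ip>
```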
Step 2: Run the bncert-tool
Access the tool by utilizing the absolute path
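On Bitnami images the tool lives under /opt/bitnami:

```bash
sudo /opt/bitnami/bncert-tool
```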
Step 3: Configure the root domain and the subdomain "www."
When prompted, enter the following:
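bncert prompts for a space-separated list of domains; for our site that's:

```
projectreclass.org www.projectreclass.org
```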
Step 4: HTTP redirection to HTTPS
The next step will ask if you'd like to forward HTTP requests to HTTPS. The answer is yes: type y and press Enter.
Step 5: Domain vs Subdomain redirect
This next portion will ask if you'd like non-www to redirect to www and vice versa. You can only say yes to one of these options; the decision is yours. I like to redirect www requests to the non-www domain, which results in the following behavior: if a user types www.projectreclass.org, the domain will redirect to projectreclass.org. If you'd like, you can enable the opposite behavior during this step.
Step 6: Just say Yes!
Bitnami will then tell you it needs to restart the server for the changes to take effect; it should only bring the site down for a minute or so. Once it's brought back up you'll see the SSL confirmation you're seeking. Bitnami will also ask for an email address to reference; in this case the address is admin@projectreclass.org.
Frequently asked questions:
The infrastructure for Project Reclass is hosted on AWS. This infrastructure hosts the main website, projectreclass.org, as well as the main product at toynet.projectreclass.org.
All of the instances, networking, and services are built out with Terraform; the only resources not built via Terraform are the Route53 DNS entries, the majority of which are and must be managed manually.
While the main website is a simple WordPress site hosted on an EC2 instance and named with an A record on Route53, the toynet infrastructure is a bit more complex; this diagram presents a graphical representation of the toynet infrastructure.
The above is a basic representation of the toynet infrastructure; it does not include smaller details such as subnets, availability zones, how access is managed, etc.
Toynet is built on top of the mininet technology. The toynet application lives in a Docker container and is deployed on an EC2 instance via docker-compose.
In order to create resources on AWS you will need a valid access key on your machine. Terraform will automatically read these keys for you as long as you enter them when you run aws configure.
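A typical session looks something like this (the masked key values shown are placeholders):

```
$ aws configure
AWS Access Key ID [****************MPLE]:
AWS Secret Access Key [****************EKEY]:
Default region name [us-east-2]:
Default output format [json]:
```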
The above illustrates the defaults for aws configure: our default region is Ohio, or us-east-2, and our default output format is json. Additionally, you'll see the last 4 characters of an AWS Access Key ID and an AWS Secret Access Key; yours will be different.
In order to obtain an AWS access key you'll need an AWS account; access keys can be managed in the IAM section of AWS. Ensure you save the file or store your access keys in a safe place, as the secret is only viewable once, at creation time.
SSH credentials are managed via AWS OpsWorks. This allows each user to specify their public key, and utilize their own private key to access each server in a stack!
Further, ensure your user has proper Instance Access
Once the space has been enabled and added to a primary channel (this is for GitBook admin alerts), you can access these docs from any channel in the Slack organization.
To do this simply run /gitbook [search query]; this will send a message to everyone in the channel with the results. You can also do this in private messages, even with yourself.
The Basics of getting started with AWS
There are two types of clouds: those made mostly of water, and those made mostly of Linux servers; we'll be using the latter. In general, "the cloud" is just a physical server hosted, paid for, and managed by another entity. Virtualization is used to split the resources of a physical server for maximum use, and allows multiple entities to build their products on the same physical devices.
Giants such as Google, Microsoft, and Amazon have massive data centers which allow them to promise a higher average uptime than most smaller companies, and even let individuals start utilizing their resources for little to no cost.
The cloud has little to no upfront cost, automates a lot of otherwise difficult issues, and offers an all-in-one approach, making it easy to pivot to new technology or even simply pay for the services or management of something your team may not be prepared to implement themselves.
I firmly believe the cloud is the future and most organizations should move to a hybrid-cloud configuration. The cloud offers more efficiency, reliability, and optimization for a comparable price.
You can create an account and start creating virtual machines by following the official AWS getting-started guide.
The awscli is one of the most powerful tools that AWS has to offer, with many options that may be unavailable or non-existent within the graphical environment. After you've created an account you can create the IAM user you need to perform operations. After which you'll need to create and grab the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. You will add these credentials to your machine by running:
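```bash
aws configure   # enter the access key ID and secret when prompted
```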
You'll also have the option to enter the region and default preferred output.
Ensure the server you're attempting to access is in the stack, your user has the public key uploaded, and that the target operating system is on the supported list.
This is the official documentation page for Project Reclass. These docs can be accessed on the web, or via the GitBook Slack integration. First ensure that the desired space has the integration enabled; this is in the documents section for each space and correlates to the overall space.
This guide will cover the steps I took to migrate from one AWS account to another, specifically for Project Reclass infrastructure.
Beginning April 2021, AWS no longer allows domain transfers between AWS accounts via the GUI. Therefore, the AWS CLI must be utilized. Utilize this guide to install the AWS CLI.
Configure access to AWS. Ensure the permissions allow you to make changes to Route53
Initiate the domain transfer:
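A sketch of that command (both values are placeholders you'll replace, as described below):

```bash
aws route53domains transfer-domain-to-another-aws-account \
    --domain-name ${DOMAIN_NAME} \
    --account-id ${NEW_ACCOUNT_ID}
```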
Replace the above variables with the domain you wish to transfer (it should be owned by your original AWS account) and the account ID of the new account (the account you want to take ownership of the domain).
The above should output something similar to the following:
```json
{
    "OperationId": "string",
    "Password": "string"
}
```
You'll need the above password to accept the domain.
First you'll need to run aws configure again in order to switch to the accepting account.
Next you must accept the transfer
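Again as a sketch, with placeholders:

```bash
aws route53domains accept-domain-transfer-from-another-aws-account \
    --domain-name ${DOMAIN_NAME} \
    --password 'PASSWORD_FROM_TRANSFER_OUTPUT'
```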
Your password may include a quote character; it is recommended to wrap the password string in single quotes to properly pass it to AWS.
After the domain transfer has been accepted, you'll need to create a new hosted zone or import the old hosted zone into the new account.
It is important to note that transferring a domain does not break any existing DNS records; this is due to the nameservers still being owned and operated by the original hosted zone, and by extension Route53. AWS explicitly tells us that the domain and the DNS records do not need to be owned by the same account for routing to occur. While importing the hosted zone is likely the easiest way to migrate the domain, you may want to migrate and create records at your own pace. If this is the case, remember you'll need to create a new hosted zone for the domain, and remember to update the NS records for the domain under Route53 -> Domains -> ${DOMAIN-NAME} in the graphical environment.
In order to migrate the website in its current state, create an AMI of the EC2 instance on which it is hosted, and make this AMI private.
To create the AMI, go to the EC2 list, select the server you'd like to make an image from, and go to Actions -> Images and templates -> Create image.
Fill out the AMI image name and description, enable "no reboot" if uptime is a must, and then click "create image".
You'll be able to find a list of all AMI images on the sidebar under Images -> AMI
Finally, select your AMI, go to the permissions, and allow access to the new AWS account by entering the new account ID. Then you'll be able to access and launch the image. It'll be under the same sidebar, Images -> AMI, under the "Private" filter.
If your image requires a subscription (in our case bitnami/wordpress), that subscription must be accepted before the image can be launched.
Once the domain and image have been properly transferred, add an A record pointing the domain projectreclass.org -> ${ip.ip.ip.ip} to the new IP address of the newly launched server.
It is recommended to first attach an Elastic IP (with the ability to reassociate) to the server and point the domain at that. This will allow you to quickly reassociate the IP without having to change records in the event you want to change the endpoint of projectreclass.org. Refer to this guide to learn how to attach an EIP.
Assuming everything has gone well thus far, migrating toynet should be simple. Toynet is deployed automatically utilizing Terraform code.
Ensure your aws configure is set to the new account.
The Terraform code will also create the DNS records.
It is advised that you check the cost of this infrastructure prior to terraform apply by doing the following: terraform plan -out=plan.tfplan && terraform show -json plan.tfplan > plan.json, and uploading the plan.json to the Terraform cost estimator.
Instructions on using our most important tool
Buddy bot is a lightweight tool written in TypeScript that reminds Reclass members to clock in and out with style.
Like all our greatest tools, buddy bot runs in a docker container and can be pulled from our repo:
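```bash
docker pull projectreclass/buddy-bot:latest
```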
First you'll need to mount a config file to buddy bot. You can use our default config by running wget on the following: https://raw.githubusercontent.com/Project-Reclass/buddy-bot-slack/master/default-config.json
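```bash
wget https://raw.githubusercontent.com/Project-Reclass/buddy-bot-slack/master/default-config.json
```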
From here you can mount the file to the image:
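A sketch of the full command, assembled from the breakdown below (it assumes a SLACK_TOKEN variable is already set in your shell):

```bash
docker run -d \
    -v $(pwd)/default-config.json:/app/dist/default-config.json \
    -e SLACK_TOKEN=$SLACK_TOKEN \
    --name buddy_bot \
    projectreclass/buddy-bot:latest
```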
The above runs a new container with docker run. The -d flag runs the container in the background. The -v flag sets the volume option to be mounted; this can be a directory or a file. Afterwards we specify the file location to be mounted and a place to mount it within the container: $(pwd)/default-config.json:/app/dist/default-config.json. Then we add our Slack token so it can talk to the channels specified; the -e flag creates an environment variable (here SLACK_TOKEN) within the container. Next, we name the container buddy_bot with --name buddy_bot. Finally, this command requires an image as an argument; it can be the image ID or the image and tag. We use the image name and tag for simplicity: projectreclass/buddy-bot:latest
Basic description of command structures
A command is a tool in the terminal that typically accepts options and arguments. The syntax is generally command [options] (arguments). Many commands can be run with or without arguments, and most can be run without options. In this instance we'll utilize ls as our example.
ls lists files and directories within the current directory. It has many options and is a good practice command, as it won't break anything. If you want to list the contents of a directory that you are not in, use an argument that refers to the desired directory; for example, from the /home/theo directory I can list the contents of Desktop by running ls Desktop/
To find out more about this command or any other, utilize man. From your terminal run man ls
How to create your first hello world program in Go!
Go is a powerful language that powers the infrastructure that Project Reclass runs on. Docker, Kubernetes, Terraform, Vault, and even our chatbots are written in Go. Having even a basic understanding of the language can assist in making important infrastructure decisions, and interacting with the tools we utilize. Go also has a very large, friendly and diverse community.
First you'll need to install Golang; you can do so here. Go is also found in most package managers, so you can install it using the default manager for your system.
Follow the below instructions to properly finish setup on your Linux System
Package managers may handle the path setting for you. Check this with echo $PATH
Create a file named main.go and add the following:
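The classic hello world:

```go
package main

import "fmt"

func main() {
	fmt.Println("Hello, World!")
}
```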
Save your code and run go run main.go
There are several solid resources for Go, the docs page being a great go-to. Other tools I like are gobyexample and Learn Go with Tests. Finally, check out the Go Playground and Play with Go.
How to get started with toynet-react
Check out your new networking buddy https://www.toynet.projectreclass.org
Because toynet uses multiple services, docker-compose was introduced to help start each service and connect them on local machines. Docker-compose port maps each running service (e.g. frontend and backend). The frontend application, when used in a docker container, normally runs on port 80; however, docker-compose maps port 3000 of the local machine to port 80 on the container. The backend normally exposes port 8000 in the container; because port 8000 is not a system port, docker-compose just maps port 8000 to port 8000 on the local machine.
This port mapping is represented in the docker-compose as: backend -> 8000, frontend -> 3000
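A sketch of what that mapping looks like in a compose file (the service names here are assumed from the description above):

```yaml
services:
  frontend:
    ports:
      - "3000:80"    # host 3000 -> container 80
  backend:
    ports:
      - "8000:8000"  # host 8000 -> container 8000
```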
The below software is required in order to get set up to run toynet-react.
Git (Install Guide)
Docker (Install Guide)
Docker-Compose (Install Guide)
Node.js and NPM (Install Guide)
Because there are two parts to toynet, there is a docker-compose file that is useful to get started developing as fast as possible.
To start, the docker-compose file can be run in the background using:
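Assuming the compose file is in the current directory:

```bash
docker-compose up -d   # -d runs the services in the background
```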
Before starting the frontend of toynet you will need to install all the dependencies. This can be done using:
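```bash
npm install
```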
After the docker-compose starts up, you can start toynet-react for development using:
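```bash
npm run start
```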
and then navigate to http://localhost:3000.
Testing pull requests can be done without cloning or checking out the pull request on the local machine. Because toynet-react uses docker-compose, pull requests can be previewed by just using the docker-compose file.
For Linux run
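A guess at the fetch command, assuming the compose file lives at the repo root on master:

```bash
wget https://raw.githubusercontent.com/Project-Reclass/toynet-react/master/docker-compose.yml
```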
On Windows (Powershell) run
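The same fetch under the same assumption:

```powershell
Invoke-WebRequest -Uri https://raw.githubusercontent.com/Project-Reclass/toynet-react/master/docker-compose.yml -OutFile docker-compose.yml
```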
Edit the docker-compose.yml file to include the GitHub PR id.
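A hypothetical sketch of that edit (the image/tag convention here is an assumption, not the repo's confirmed scheme):

```yaml
services:
  frontend:
    image: projectreclass/toynet-react:pr-123   # replace 123 with the PR id
```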
And then run
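```bash
docker-compose up -d
```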
The PR app can then be previewed at http://localhost:3000
The master or default branch can be tested in much the same way that PR previews are tested. Edit the docker-compose.yml file to use master or the default branch instead of a PR id.
And then run
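```bash
docker-compose up -d   # same as the PR preview step
```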
The app can then be accessed at http://localhost:3000.
Some plugins that you might find helpful are
ESLint
React
VSCode Styled Components
In the project directory, you can run:
npm run start
- Runs the app in the development mode. Open http://localhost:3000 to view it in the browser. The page will reload if you make edits. You will also see any lint errors in the console.
npm test
- Launches the test runner in the interactive watch mode. See the section about running tests for more information.
npm run style:fix
- Fixes any automatically fixable ESLint errors.
npm run style:check
- Checks and displays any ESLint errors.
npm run check-types
- Checks all typescript types to ensure correct typing.
npm run build
- Builds the app for production to the build
folder. It correctly bundles React in production mode and optimizes the build for the best performance. The build is minified and the filenames include the hashes. Your app is ready to be deployed!
See the section about deployment for more information.
You can learn more in the Create React App documentation. To learn React, check out the React documentation.
Sammy Tran
Yi Yang
Scott Richardson
How I configured vault, and the things I learned along the way
First, you'll want to create and associate an Elastic IP address to the Vault server. You'll utilize this IP address to interact with the vault server as well as the GUI.
Create an A record in Route53 connecting the new Elastic IP to your new domain name; in my case it was vault.projectreclass.org
There will be many conflicting solutions to this, especially since most guides are older and prompt you to install the zip file; this is not necessary. Simply follow the official guide for your OS/distribution.
Vault makes installation easy as long as you have a valid network connection; no need to complicate it further than this.
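At the time of writing, the official Ubuntu flow looks roughly like this (check the HashiCorp docs for the current commands):

```bash
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install vault
```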
A default configuration should be auto-created for you in /etc/vault.d/vault.hcl. I like to start by copying this into a config.hcl; this will maintain the original config, which you can utilize as a reference later.
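```bash
sudo cp /etc/vault.d/vault.hcl /etc/vault.d/config.hcl   # keep vault.hcl as a pristine reference
```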
Next, you'll update the config.hcl to have the following:
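A sketch of the shape of that config, given the S3 backend, KMS seal, and Let's Encrypt certificates discussed below (every value here is a placeholder):

```hcl
ui = true

storage "s3" {
  bucket = "your-vault-backend-bucket"
  region = "us-east-2"
}

seal "awskms" {
  region     = "us-east-2"
  kms_key_id = "your-kms-key-id"
}

listener "tcp" {
  address       = "0.0.0.0:8200"
  tls_cert_file = "/etc/letsencrypt/live/vault.example.com/fullchain.pem"
  tls_key_file  = "/etc/letsencrypt/live/vault.example.com/privkey.pem"
}
```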
The above is the configuration for the server you'll start. Be sure to make the appropriate edits to the region, KMS key, and backend.
In order to access the Key Management Service and the S3 backend we've configured, the Vault server will need access to both; you'll configure this by attaching an IAM role with the appropriate policy to the server.
This guide may be useful if you've never created a role.
We need a certificate to enable SSL/TLS; all vault communication should happen over HTTPS, not HTTP, and this is how we accomplish that. To start you'll need the CLI tool certbot. The following is for an Ubuntu image; follow the official guide to install certbot for your distribution.
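One way to get certbot on Ubuntu (the package manager route; the official guide may recommend snap instead):

```bash
sudo apt update && sudo apt install -y certbot
```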
Next we'll create the actual certificate:

```bash
sudo certbot certonly --standalone -d vault.example.com
```
And that's it; the certificate and keys you'll need will be in the following location:
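Certbot's default output directory (substitute your own domain):

```
/etc/letsencrypt/live/vault.example.com/fullchain.pem
/etc/letsencrypt/live/vault.example.com/privkey.pem
```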
As you can see these are the same locations as configured in the config.hcl file.
In order to create valid certs, Let's Encrypt requires the ability to make and utilize a webserver, so you'll need to enable at least port 80 in the security group for the server. In my case I enabled ports 80, 443, 8200, and 22 for this configuration, and removed ports 80 and 443 post setup.
Before we start and initialize vault, we'll do some future planning by making it a service. To do so, create a file named vault.service in /etc/systemd/system/. It should have the following configuration:
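A minimal sketch of such a unit, assuming the config path used above and a vault user created by the package:

```ini
[Unit]
Description=HashiCorp Vault
Requires=network-online.target
After=network-online.target

[Service]
User=vault
Group=vault
ExecStart=/usr/bin/vault server -config=/etc/vault.d/config.hcl
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure

[Install]
WantedBy=multi-user.target
```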
After this file is created you'll enable and start the service:
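```bash
sudo systemctl enable vault.service
sudo systemctl start vault.service
```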
If vault.service isn't running, the status option on systemctl should provide more information. If that doesn't work, run vault server -config=/etc/vault.d/config.hcl to get more info on the error. In addition, you can check the logs set in the vault.service file.
At this point vault is likely sealed (locked). To unseal it you'll need to enter 3 of the 5 keys it generates; this is the default behavior for vault. To obtain these keys, run:
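```bash
vault operator init
```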
This will generate all 5 keys as well as a root token. Safeguard these, as vault never knows the root key, nor does it track the 5 keys; you'll need them to unseal vault. The keys utilized to unseal vault are protected by the AWS Key Management Service (KMS) seal we configured.
Keep the root token for initial login!
There are two main ways to enter the keys to unseal vault. The first is via the GUI; you should be able to access this by visiting the public IP address of the host machine on port 8200 (e.g. https://127.0.0.1:8200).
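The second, presumably, is the CLI:

```bash
vault operator unseal   # run three times, entering one key each time
```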
Once vault is unsealed it should remain unsealed every time you start with this config and the S3 backend; this is true even when new vault servers are created, as long as they refer to the same backend.
You can login with the root token you generated previously. You should now be able to create, manage, and utilize secrets.
How to configure AWS Ops Works
Ensure any Instance you create is on the approved list for AWS Ops Works
Ensure your IAM user has the proper policy configuration to register an instance. This can be a preexisting instance. I performed the registration via the command line, so I needed the permission: AWSOpsWorksRegisterCLI_EC2
Next, I ran the registration command:
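A sketch of that command with placeholder values (the console generates the exact version for you, as noted below):

```bash
aws opsworks register \
    --region us-east-2 \
    --infrastructure-class ec2 \
    --stack-id ${STACK_ID} \
    --ssh-username ${SSH_USER} \
    ${INSTANCE_ID}
```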
When you register an instance through the AWS OpsWorks console, a command will be created for you that will require some editing. I actually removed the use-instance-profile option to allow AWS OpsWorks to create a new user for registration.
Once your instance is registered ensure your IAM user has been uploaded to the stack in the user section:
From this page you can also edit your user, change the permissions to allow SSH access, as well as sudo permissions if necessary. Finally, you can upload your own public key to this user!
You can change your public key as you wish, this makes it easy to quickly gain access and create new keys if necessary.
Administrators should ensure proper offboarding to include deleting users not currently requiring access to AWS Ops Works. Access keys should also have lifetimes, and offboarded users should have their access keys and accounts deleted
If you want to register your instance with the use-instance-profile argument in the command above, you must create an instance profile as an IAM role and assign it to both the instance you are registering and the stack. (You can add it to a stack that already exists by editing its settings, or include this information in your new stack under "Default IAM Instance Profile".)
Ensure that the IAM role has permission to register an instance in Ops Works, and is the same for the instance and the stack. AWS Ops Works will use this profile for registration instead of creating a new one.
What is docker? For that matter what is a container? Find out here.
Docker is a container runtime, ultimately built on containerd; Docker packages it into a more user-friendly product with a graphical interface as well as a command line interface suite. Although it has many challengers and potential usurpers, Docker remains the go-to tool for containers.
An oversimplification is that a container can be thought of as a much smaller VM. In general, VMs require virtualization at the hardware level and host an entire OS complete with its own kernel. A container, on the other hand, can share resources with its host, leverages user-space a bit better, and can often run only what is necessary for the application. Ultimately, a container is more portable and consistent than a VM and helps to eliminate the "it worked on my machine" issue.
First you'll need to install docker. To ensure docker has installed properly, you can run your first container with:
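The official getting-started tutorial image fits the description below:

```bash
docker run -d -p 80:80 docker/getting-started
```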
This will take you through the official guide for docker, and truthfully it's a great tool! It should be accessible via localhost in your browser.
Vim is an in-terminal text editor, the improved version of vi. One of these variants is often preinstalled on every linux host.
Vim allows you to edit files; it'll also create any files that don't exist and open them for editing. Typing vim [filename] is the same as typing touch [filename] followed by vim [filename]: either way you open an empty file for editing. Let's start by creating and editing a file.
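```bash
vim hello.sh   # hello.sh is an example name; vim creates it if it doesn't exist
```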
We'll be writing a simple script using vim. Vim has two basic modes: a command mode and an insert mode. Insert mode allows us to write code and make changes to a file, while command mode allows us to change things about our vim session, issue Unix and Linux commands, and save and/or quit the vim session.
There's also a visual mode, but we won't be using that here.
Now that we're in our file we need to enter insert mode. This is simple within vim but not intuitive: simply press i. This will change us from command mode to insert mode. We'll type two things: our shebang line, #!/bin/bash, to inform our OS how to execute our script, and our code underneath it, echo "Hello World".
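The file contents, as described above:

```bash
#!/bin/bash
echo "Hello World"
```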
So now that we've written our first script, how do we save it? You'll first need to enter command mode; to do this press the [escape] key on your keyboard. If you do not have one for some reason, you can hold the [control] + [ keys at the same time.
Once you've entered command mode you won't be able to edit text in the same way. There are several keybindings so don't start pressing random keys.
Once you've entered command mode you'll need to type the colon key : which should show up at the bottom of the editor.
If you do not see the colon, continue attempting to enter command mode and type the colon until it appears.
Once the colon is present you can type wq to save your changes and exit vim.
A basic description of linux file permissions
By now you've been running ls -l or other commands and writing scripts, and are wondering what exactly Linux permissions are. Everything in Linux is considered a file; this allows everything to be managed by the same file permissions. There are 3 basic groups for permissions.
So there are three main entities: the Owner of the file, typically the original creator; the Group, typically the same primary group as the owner; and Other (or all), which is everyone and everything on the system that is not either the owner or in the approved group. Owner and user will be used interchangeably here.
Further, there are three main levels of permissions: Read, which only allows a user/group to read a file; Write, which allows a user/group to make changes to a file; and Execute, which is required for users/groups to run scripts and enter directories.
Each of these permission levels also has a corresponding number. A user/group with all three permissions would have a value of 7, the highest; a user/group with only execute would have a value of 1, the lowest. This may be confusing as there are only 3 permission levels, but the breakdown is as follows.
So you see, if a user/group has read, write, and execute permissions for a file, they have a value of 7. If they only have read and write for a file, they would have a value of 6. Despite having a value of 6, a user would still not be able to execute a script or enter a directory without the execute permission level.
Changing the permission level: if the numbers confuse you, you can also set a file's permissions using the corresponding letters shown in the tables below.
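A sketch of the symbolic form (the filename is an example):

```bash
chmod u+x hello.sh    # give the owner execute permission
chmod g+rw hello.sh   # give the group read and write
chmod o-r hello.sh    # remove read from everyone else
```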
You can also utilize numbers to change file permissions. They correspond to three places: the first is owner (user), then group, then other. So a 100 permission would enable execute permission for the user, but not for the group or anyone else. Setting permissions this way will override whatever setting was previously configured, so if you forget to give a user/group permission you'll have to change them again. By default root always has access to everything and supersedes any and all permission settings, as root is the system owner.
Setting permissions with numbers: truthfully, I believe this to be simpler. Please utilize the permission tables to follow along. Try to predict the output!
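A few numeric examples (the filename is again an example; the comments give the resulting permission string):

```bash
chmod 700 hello.sh   # -rwx------  owner can read/write/execute
chmod 644 hello.sh   # -rw-r--r--  owner read/write; group and other read
chmod 755 hello.sh   # -rwxr-xr-x  owner full; group and other read/execute
```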
We'll break down the components of our first script. If you do not have an editor, please refer to the getting started with vim guide below.
First open your favorite text editor and create a file named hello.sh:
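Presumably the same two-line hello world from the vim guide:

```bash
#!/bin/bash
echo "Hello World"
```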
Save the file and exit your text editor.
In order to run scripts they need to be executable; this means the permissions of the file need to be changed. Every file can have read, write, and execute permissions. By default, files are not executable on creation.
The following commands can be copied and pasted, but will only work if you are in the directory in which the hello.sh script exists. Utilize the ls command to check if the script is in your current directory and the cd command to change to a new one.
We'll update our file permission to make the file executable:
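```bash
chmod +x hello.sh
```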
Once the script has been made executable it can be run:
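```bash
./hello.sh
# Hello World
```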
If you do not receive the expected output, double check the directory and the permissions of the file; the permissions for the file should be as such:
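Illustrative ls -l output (owner, size, and date will differ on your machine):

```
$ ls -l hello.sh
-rwxrwxr-x 1 user user 31 Jan  1 12:00 hello.sh
```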
What we're looking for in this case is the x on the left side of the permissions. For more info on permissions, refer to the permissions page below.
A brief description on user creation and management
There are many types of users on a Linux system: regular, system, and the all-powerful super user. However, this guide is about making regular users, and giving them permissions to do things.
In order to create and manage users you'll need to use sudo or be root.
To create a user named bob:
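```bash
sudo adduser bob
```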
It is best practice to utilize adduser; this will create the user, their group, and their home directory. useradd on Debian-based systems is a low-level tool designed for making system-level users. Redhat-based distros do not have this distinction and will obfuscate the difference, making regular users by default. TL;DR: adduser, not useradd.
The user is now alive, but they don't have a password! Therefore, they won't be able to login.
To give the user a password:
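```bash
sudo passwd bob
```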
passwd is the actual command, not a misspelling of "password"; do not try to correct this. There are many "typos" that are built-in linux commands.
Once the user is created and a password is set, you'll be able to login as bob.
User bob has logged in but doesn't show up in the sudoers file; let's troubleshoot:
Since we are utilizing a Redhat distribution, we need to add bob to the wheel group.
If you are unsure what distribution type you're on, you can run cat /etc/os-release; the ID_LIKE section will tell you the closest relative of the actual distribution.
While the actual distro is Amazon Linux 2, we see the ID_LIKE is "centos rhel fedora": centos is an open source copy of Red Hat, rhel is an abbreviation for Red Hat Enterprise Linux, and Fedora is the development OS from which Red Hat inherits its updates and changes.
Examples of Debian-based distributions would be Ubuntu, PopOS, and Raspbian.
Usermod is a command that allows you to change the attributes of a user; generally this is utilized to change the primary group or to add groups to the user to allow them certain permissions. The most popular use is adding people to the pre-verified sudo group.
The syntax for usermod is like any other built-in command, command [options] (arguments); in the case of usermod the order of arguments is the group(s) to be added followed by the user.
Now that we know our distribution, sudo group, and user to be changed, we can give bob the permissions he needs:
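```bash
sudo usermod -aG wheel bob   # -aG appends bob to the wheel group
```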
We've given bob a lot of power, the same as us. If bob were to run sudo -i or sudo su - and actually become root, he could do whatever he likes. Even more concerning: bob has quit, and now his user on the system needs to be removed. Leaving a user like bob who can utilize sudo is a threat to our security posture.
To clean up bob's home directory, mail directory, groups, and permissions:
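On a Redhat-based system, one way to do all of that (assuming bob's mail spool is in the default location):

```bash
sudo userdel -r bob   # -r removes the home directory and mail spool
```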
Please note that not all options mean the same thing in all commands; this is even more true for non-built-in commands. Always refer to the man pages before utilizing any options, and check the official documentation for any command line tools you've installed.
Basic commands necessary for utilizing a linux terminal
The filesystem is the configuration of your system. In linux everything is a file; this makes it easier to interact with all resources on your system, since everything will have permissions and can be interacted with in the same basic ways. In order to successfully utilize the terminal you'll need to understand how to traverse the filesystem.
Begin by opening a terminal. The application should already be installed on your machine. The first thing you'll see is the following:
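An illustrative prompt (the username and hostname will differ on your machine):

```
user@hostname:~$
```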
Normally you'll be a regular user; this is indicated by the $. Occasionally you'll need to become a super user (called root); you can think of this as the system's admin account. This user has no limits, so you must be careful when you become root.
The # sign in front of text such as this indicates a comment.
Who are you?
How to see where you are:
What is in here?
How to change locations:
How to create a file:
How to remove a file:
How to go back a directory:
Each of these is answered, in order, in the sketch below.
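The filenames and directories here are examples:

```bash
whoami              # who are you?
pwd                 # print working directory (where you are)
ls                  # list what is in here
cd Desktop/         # change location into Desktop/
touch newfile.txt   # create an empty file
rm newfile.txt      # remove a file
cd ..               # go back up one directory
```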
rm may or may not prompt you for confirmation to ensure you want to permanently delete the file; this is a nice feature but isn't the default everywhere, so ensure you're in the right directory with pwd and use ls to ensure it's the proper file.
A brief description of the super user known as root
In Linux the super user or administrator account is called root. By default when configuring a system the default user will have the ability to become root.
In the linux terminal the dollar sign $ denotes a regular user, whereas the hash # denotes the root user. The # also denotes a comment in shell scripting, so try not to be confused.
There are multiple ways to become or imitate root.
You can become root by running sudo su - or sudo -i. However, because the root account has no limitations, critical issues can occur from simple typos; as such it is preferred to imitate or impersonate root.
The preferred method is to utilize sudo before your commands to run them with root privileges as needed. Avoid becoming root whenever possible; instead do something like:
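```bash
sudo cat /etc/sudoers   # one privileged command; you stay a regular user afterwards
```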
If you're attempting to utilize sudo but are running into an error that says something along the lines of "user is not in the sudoers file. This incident will be reported", this indicates that your user is not in the proper group to have sudo permissions. Please note that you may alter the sudoers file utilizing visudo to add your user, but that is not advised.
Instead you'll need to become root by typing su - and entering root's password (not your user's password). Once you have successfully authenticated and become root, you'll need to add your user to the proper group:
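On Debian-based distros the group is sudo; on Redhat-based distros use wheel (the username here is an example):

```bash
usermod -aG sudo alice
```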
To learn more about linux permissions, see the permissions page. And as always, refer to the man pages; they are your best friend in the terminal (after all, they taught me). Type man chmod to get the full list of possible uses and configuration.
| People | Permissions |
| --- | --- |
| Owner (u, for user) - left-most 3 | Read |
| Group (g, for the group) - middle set of 3 | Write |
| Other (o, for everyone else) - right-most 3 | Execute |
| Permission level | Numeric value |
| --- | --- |
| Read (r) | 4 |
| Write (w) | 2 |
| Execute (x) | 1 |
This will be a brief overview on how to create resources in AWS with Terraform
Configure your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with the following:
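```bash
aws configure   # enter the access key ID, secret key, region, and output format when prompted
```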
Refer to the notes on how to access AWS for getting an Access Key and Secret.
This example terraform code should give you an idea of how to create resources; in this case we'll be making an EC2 instance.
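A minimal sketch (the AMI id is a placeholder; look up a current one for your region):

```hcl
provider "aws" {
  region = "us-east-2"
}

resource "aws_instance" "example" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t2.micro"

  tags = {
    Name = "example-instance"
  }
}
```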
main.tf is the main file terraform runs by default. Create the main.tf file and fill it with your Terraform code. Afterwards run terraform init followed by terraform apply to create your resources.
SSH is Secure Shell; it is an industry standard for accessing remote systems. SSH works by checking the requestor's private key against the server's known public key, and is preferred over user/pass auth.
First open a terminal. You'll need to know or possess the credentials for a user on the remote machine you're attempting to reach.
Use the IP address to avoid any potential DNS issues. Replace the 127.0.0.1 with the IP address of the target remote machine.
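A sketch of the basic form (the user here is an example; see the defaults below):

```bash
ssh ec2-user@127.0.0.1
```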
The default user on various AWS instances are as follows:
Ubuntu Image default user - ubuntu
Amazon-Linux Image default user - ec2-user
Wordpress with Bitnami (Website server) - bitnami
Within AWS you'll need an ssh key. You can create and download this via the AWS GUI when launching an instance. When you download your .pem file it'll show up in your downloads folder; it is recommended you move it to the .ssh directory and update the permissions. If the permissions aren't updated you won't be able to utilize the private key.
SSH will not use keys that are too permissive; at most, the key must be readable and writable by the owner only.
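One way to do that, then connect (paths and user are examples):

```bash
mv ~/Downloads/key.pem ~/.ssh/
chmod 600 ~/.ssh/key.pem   # owner read/write only
ssh -i ~/.ssh/key.pem bitnami@127.0.0.1
```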
Replace key.pem with the name of the .pem file you downloaded, and replace the 127.0.0.1 with the IP of the target machine.
You should only need to utilize PuTTY in the absence of a terminal. Terminals are built in to all operating systems except Windows; PuTTY is specifically for SSH from a Windows client. All other OSes (i.e. linux, bsd, macOS) should follow the instructions above.
You'll also want to grab PuTTYgen as well; AWS will provide you with a .pem file, and in order to utilize PuTTY for SSH you'll need to convert this file to a private key.
Open PuTTYgen, upload your .pem file from AWS, and split the key into a public/private key pair. Store this somewhere easy to remember, but secure, as you'll add the file path to PuTTY.
Once you've downloaded and launched PuTTY, go to the sidebar and expand the SSH and Auth options. You'll then click browse and navigate to the location of your newly created private key .ppk file.
Afterwards navigate back to the Session tab. You'll be able to enter the IP address of the server under the "Host Name (or IP address)" section, as well as the SSH port under the "Port" section; for us it'll be the default of 22. Ensure your connection type is SSH.
It is recommended that before clicking open you press save to store these settings; you'll also have the option to enter a name for them inside of the "Saved Sessions" box.
Click open to launch the session and enter the username associated with your key.
The default user on various AWS instances are as follows:
Ubuntu Image default user - ubuntu
Amazon-Linux Image default user - ec2-user
Wordpress with Bitnami (Website server) - bitnami