EC2 - Summary & Exam Tips


From the A Cloud Guru lecture on Udemy






* Know the differences between the pricing models (***)

- On Demand

- Spot

- Reserved

- Dedicated Hosts


==> Choose the best pricing model for specific requirements


* Remember with spot instances:

- If you terminate the instance, you pay for the hour.

- If AWS terminates the spot instance, you get the hour it was terminated in for free.
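You can compare current spot prices from the CLI before bidding. A minimal sketch, assuming the AWS CLI is configured; the instance type and region are only examples:

# Show recent spot prices for one instance type
aws ec2 describe-spot-price-history \
    --instance-types m4.large \
    --product-descriptions "Linux/UNIX" \
    --max-items 5 \
    --region us-east-1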



* EC2 Instance Types


Making Sense of AWS EC2 Instance Type Pricing: ECU Vs. vCPU





EBS (Elastic Block Store) consists of the following volume types (a CLI sketch for creating them follows below):

- SSD, General Purpose - GP2 (up to 10,000 IOPS)

- SSD, Provisioned IOPS - IO1 (more than 10,000 IOPS)

- HDD, Throughput Optimized - ST1 - frequently accessed workloads

- HDD, Cold - SC1 - less frequently accessed data

- HDD, Magnetic - Standard - cheap, infrequently accessed storage


* You cannot mount one EBS volume to multiple EC2 instances; instead, use EFS.
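The volume type is chosen when the volume is created. A minimal CLI sketch, assuming the AWS CLI is configured; the AZ, sizes, and IOPS values are only examples:

# General Purpose SSD volume
aws ec2 create-volume --volume-type gp2 --size 100 --availability-zone us-east-1a

# Provisioned IOPS SSD volume - io1 lets you set IOPS explicitly
aws ec2 create-volume --volume-type io1 --size 400 --iops 12000 --availability-zone us-east-1a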



EC2 Lab Exam Tips

* Termination Protection is turned off by default, you must turn it on

* On an EBS-backed instance, the default action is for the root EBS volume to be deleted when the instance is terminated

* Root volumes cannot be encrypted by default; you need a third-party tool (such as BitLocker) to encrypt the root volume.

* Additional volumes can be encrypted.


Volumes vs. Snapshots

* Volumes exist on EBS

- Virtual Hard Disk

* Snapshots exist on S3

* You can take a snapshot of a volume; this will store that volume on S3

* Snapshots are point-in-time copies of volumes

* Snapshots are incremental; this means that only the blocks that have changed since your last snapshot are moved to S3

* If this is your first snapshot, it may take some time to create
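Creating a snapshot from the CLI is a one-liner. A sketch; the volume ID and description are placeholders:

aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pre-change backup"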


Volumes vs. Snapshots - Security

* Snapshots of encrypted volumes are encrypted automatically

* Volumes restored from encrypted snapshots are encrypted automatically

* You can share snapshots, but only if they are unencrypted.

  - These snapshots can be shared with other AWS accounts or made public


Snapshots of Root Device Volume

* To create a snapshot of an Amazon EBS volume that serves as a root device, you should stop the instance before taking the snapshot.




EBS vs. Instance Store 

* Instance Store Volumes are sometimes called Ephemeral Storage.

* Instance store volumes cannot be stopped. If the underlying host fails, you will lose your data.

* EBS backed instances can be stopped. You will not lose the data on this instance if it is stopped.

* You can reboot both; you will not lose your data.

* By default, both ROOT volumes will be deleted on termination; however, with EBS volumes you can tell AWS to keep the root device volume (see the sketch below).
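Keeping the root volume is controlled by the DeleteOnTermination flag in the instance's block device mapping. A sketch; the instance ID and device name are placeholders:

aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"DeleteOnTermination":false}}]'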


How can I take a snapshot of a RAID Array?

* Problem - When you take a snapshot, the snapshot excludes data held in the cache by applications and the OS. This tends not to matter on a single volume; however, with multiple volumes in a RAID array it can be a problem due to the interdependencies of the array.


* Solution - Take an application-consistent snapshot

- Stop the application from writing to disk

- Flush all caches to the disk


- How can we do this? (A sketch of the freeze step follows below.)

  Freeze the file system

  Unmount the RAID array

  Shut down the associated EC2 instance
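On Linux, the freeze step can be done with fsfreeze, which flushes caches and blocks new writes until you unfreeze. A sketch, assuming the array is mounted at /mnt/raid:

# Flush caches and block writes while the snapshots are taken
sudo fsfreeze -f /mnt/raid

# ...take EBS snapshots of every volume in the array here...

# Unfreeze once the snapshots have been initiated
sudo fsfreeze -u /mnt/raid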

  


Amazon Machine Images 

* AMIs are regional. You can only launch an AMI from the region in which it is stored. However, you can copy AMIs to other regions using the console, the command line, or the Amazon EC2 API.
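Copying an AMI across regions from the CLI, as a sketch; the IDs, regions, and name are placeholders:

aws ec2 copy-image \
    --source-region us-east-1 \
    --source-image-id ami-0123456789abcdef0 \
    --region us-west-2 \
    --name "my-copied-ami"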


* Standard Monitoring = 5 Minutes

* Detailed Monitoring = 1 Minute


* CloudWatch is for performance monitoring

* CloudTrail is for auditing


What can I do with Cloudwatch?

* Dashboards - Creates awesome dashboards to see what is happening with your AWS environment

* Alarms - Allows you to set Alarms that notify you when particular thresholds are hit.

* Events - CloudWatch Events helps you to respond to state changes in your AWS resources.

* Logs - CloudWatch Logs helps you to aggregate, monitor, and store logs.
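Alarms can also be created from the CLI. A minimal sketch of a CPU alarm; the name, threshold, and period are only examples:

aws cloudwatch put-metric-alarm \
    --alarm-name cpu-high \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --statistic Average \
    --period 300 \
    --threshold 80 \
    --comparison-operator GreaterThanThreshold \
    --evaluation-periods 2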


Roles Lab

* Roles are more secure than storing your access key and secret access key on individual EC2 instances.

* Roles are easier to manage

* Roles can be assigned to an EC2 instance AFTER it has been provisioned, using either the command line or the AWS console (see the sketch below).

* Roles are universal; you can use them in any region.
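Attaching a role to an already-running instance from the CLI, as a sketch; the instance ID and profile name are placeholders:

aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=MyS3AccessRole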


Instance Meta-data

* Used to get information about an instance (such as its public IP)

* curl http://169.254.169.254/latest/meta-data/

* User data is separate from metadata; it is retrieved from http://169.254.169.254/latest/user-data


EFS Features

* Supports the Network File System version 4 (NFSv4) protocol

* You only pay for the storage you use (no pre-provisioning required)

* Can scale up to petabytes

* Can support thousands of concurrent NFS connections

* Data is stored across multiple AZs within a region

* Read-after-write consistency


What is Lambda?

* AWS Lambda is a compute service where you can upload your code and create a Lambda function. AWS Lambda takes care of provisioning and managing the servers that you use to run the code. You don't have to worry about operating systems, patching, scaling, etc. You can use Lambda in the following ways.


- As an event-driven compute service where AWS Lambda runs your code in response to events. These events could be changes to data in an Amazon S3 bucket or an Amazon DynamoDB table.

- As a compute service to run your code in response to HTTP requests using Amazon API Gateway or API calls made using AWS SDKs. This is what we use at A Cloud Guru





Quiz

- The default region for an SDK is "US-EAST-1"

- The AWS SDKs support Python, Ruby, Node.js, PHP, and Java (not C++)

- HTTP 5XX is a server-side error

- HTTP 4XX is a client-side error

- HTTP 3XX is a redirection

- HTTP 2XX means the request was successful

- To find out both the private and public IP addresses of an EC2 instance

  => Retrieve the instance Metadata from http://169.254.169.254/latest/meta-data/

- To retrieve instance metadata or user data, you will need to use this IP address

  => http://169.254.169.254

- In order to enable encryption at rest using EC2 and Elastic Block Store, you need to

  => Configure encryption when creating the EBS volume

 http://aws.amazon.com/about-aws/whats-new/2014/05/21/Amazon-EBS-encryption-now-available/

- You can have multiple SSL certificates on an Elastic Load Balancer

- Elastic Load Balancers are chargeable


* Elastic Load Balancer (Exam Tips)






Elastic Load Balancer FAQs

Classic Load Balancer

General

Application Load Balancer

Network Load Balancer






1178578-C02NW6G1G3QD:AWS_SSH changsoopark$ ssh ec2-user@34.228.166.148 -i EC2KeyPair.pem.txt 

Last login: Mon Oct 16 23:10:59 2017 from 208.185.161.249


       __|  __|_  )

       _|  (     /   Amazon Linux AMI

      ___|\___|___|


https://aws.amazon.com/amazon-linux-ami/2017.03-release-notes/

13 package(s) needed for security, out of 33 available

Run "sudo yum update" to apply all updates.

Amazon Linux version 2017.09 is available.

[ec2-user@ip-172-31-24-42 ~]$ sudo su

[root@ip-172-31-24-42 ec2-user]# service httpd status

httpd (pid  8282) is running...

[root@ip-172-31-24-42 ec2-user]# service httpd start

Starting httpd: 

[root@ip-172-31-24-42 ec2-user]# chkconfig httpd on

[root@ip-172-31-24-42 ec2-user]# service httpd status

httpd (pid  8282) is running...

[root@ip-172-31-24-42 ec2-user]# cd /var/www/html

[root@ip-172-31-24-42 html]# ls -l

total 4

-rw-r--r-- 1 root root 185 Sep 10 23:38 index.html

[root@ip-172-31-24-42 html]# nano healthcheck.html


[root@ip-172-31-24-42 html]# ls -l

total 8

-rw-r--r-- 1 root root  28 Oct 16 23:14 healthcheck.html

-rw-r--r-- 1 root root 185 Sep 10 23:38 index.html

[root@ip-172-31-24-42 html]# 


Go to EC2 in aws.amazon.com and click on Load Balancers in the left panel.



Classic Load Balancer


Next -> Select Security Group, Configure Security Settings (Next) -> 




Add Tag -> Review -> Close ->





(Edit Instance if there is no State)


- Create Application Load Balancer

-> Almost the same as the Classic Load Balancer.



-> The provisioning state will turn to active after a couple of minutes.


- Application Load Balancer : Preferred for HTTP/HTTPS (****)

- Classic Load Balancer (*****)


1 subnet = 1 availability zone


- Instances monitored by an ELB are reported as:

  InService or OutofService

  

- Health checks check the instance health by talking to it

- ELBs have their own DNS name; you are never given an IP address

- Read the ELB FAQ for Classic Load Balancers ***

- Delete Load Balancers after completing the test (they are a paid service)






* SDK's - Exam Tips


https://aws.amazon.com/tools/


- Android, iOS, JavaScript (Browser)

- Java

- .NET

- Node.js

- PHP

- Python

- Ruby

- Go

- C++


Default region - US-EAST-1

Some SDKs have default regions (Java)

Some do not (Node.js); a region can be set explicitly, as sketched below
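When an SDK has no default region, one way to supply it is through the environment. A sketch - AWS_DEFAULT_REGION is read by the CLI and several SDKs, though some (like the Node.js SDK) read AWS_REGION instead:

# Set a default region for the current shell
export AWS_DEFAULT_REGION=us-east-1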





* Lambda (*** - Several questions)


Data Center - IaaS - PaaS - Containers - Serverless


The best way to get started with AWS Lambda is to work through the Getting Started Guide, part of our technical documentation. Within a few minutes, you will be able to deploy and use an AWS Lambda function.



What is Lambda?

- Data Centers

- Hardware

- Assembly Code/Protocols

- High Level Languages

- Operating System

- Application Layer/AWS APIs

- AWS Lambda 


AWS Lambda is a compute service where you can upload your code and create a Lambda function. AWS Lambda takes care of provisioning and managing the servers that you use to run the code. You don't have to worry about operating systems, patching, scaling, etc. You can use Lambda in the following ways.


- As an event-driven compute service where AWS Lambda runs your code in response to events. These events could be changes to data in an Amazon S3 bucket or an Amazon DynamoDB table.

- As a compute service to run your code in response to HTTP requests using Amazon API Gateway or API calls made using AWS SDKs.


How to use Lambda -> refer to my articles on Alexa Skill development

http://coronasdk.tistory.com/931



What Languages?

Node.js

Java

Python

C#




How is Lambda Priced?

- Number of requests

   First 1 million requests are free. $0.20 per 1 million requests thereafter.


- Duration

  Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100ms. The price depends on the amount of memory you allocate to your function. You are charged $0.00001667 for every GB-second used.
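A worked example of the math, as a sketch with illustrative numbers (and ignoring the monthly free duration tier): a 512 MB function invoked 3 million times for 1 second each uses 3,000,000 x 1 s x 0.5 GB = 1,500,000 GB-seconds.

# Sketch: monthly cost for 3M invocations, 1 s each, at 512 MB
awk 'BEGIN {
    reqs = 3000000; dur_s = 1; mem_gb = 0.5
    req_cost = (reqs - 1000000) / 1000000 * 0.20    # first 1M requests are free
    gb_s     = reqs * dur_s * mem_gb                # GB-seconds consumed
    dur_cost = gb_s * 0.00001667                    # duration charge
    printf "requests: $%.2f  duration: $%.2f  total: $%.2f\n", req_cost, dur_cost, req_cost + dur_cost
}'
# total comes to roughly $25.40 per month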


Why is Lambda cool?

- No Servers!

- Continuous Scaling

- Super cheap!


Lambda - Exam Tips


Lambda scales out (not up) automatically

Lambda functions are independent: 1 event = 1 function

Lambda is serverless

Know what services are serverless! (S3, API Gateway, Lambda, DynamoDB, etc.) - EC2 is not serverless.

Lambda functions can trigger other Lambda functions: 1 event can = x functions if functions trigger other functions

Architectures can get extremely complicated; AWS X-Ray allows you to debug what is happening

Lambda can do things globally; you can use it to back up S3 buckets to other S3 buckets, etc.

Know your triggers

(Maximum execution duration is 5 minutes)




* Bash Script


Automatically execute scripts when creating an instance

- Enter the script in the Advanced Details text box when you create an instance



In this case, the system will execute all of these commands when the instance is created.

#!/bin/bash

yum update -y

yum install httpd -y

service httpd start

chkconfig httpd on

cd /var/www/html

aws s3 cp s3://mywebsitebucket-changsoo/index.html /var/www/html


: Updates the system, installs Apache, starts the httpd server, and copies index.html from S3 to the instance's /var/www/html folder. The same script can be passed from the CLI, as sketched below.
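A sketch of launching with that bootstrap script from the CLI; the AMI ID, instance type, and file name are placeholders:

aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --count 1 \
    --instance-type t2.micro \
    --user-data file://bootstrap.sh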



* Install PHP and create a PHP page


Enter the script below into Advanced Details when you create an instance


#!/bin/bash

yum update -y

yum install httpd24 php56 git -y

service httpd start

chkconfig httpd on

cd /var/www/html

echo "<?php phpinfo();?>" > test.php

git clone https://github.com/acloudguru/s3


Navigate to http://<public IP address>/test.php in your browser and you will see the PHP info page




Access the server through the terminal


1178578-C02NW6G1G3QD:AWS_SSH changsoopark$ ssh ec2-user@54.89.219.112 -i EC2KeyPair.pem.txt 


.....................


[root@ip-172-31-80-161 ec2-user]# cd /var/www/html

[root@ip-172-31-80-161 html]# ls -l

total 8

drwxr-xr-x 3 root root 4096 Oct 12 23:52 s3

-rw-r--r-- 1 root root   19 Oct 12 23:52 test.php

[root@ip-172-31-80-161 html]# 


==> there is a test.php file and an s3 folder downloaded from A Cloud Guru's GitHub repository



https://docs.aws.amazon.com/aws-sdk-php/v3/guide/getting-started/installation.html



Installing via Composer

Using Composer is the recommended way to install the AWS SDK for PHP. Composer is a dependency management tool for PHP that allows you to declare the dependencies your project needs and installs them into your project.

  1. Install Composer

    curl -sS https://getcomposer.org/installer | php
    
  2. Run the Composer command to install the latest stable version of the SDK:

    php composer.phar require aws/aws-sdk-php
    
  3. Require Composer's autoloader:

    <?php
    require 'vendor/autoload.php';
    

You can find out more on how to install Composer, configure autoloading, and other best-practices for defining dependencies at getcomposer.org.


[root@ip-172-31-80-161 html]# pwd

/var/www/html

[root@ip-172-31-80-161 html]# curl -sS https://getcomposer.org/installer | php

curl: (35) Network file descriptor is not connected

[root@ip-172-31-80-161 html]# curl -sS https://getcomposer.org/installer | php

All settings correct for using Composer

Downloading...


Composer (version 1.5.2) successfully installed to: /var/www/html/composer.phar

Use it: php composer.phar


[root@ip-172-31-80-161 html]# php composer.phar require aws/aws-sdk-php

Do not run Composer as root/super user! See https://getcomposer.org/root for details

Using version ^3.36 for aws/aws-sdk-php

./composer.json has been created

Loading composer repositories with package information

Updating dependencies (including require-dev)

Package operations: 6 installs, 0 updates, 0 removals

  - Installing mtdowling/jmespath.php (2.4.0): Downloading (100%)         

  - Installing psr/http-message (1.0.1): Downloading (100%)         

  - Installing guzzlehttp/psr7 (1.4.2): Downloading (100%)         

  - Installing guzzlehttp/promises (v1.3.1): Downloading (100%)         

  - Installing guzzlehttp/guzzle (6.3.0): Downloading (100%)         

  - Installing aws/aws-sdk-php (3.36.26): Downloading (100%)         

guzzlehttp/guzzle suggests installing psr/log (Required for using the Log middleware)

aws/aws-sdk-php suggests installing aws/aws-php-sns-message-validator (To validate incoming SNS notifications)

aws/aws-sdk-php suggests installing doctrine/cache (To use the DoctrineCacheAdapter)

Writing lock file

Generating autoload files

[root@ip-172-31-80-161 html]# ls -l

total 1844

-rw-r--r-- 1 root root      62 Oct 13 00:04 composer.json

-rw-r--r-- 1 root root   12973 Oct 13 00:04 composer.lock

-rwxr-xr-x 1 root root 1852323 Oct 13 00:04 composer.phar

drwxr-xr-x 3 root root    4096 Oct 12 23:52 s3

-rw-r--r-- 1 root root      19 Oct 12 23:52 test.php

drwxr-xr-x 8 root root    4096 Oct 13 00:04 vendor

[root@ip-172-31-80-161 html]# cd vendor

[root@ip-172-31-80-161 vendor]# ls -l

total 28

-rw-r--r-- 1 root root  178 Oct 13 00:04 autoload.php

drwxr-xr-x 3 root root 4096 Oct 13 00:04 aws

drwxr-xr-x 2 root root 4096 Oct 13 00:04 bin

drwxr-xr-x 2 root root 4096 Oct 13 00:04 composer

drwxr-xr-x 5 root root 4096 Oct 13 00:04 guzzlehttp

drwxr-xr-x 3 root root 4096 Oct 13 00:04 mtdowling

drwxr-xr-x 3 root root 4096 Oct 13 00:04 psr

[root@ip-172-31-80-161 vendor]# vi autoload.php


<?php


// autoload.php @generated by Composer


require_once __DIR__ . '/composer/autoload_real.php';


return ComposerAutoloaderInit818e4cd87569a511144599b49f2b1fed::getLoader();






* Using PHP to access S3



[root@ip-172-31-80-161 s3]# ls -l

total 24

-rw-r--r-- 1 root root 796 Oct 12 23:52 cleanup.php

-rw-r--r-- 1 root root 195 Oct 12 23:52 connecttoaws.php

-rw-r--r-- 1 root root 666 Oct 12 23:52 createbucket.php

-rw-r--r-- 1 root root 993 Oct 12 23:52 createfile.php

-rw-r--r-- 1 root root 735 Oct 12 23:52 readfile.php

-rw-r--r-- 1 root root 193 Oct 12 23:52 README.md

[root@ip-172-31-80-161 s3]# vi createbucket.php 


<?php

//copyright 2015 - A Cloud Guru.


//connection string

include 'connecttoaws.php';


// Create a unique bucket name

$bucket = uniqid("acloudguru", true);


// Create our bucket using our unique bucket name

$result = $client->createBucket(array(

    'Bucket' => $bucket

));


//HTML to Create our webpage

echo "<h1 align=\"center\">Hello Cloud Guru!</h1>";

echo "<div align = \"center\"><img src=\"https://acloud.guru/images/logo-small-optimised.png\"></img></div>";

echo "<h2 align=\"center\">You have successfully created a bucket called {$bucket}</h2>";

echo "<div align=\"center\"><a href=\"createfile.php?bucket=$bucket\">Click Here to Continue</a></div>";

?>


[root@ip-172-31-80-161 s3]# vi connecttoaws.php 


<?php

// Include the SDK using the Composer autoloader

require '/var/www/html/vendor/autoload.php';

$client = new Aws\S3\S3Client([

    'version' => 'latest',

    'region'  => 'us-east-1'

]);

?>


[root@ip-172-31-80-161 s3]# vi createfile.php 


<?php

//Copyright 2015 A Cloud Guru


//Connection string

include 'connecttoaws.php';


/*

Files in Amazon S3 are called "objects" and are stored in buckets. A specific

object is referred to by its key (or name) and holds data. In this file

we create an object called cloudguru.txt that contains the data

'Hello Cloud Gurus!'

and we upload/put it into our newly created bucket.

*/


//get the bucket name

$bucket = $_GET["bucket"];


//create the file name

$key = 'cloudguru.txt';


//put the file and data in our bucket

$result = $client->putObject(array(

    'Bucket' => $bucket,

    'Key'    => $key,

    'Body'   => "Hello Cloud Gurus!"

));


//HTML to create our webpage

echo "<h2 align=\"center\">File - $key has been successfully uploaded to $bucket</h2>";

echo "<div align = \"center\"><img src=\"https://acloud.guru/images/logo-small-optimised.png\"></img></div>";

echo "<div align = \"center\"><a href=\"readfile.php?bucket=$bucket&key=$key\">Click Here To Read Your File</a></div>";

?>


[root@ip-172-31-80-161 s3]# vi readfile.php 


<?php

//connection string

include 'connecttoaws.php';


//code to get our bucket and key names

$bucket = $_GET["bucket"];

$key = $_GET["key"];


//code to read the file on S3

$result = $client->getObject(array(

    'Bucket' => $bucket,

    'Key'    => $key

));

$data = $result['Body'];


//HTML to create our webpage

echo "<h2 align=\"center\">The Bucket is $bucket</h2>";

echo "<h2 align=\"center\">The Object's name is $key</h2>";

echo "<h2 align=\"center\">The Data in the object is $data</h2>";

echo "<div align = \"center\"><img src=\"https://acloud.guru/images/logo-small-optimised.png\"></img></div>";

echo "<div align = \"center\"><a href=\"cleanup.php?bucket=$bucket&key=$key\">Click Here To Remove Files & Bucket</a></div>";

?>

                      

[root@ip-172-31-80-161 s3]# vi cleanup.php 


<?php

//Connection String

include 'connecttoaws.php';


//Code to get our bucketname and file name

$bucket = $_GET["bucket"];

$key = $_GET["key"];


//buckets cannot be deleted unless they are empty

//Code to delete our object

$result = $client->deleteObject(array(

    'Bucket' => $bucket,

    'Key'    => $key

));


//code to tell user the file has been deleted.

echo "<h2 align=\"center\">Object $key successfully deleted.</h2>";


//Code to delete our bucket

$result = $client->deleteBucket(array(

    'Bucket' => $bucket

));


//code to create our webpage.

echo "<h2 align=\"center\">Bucket $bucket successfully deleted.</h2>";

echo "<div align = \"center\"><img src=\"https://acloud.guru/images/logo-small-optimised.png\"></img></div>";

echo "<h2 align=\"center\">Good Bye Cloud Gurus!</h2>";

?>


http://54.89.219.112/s3/createbucket.php




acloudguru... buckets have been created in my S3.

Click on the Link.





cloudguru.txt file has been uploaded to the bucket in S3.

Click on the Link.




Click on the Link.







The bucket has been removed.










* Instance Metadata and User Data


curl http://169.254.169.254/latest/meta-data/ (*****)


How to get the public IP address (Exam *****)


[root@ip-172-31-80-161 s3]# curl http://169.254.169.254/latest/meta-data/

ami-id

ami-launch-index

ami-manifest-path

block-device-mapping/

hostname

iam/

instance-action

instance-id

instance-type

local-hostname

local-ipv4

mac

metrics/

network/

placement/

profile

public-hostname

public-ipv4

public-keys/

reservation-id

security-groups

[root@ip-172-31-80-161 s3]# curl http://169.254.169.254/latest/meta-data/public-ipv4

54.89.219.112[root@ip-172-31-80-161 s3]# 


[root@ip-172-31-80-161 s3]# yum install httpd php php-mysql


[root@ip-172-31-80-161 s3]# service httpd start

Starting httpd: 

[root@ip-172-31-80-161 s3]# yum install git


[root@ip-172-31-80-161 s3]# cd /var/www/html

[root@ip-172-31-80-161 html]# git clone https://github.com/acloudguru/metadata

Cloning into 'metadata'...

remote: Counting objects: 9, done.

remote: Total 9 (delta 0), reused 0 (delta 0), pack-reused 9

Unpacking objects: 100% (9/9), done.



[root@ip-172-31-80-161 html]# ls -l

total 1848

-rw-r--r-- 1 root root      62 Oct 13 00:04 composer.json

-rw-r--r-- 1 root root   12973 Oct 13 00:04 composer.lock

-rwxr-xr-x 1 root root 1852323 Oct 13 00:04 composer.phar

drwxr-xr-x 3 root root    4096 Oct 13 00:34 metadata

drwxr-xr-x 3 root root    4096 Oct 13 00:15 s3

-rw-r--r-- 1 root root      19 Oct 12 23:52 test.php

drwxr-xr-x 8 root root    4096 Oct 13 00:08 vendor

[root@ip-172-31-80-161 html]# cd metadata

[root@ip-172-31-80-161 metadata]# ls -l

total 8

-rw-r--r-- 1 root root 676 Oct 13 00:34 curlexample.php

-rw-r--r-- 1 root root  11 Oct 13 00:34 README.md

[root@ip-172-31-80-161 metadata]# vi curlexample.php


<?php

        // create curl resource

        $ch = curl_init();

        $publicip = "http://169.254.169.254/latest/meta-data/public-ipv4";


        // set url

        curl_setopt($ch, CURLOPT_URL, "$publicip");


        //return the transfer as a string

        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);


        // $output contains the output string

        $output = curl_exec($ch);


        // close curl resource to free up system resources

        curl_close($ch);


        //Get the public IP address

        echo "The public IP address for your EC2 instance is $output";

?>


Open a Web Browser
http://54.89.219.112/metadata/curlexample.php





[AWS Certificate] Developer - AWS CLI memo


AWS Command Line Interface



- Getting Started

- CLI Reference

- GitHub Project

- Community Forum






* Terminate an Instance from the Terminal


1178578-C02NW6G1G3QD:AWS_SSH changsoopark$ ssh ec2-user@IP Address -i EC2KeyPair.pem.txt 

The authenticity of host 'IP address (IP address)' can't be established.

ECDSA key fingerprint is SHA256:..........

Are you sure you want to continue connecting (yes/no)? yes 

Warning: Permanently added '54.175.217.183' (ECDSA) to the list of known hosts.


       __|  __|_  )

       _|  (     /   Amazon Linux AMI

      ___|\___|___|


https://aws.amazon.com/amazon-linux-ami/2017.09-release-notes/

No packages needed for security; 1 packages available

Run "sudo yum update" to apply all updates.

[ec2-user@ip-172-31-89-170 ~]$ sudo su

[root@ip-172-31-89-170 ec2-user]# aws s3 ls

Unable to locate credentials. You can configure credentials by running "aws configure".


==> can't access S3.


[root@ip-172-31-89-170 ec2-user]# aws configure

AWS Access Key ID [None]: 


==> Open the CSV file you downloaded and enter the AWS Access Key ID and AWS Secret Access Key.

==> Enter the region name


==> type aws s3 ls ==> will display the bucket list

==> aws s3 help ==> displays all commands



cd ~ -> home directory

ls

cd .aws

ls

nano credentials ==> Access_key_id , secret_access_key
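For reference, the credentials file uses an INI-style layout. A sketch; the values are placeholders:

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY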



aws ec2 describe-instances ==> displays all instances in JSON format


copy the instance ID of the running instance


aws ec2 terminate-instances --instance-ids 'instance id'


==> terminated


If the access_key_id and secret_access_key are accidentally exposed to the public, the resolution is to delete the user and re-create it





* Using Role instead of Access Key



------ Identity Access Management Roles Lab -------



- IAM - Create a Role   with S3 full access policy

*** All Roles are Global (*******) - No need to select a Region


- Create an EC2 Instance: assign the above role to this instance

==> You can replace the Role of an existing instance

: Actions - Instance Settings - Attach/Replace IAM role


Now aws s3 ls works:


[root@ip-172-31-81-181 ec2-user]# aws s3 ls

[root@ip-172-31-81-181 ec2-user]#


CLI Commands - Developer Associate Exam


[ec2-user@ip-172-31-81-181 ~]$ sudo su

[root@ip-172-31-81-181 ec2-user]# aws s3 ls

[root@ip-172-31-81-181 ec2-user]# cd ~

[root@ip-172-31-81-181 ~]# ls

[root@ip-172-31-81-181 ~]# cd .aws

bash: cd: .aws: No such file or directory

[root@ip-172-31-81-181 ~]# aws configure

AWS Access Key ID [None]: 

AWS Secret Access Key [None]: 

Default region name [None]: us-east-1

Default output format [None]: 

[root@ip-172-31-81-181 ~]# cd .aws

[root@ip-172-31-81-181 .aws]# ls

config

[root@ip-172-31-81-181 .aws]# cat config

[default]

region = us-east-1

[root@ip-172-31-81-181 .aws]# 


==> can access AWS without an Access Key ID

Terminate the instance from Terminal

[root@ip-172-31-81-181 .aws]# aws ec2 terminate-instances --instance-ids i-0575b748b9ec9e3fa

{

    "TerminatingInstances": [

        {

            "InstanceId": "i-0575b748b9ec9e3fa", 

            "CurrentState": {

                "Code": 32, 

                "Name": "shutting-down"

            }, 

            "PreviousState": {

                "Code": 16, 

                "Name": "running"

            }

        }

    ]

}

[root@ip-172-31-81-181 .aws]# 

Broadcast message from root@ip-172-31-81-181

(unknown) at 0:02 ...


The system is going down for power off NOW!

Connection to 52.70.118.204 closed by remote host.

Connection to 52.70.118.204 closed.

1178578-C02NW6G1G3QD:AWS_SSH changsoopark$ 





==> The instance shows as shutting down in the AWS console





============ CLI Commands For The Developer Exam


IAM - Create a Role - MyEC2Role - administrator access

Launch new instance - assign the role


ssh ec2-user.......

sudo su

aws configure (enter region only)


docs.aws.amazon.com/cli/latest/reference/ec2/index.html


(*****)

aws ec2 describe-instances

aws ec2 describe-images  - enter image id

aws ec2 run-instances

aws ec2 start-instances

(*****)



Do not confuse START-INSTANCES with RUN-INSTANCES (see the sketch below)

START-INSTANCES - STARTS (AND STOP-INSTANCES STOPS) AN EXISTING INSTANCE

RUN-INSTANCES - CREATES A NEW INSTANCE
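The two side by side, as a sketch; the IDs are placeholders:

# RUN-INSTANCES: launches a brand-new instance from an AMI
aws ec2 run-instances --image-id ami-0123456789abcdef0 --count 1 --instance-type t2.micro

# START-INSTANCES: starts an existing, stopped instance
aws ec2 start-instances --instance-ids i-0123456789abcdef0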


========================

-----S3 CLI & REGIONS


Launch new instance

Create S3 buckets (3 buckets)


Upload a file to one of the above buckets

Go to another bucket and upload another file.

Go to another bucket and upload another file.


IAM - Create a new Role (S3 full access)


EC2 - public IP address


terminal

ssh ec2-user@....

sudo su

aws s3 ls - will not work

attach the role to the EC2 instance

attach/replace IAM role (WEB)

go back to terminal

aws s3 ls -> will display


aws s3 cp --recursive s3://bucket1_name /home/bucket2_name

ls


copy file to bucket






* Security Group


- Virtual Firewall

- 1 instance can have multiple security groups



chkconfig httpd on - Apache will start automatically on reboot


EC2 - Left Menu - Security Group - Select WebDMZ



Inbound Rules - Delete the HTTP rule -> can no longer access the public IP http://34.228.166.148

*****


Outbound - All traffic - Delete -> can still access the public IP address


Edit an Inbound Rule -> the matching outbound response traffic is automatically allowed (stateful)


Actions -> Networking - Change Security Group -> can select multiple security group


Tips

- All Inbound Traffic is Blocked By Default

- All Outbound Traffic is Allowed

- Changes to Security Groups take effect immediately

- You can have any number of EC2 instances within a security group

- You can have multiple security groups attached to an EC2 instance

- Security Groups are STATEFUL (*****) (whereas Network Access Control Lists are stateless)

  : If you create an inbound rule allowing traffic in, that traffic is automatically allowed back out again

  : You cannot block specific IP addresses using Security Groups; instead use Network Access Control Lists (VPC section)


- You can specify allow rules, but not deny rules. (*****) A CLI sketch for adding an allow rule follows below.
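Adding an inbound allow rule from the CLI, as a sketch; the group ID is a placeholder:

# Allow HTTP in from anywhere (security groups only have allow rules)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 --cidr 0.0.0.0/0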






* Upgrading EBS Volume Types 1


Magnetic Storage


lsblk

[root@ip-172-31-19-244 ec2-user]# lsblk

NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

xvda    202:0    0   8G  0 disk 

└─xvda1 202:1    0   8G  0 part /

xvdb    202:16   0   8G  0 disk 

[root@ip-172-31-19-244 ec2-user]# mkfs -t ext4 /dev/xvdb

mke2fs 1.42.12 (29-Aug-2014)

Creating filesystem with 2097152 4k blocks and 524288 inodes

Filesystem UUID: 1a4f0040-89b5-4ac0-8345-15ceb7c868fb

Superblock backups stored on blocks: 

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632


Allocating group tables: done                            

Writing inode tables: done                            

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done 


[root@ip-172-31-19-244 ec2-user]# mkdir /changsoopark

[root@ip-172-31-19-244 ec2-user]# mount /dev/xvdb /changsoopark

[root@ip-172-31-19-244 ec2-user]# lsblk

NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

xvda    202:0    0   8G  0 disk 

└─xvda1 202:1    0   8G  0 part /

xvdb    202:16   0   8G  0 disk /changsoopark

[root@ip-172-31-19-244 ec2-user]# cd /changsoopark

[root@ip-172-31-19-244 changsoopark]# ls -l

total 16

drwx------ 2 root root 16384 Oct  5 00:05 lost+found

[root@ip-172-31-19-244 changsoopark]# nano test.html

[root@ip-172-31-19-244 changsoopark]# ls -l

total 20

drwx------ 2 root root 16384 Oct  5 00:05 lost+found

-rw-r--r-- 1 root root    19 Oct  5 00:07 test.html

[root@ip-172-31-19-244 changsoopark]# 


unmount the volume - umount /dev/xvdb


[root@ip-172-31-19-244 /]# cd /

[root@ip-172-31-19-244 /]# umount /dev/xvdb

[root@ip-172-31-19-244 /]# cd /changsoopark

[root@ip-172-31-19-244 changsoopark]# ls -l

total 0

[root@ip-172-31-19-244 changsoopark]# 


mount it again and check the folder


[root@ip-172-31-19-244 changsoopark]# cd /

[root@ip-172-31-19-244 /]# mount /dev/xvdb /changsoopark

[root@ip-172-31-19-244 /]# cd /changsoopark

[root@ip-172-31-19-244 changsoopark]# ls -l

total 20

drwx------ 2 root root 16384 Oct  5 00:05 lost+found

-rw-r--r-- 1 root root    19 Oct  5 00:07 test.html

[root@ip-172-31-19-244 changsoopark]# 


unmount it again


[root@ip-172-31-19-244 changsoopark]# cd /

[root@ip-172-31-19-244 /]# umount /dev/xvdb

[root@ip-172-31-19-244 /]# cd /changsoopark

[root@ip-172-31-19-244 changsoopark]# ls -l

total 0

[root@ip-172-31-19-244 changsoopark]# 


- aws.amazon.com : Detach Volume  and Create Snapshot - Create Volume : Select Volume Type


Attach Volume - Select instance and Attach button --> Go to Console


[root@ip-172-31-19-244 changsoopark]# lsblk

NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

xvda    202:0    0   8G  0 disk 

└─xvda1 202:1    0   8G  0 part /

xvdf    202:80   0   8G  0 disk               ====> New Volume (partition)

[root@ip-172-31-19-244 changsoopark]# file -s /dev/xvdf

/dev/xvdf: Linux rev 1.0 ext4 filesystem data, UUID=1a4f0040-89b5-4ac0-8345-15ceb7c868fb (extents) (large files) (huge files)

[root@ip-172-31-19-244 changsoopark]# mount /dev/xvdf /changsoopark

[root@ip-172-31-19-244 changsoopark]# cd /changsoopark

[root@ip-172-31-19-244 changsoopark]# ls -l

total 20

drwx------ 2 root root 16384 Oct  5 00:05 lost+found

-rw-r--r-- 1 root root    19 Oct  5 00:07 test.html







Create a volume, set up MySQL on the volume, mount, unmount, attach, detach, snapshot, remount

These steps can be in the exam





* Upgrading EBS Volume Types 2



Delete the instance - delete the volume and delete the snapshot separately


Exam Tips

- EBS Volumes can be changed on the fly (except for magnetic standard); a sketch follows this list

- Best practice is to stop the EC2 instance and then change the volume

- You can change volume types by taking a snapshot and then using the snapshot to create a new volume

- If you change a volume on the fly, you must wait 6 hours before making another change

- You can scale EBS Volumes up only

- Volumes must be in the same AZ as the EC2 instance
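Changing a volume on the fly is done with modify-volume. A sketch; the volume ID and values are placeholders:

aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --volume-type gp2 --size 200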






* EFS (Elastic File System) Lab



What is EFS


Amazon Elastic File System (Amazon EFS) is a file storage service for Amazon Elastic Compute Cloud (Amazon EC2) instances. Amazon EFS is easy to use and provides a simple interface that allows you to create and configure file systems quickly and easily. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it.


- Supports the Network File System version 4 (NFSv4) protocol

- You only pay for the storage you use (no pre-provisioning required)

- Can scale up to petabytes

- Can support thousands of concurrent NFS connections

- Data is stored across multiple AZs within a region

- Read-after-write consistency

- EFS block-based storage vs. S3 object-based storage (a mount sketch follows this list)
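Mounting EFS from an instance uses the standard NFSv4 client. A sketch; the file system ID and region are placeholders, and the console shows the exact command with tuned mount options:

sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-0123abcd.efs.us-east-1.amazonaws.com:/ /mnt/efs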






aws.amazon.com

EFS - Create File system - Configure file system access

: VPC - An Amazon EFS file system is accessed by EC2 instances running inside one of your VPCs. Instances connect to a file system by using a network interface called a mount target. Each mount target has an IP address, which we assign automatically or you can specify.


: Create mount targets - Instances connect to a file system by using mount targets you create. We recommend creating a mount target in each of your VPC's Availability Zones so that EC2 instances across your VPC can access the file system.

==> AZ, Subnet, IP address, Security groups

- Tag and Create File System -> Done


Create a new instance

Step 1, Step 2 - default

Step 3 - default, except Subnet => select the subnet created for the EFS above

Step 4 - Add Storage


Create another instance - select Load Balancer


VPC, Subnet


Define Load Balancer


==> Check EFS






