This blog is where I, as a working developer, organize the new technologies and information I come across in the field. I am fortunate to work as a consultant on projects for large companies in the US, so I have many opportunities to encounter new technologies. I would like to share information about the tools used in US IT projects with all of you.
솔웅



* Elastic Load Balancer (Exam Tips)






Elastic Load Balancer FAQs

Classic Load Balancer

General

Application Load Balancer

Network Load Balancer






1178578-C02NW6G1G3QD:AWS_SSH changsoopark$ ssh ec2-user@34.228.166.148 -i EC2KeyPair.pem.txt 

Last login: Mon Oct 16 23:10:59 2017 from 208.185.161.249


       __|  __|_  )

       _|  (     /   Amazon Linux AMI

      ___|\___|___|


https://aws.amazon.com/amazon-linux-ami/2017.03-release-notes/

13 package(s) needed for security, out of 33 available

Run "sudo yum update" to apply all updates.

Amazon Linux version 2017.09 is available.

[ec2-user@ip-172-31-24-42 ~]$ sudo su

[root@ip-172-31-24-42 ec2-user]# service httpd status

httpd (pid  8282) is running...

[root@ip-172-31-24-42 ec2-user]# service httpd start

Starting httpd: 

[root@ip-172-31-24-42 ec2-user]# chkconfig httpd on

[root@ip-172-31-24-42 ec2-user]# service httpd status

httpd (pid  8282) is running...

[root@ip-172-31-24-42 ec2-user]# cd /var/www/html

[root@ip-172-31-24-42 html]# ls -l

total 4

-rw-r--r-- 1 root root 185 Sep 10 23:38 index.html

[root@ip-172-31-24-42 html]# nano healthcheck.html


[root@ip-172-31-24-42 html]# ls -l

total 8

-rw-r--r-- 1 root root  28 Oct 16 23:14 healthcheck.html

-rw-r--r-- 1 root root 185 Sep 10 23:38 index.html

[root@ip-172-31-24-42 html]# 
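The nano step above can also be done non-interactively. A sketch — `DOCROOT` stands in for /var/www/html, and the page body is an assumption (any content works; the load balancer only checks the HTTP response):

```shell
# Create a minimal page for the ELB health check to poll.
# DOCROOT is a placeholder for /var/www/html on the instance.
DOCROOT=${DOCROOT:-.}
echo '<html>healthy</html>' > "$DOCROOT/healthcheck.html"
ls -l "$DOCROOT/healthcheck.html"
```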


Go to EC2 in aws.amazon.com and click on Load Balancers in the left panel.



Classic Load Balancer


Next -> Select Security Group, Configure Security Settings (Next) -> 




Add Tag -> Review -> Close ->





(Edit Instance if there is no State)


- Create Application Load Balancer

-> Almost the same as Classic Load Balancer.



-> The Provisioning state will turn to active after a couple of minutes.


- Application Load Balancer : Preferred for HTTP/HTTPS (****)

- Classic Load Balancer (*****)


1 subnet = 1 availability zone


- Instances monitored by ELB are reported as:

  InService or OutOfService

  

- Health Checks check the instance health by talking to it

- They have their own DNS name. You are never given an IP address.

- Read the ELB FAQ for Classic Load Balancers ***

- Delete Load Balancers after completing the test (they are a paid service).
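A rough sketch of the InService/OutOfService decision above: the load balancer requests the health-check page and classifies the instance by the HTTP status it gets back. The URL below is a placeholder, and the 200-or-nothing rule is a simplification of the configurable healthy/unhealthy thresholds:

```shell
# Classify an instance the way an ELB health check does (simplified):
# HTTP 200 within the timeout => InService, anything else => OutOfService.
classify() {
  if [ "$1" = "200" ]; then echo InService; else echo OutOfService; fi
}

# Placeholder URL; on a real setup this is the instance's health-check page.
status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 \
  "http://example.com/healthcheck.html" 2>/dev/null || true)
classify "${status:-000}"
```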






* SDK's - Exam Tips


https://aws.amazon.com/tools/


- Android, iOS, JavaScript (Browser)

- Java

- .NET

- Node.js

- PHP

- Python

- Ruby

- Go

- C++


Default Region - US-EAST-1

Some have a default region (Java)

Some do not (Node.js)





* Lambda (*** - Several questions)


Data Center - IAAS - PAAS - Containers - Serverless


The best way to get started with AWS Lambda is to work through the Getting Started Guide, part of our technical documentation. Within a few minutes, you will be able to deploy and use an AWS Lambda function.



What is Lambda?

- Data Centers

- Hardware

- Assembly Code/Protocols

- High Level Languages

- Operating System

- Application Layer/AWS APIs

- AWS Lambda 


AWS Lambda is a compute service where you can upload your code and create a Lambda function. AWS Lambda takes care of provisioning and managing the servers that you use to run the code. You don't have to worry about operating systems, patching, scaling, etc. You can use Lambda in the following ways.


- As an event-driven compute service where AWS Lambda runs your code in response to events. These events could be changes to data in an Amazon S3 bucket or an Amazon DynamoDB table.

- As a compute service to run your code in response to HTTP requests using Amazon API Gateway or API calls made using AWS SDKs. 


How to use Lambda -> refer to my articles on Alexa Skill development

http://coronasdk.tistory.com/931



What Languages?

Node.js

Java

Python

C#




How is Lambda Priced?

- Number of requests

   First 1 million requests are free. $0.20 per 1 million requests thereafter.


- Duration

  Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100ms. The price depends on the amount of memory you allocate to your function. You are charged $0.00001667 for every GB-second used.
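A back-of-the-envelope check of the pricing above. The workload figures are made up, the request free tier is applied, and the monthly duration free tier is ignored for brevity:

```shell
# Hypothetical workload: 5M requests/month, 200ms average at 512MB.
requests=5000000
duration_ms=200       # billed duration, rounded up to the nearest 100ms
memory_mb=512

# Requests: first 1M free, then $0.20 per million.
req_cost=$(awk -v r="$requests" \
  'BEGIN { b = r - 1000000; if (b < 0) b = 0; printf "%.2f", b / 1000000 * 0.20 }')

# Duration: GB-seconds x $0.00001667 (duration free tier ignored here).
dur_cost=$(awk -v r="$requests" -v ms="$duration_ms" -v mb="$memory_mb" \
  'BEGIN { printf "%.2f", r * (ms / 1000) * (mb / 1024) * 0.00001667 }')

echo "requests \$$req_cost + duration \$$dur_cost"
```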


Why is Lambda cool?

- No Servers!

- Continuous Scaling

- Super cheap!


Lambda - Exam Tips


Lambda scales out (not up) automatically

Lambda functions are independent, 1 event = 1 function

Lambda is serverless

Know which services are serverless! (S3, API Gateway, Lambda, DynamoDB, etc.) EC2 is not serverless.

Lambda functions can trigger other Lambda functions; 1 event can = x functions if functions trigger other functions

Architectures can get extremely complicated; AWS X-Ray allows you to debug what is happening

Lambda can do things globally; you can use it to back up S3 buckets to other S3 buckets, etc.

Know your triggers

(Maximum execution duration is 5 minutes)



CC BY-NC-SA (Attribution-NonCommercial-ShareAlike)


* Bash Script


Auto-executing a script when creating an instance

- Enter the script in the Advanced Details text box when you create an instance



In this case, the system will execute the script when the instance is created.

#!/bin/bash

yum update -y

yum install httpd -y

service httpd start

chkconfig httpd on

cd /var/www/html

aws s3 cp s3://mywebsitebucket-changsoo/index.html /var/www/html


: Update the system, install Apache, start the httpd server, and copy index.html from S3 to the instance's /var/www/html folder



* Install PHP and create a PHP page


Enter the script below into Advanced Details when you create an instance


#!/bin/bash

yum update -y

yum install httpd24 php56 git -y

service httpd start

chkconfig httpd on

cd /var/www/html

echo "<?php phpinfo();?>" > test.php

git clone https://github.com/acloudguru/s3


Navigate to http://'public IP address'/test.php in your browser and you will see the PHP info page.




Access the server through the terminal


1178578-C02NW6G1G3QD:AWS_SSH changsoopark$ ssh ec2-user@54.89.219.112 -i EC2KeyPair.pem.txt 


.....................


[root@ip-172-31-80-161 ec2-user]# cd /var/www/html

[root@ip-172-31-80-161 html]# ls -l

total 8

drwxr-xr-x 3 root root 4096 Oct 12 23:52 s3

-rw-r--r-- 1 root root   19 Oct 12 23:52 test.php

[root@ip-172-31-80-161 html]# 


==> There are test.php and the s3 folder downloaded from acloudguru's GitHub repository



https://docs.aws.amazon.com/aws-sdk-php/v3/guide/getting-started/installation.html



Installing via Composer

Using Composer is the recommended way to install the AWS SDK for PHP. Composer is a dependency management tool for PHP that allows you to declare the dependencies your project needs and installs them into your project.

  1. Install Composer

    curl -sS https://getcomposer.org/installer | php
    
  2. Run the Composer command to install the latest stable version of the SDK:

    php composer.phar require aws/aws-sdk-php
    
  3. Require Composer's autoloader:

    <?php
    require 'vendor/autoload.php';
    

You can find out more on how to install Composer, configure autoloading, and other best-practices for defining dependencies at getcomposer.org.


[root@ip-172-31-80-161 html]# pwd

/var/www/html

[root@ip-172-31-80-161 html]# curl -sS https://getcomposer.org/installer | php

curl: (35) Network file descriptor is not connected

[root@ip-172-31-80-161 html]# curl -sS https://getcomposer.org/installer | php

All settings correct for using Composer

Downloading...


Composer (version 1.5.2) successfully installed to: /var/www/html/composer.phar

Use it: php composer.phar


[root@ip-172-31-80-161 html]# php composer.phar require aws/aws-sdk-php

Do not run Composer as root/super user! See https://getcomposer.org/root for details

Using version ^3.36 for aws/aws-sdk-php

./composer.json has been created

Loading composer repositories with package information

Updating dependencies (including require-dev)

Package operations: 6 installs, 0 updates, 0 removals

  - Installing mtdowling/jmespath.php (2.4.0): Downloading (100%)         

  - Installing psr/http-message (1.0.1): Downloading (100%)         

  - Installing guzzlehttp/psr7 (1.4.2): Downloading (100%)         

  - Installing guzzlehttp/promises (v1.3.1): Downloading (100%)         

  - Installing guzzlehttp/guzzle (6.3.0): Downloading (100%)         

  - Installing aws/aws-sdk-php (3.36.26): Downloading (100%)         

guzzlehttp/guzzle suggests installing psr/log (Required for using the Log middleware)

aws/aws-sdk-php suggests installing aws/aws-php-sns-message-validator (To validate incoming SNS notifications)

aws/aws-sdk-php suggests installing doctrine/cache (To use the DoctrineCacheAdapter)

Writing lock file

Generating autoload files

[root@ip-172-31-80-161 html]# ls -l

total 1844

-rw-r--r-- 1 root root      62 Oct 13 00:04 composer.json

-rw-r--r-- 1 root root   12973 Oct 13 00:04 composer.lock

-rwxr-xr-x 1 root root 1852323 Oct 13 00:04 composer.phar

drwxr-xr-x 3 root root    4096 Oct 12 23:52 s3

-rw-r--r-- 1 root root      19 Oct 12 23:52 test.php

drwxr-xr-x 8 root root    4096 Oct 13 00:04 vendor

[root@ip-172-31-80-161 html]# cd vendor

[root@ip-172-31-80-161 vendor]# ls -l

total 28

-rw-r--r-- 1 root root  178 Oct 13 00:04 autoload.php

drwxr-xr-x 3 root root 4096 Oct 13 00:04 aws

drwxr-xr-x 2 root root 4096 Oct 13 00:04 bin

drwxr-xr-x 2 root root 4096 Oct 13 00:04 composer

drwxr-xr-x 5 root root 4096 Oct 13 00:04 guzzlehttp

drwxr-xr-x 3 root root 4096 Oct 13 00:04 mtdowling

drwxr-xr-x 3 root root 4096 Oct 13 00:04 psr

[root@ip-172-31-80-161 vendor]# vi autoload.php


<?php


// autoload.php @generated by Composer


require_once __DIR__ . '/composer/autoload_real.php';


return ComposerAutoloaderInit818e4cd87569a511144599b49f2b1fed::getLoader();






* Using PHP to access S3



[root@ip-172-31-80-161 s3]# ls -l

total 24

-rw-r--r-- 1 root root 796 Oct 12 23:52 cleanup.php

-rw-r--r-- 1 root root 195 Oct 12 23:52 connecttoaws.php

-rw-r--r-- 1 root root 666 Oct 12 23:52 createbucket.php

-rw-r--r-- 1 root root 993 Oct 12 23:52 createfile.php

-rw-r--r-- 1 root root 735 Oct 12 23:52 readfile.php

-rw-r--r-- 1 root root 193 Oct 12 23:52 README.md

[root@ip-172-31-80-161 s3]# vi createbucket.php 


<?php

//copyright 2015 - A Cloud Guru.


//connection string

include 'connecttoaws.php';


// Create a unique bucket name

$bucket = uniqid("acloudguru", true);


// Create our bucket using our unique bucket name

$result = $client->createBucket(array(

    'Bucket' => $bucket

));


//HTML to Create our webpage

echo "<h1 align=\"center\">Hello Cloud Guru!</h1>";

echo "<div align = \"center\"><img src=\"https://acloud.guru/images/logo-small-optimised.png\"></img></div>";

echo "<h2 align=\"center\">You have successfully created a bucket called {$bucket}</h2>";

echo "<div align=\"center\"><a href=\"createfile.php?bucket=$bucket\">Click Here to Continue</a></div>";

?>


[root@ip-172-31-80-161 s3]# vi connecttoaws.php 


<?php

// Include the SDK using the Composer autoloader

require '/var/www/html/vendor/autoload.php';

$client = new Aws\S3\S3Client([

    'version' => 'latest',

    'region'  => 'us-east-1'

]);

?>


[root@ip-172-31-80-161 s3]# vi createfile.php 


<?php

//Copyright 2015 A Cloud Guru


//Connection string

include 'connecttoaws.php';


/*

Files in Amazon S3 are called "objects" and are stored in buckets. A specific

object is referred to by its key (or name) and holds data. In this file

we create an object called acloudguru.txt that contains the data

'Hello Cloud Gurus!'

and we upload/put it into our newly created bucket.

*/


//get the bucket name

$bucket = $_GET["bucket"];


//create the file name

$key = 'cloudguru.txt';


//put the file and data in our bucket

$result = $client->putObject(array(

    'Bucket' => $bucket,

    'Key'    => $key,

    'Body'   => "Hello Cloud Gurus!"

));


//HTML to create our webpage

echo "<h2 align=\"center\">File - $key has been successfully uploaded to $bucket</h2>";

echo "<div align = \"center\"><img src=\"https://acloud.guru/images/logo-small-optimised.png\"></img></div>";

echo "<div align = \"center\"><a href=\"readfile.php?bucket=$bucket&key=$key\">Click Here To Read Your File</a></div>";

?>


[root@ip-172-31-80-161 s3]# vi readfile.php 


<?php

//connection string

include 'connecttoaws.php';


//code to get our bucket and key names

$bucket = $_GET["bucket"];

$key = $_GET["key"];


//code to read the file on S3

$result = $client->getObject(array(

    'Bucket' => $bucket,

    'Key'    => $key

));

$data = $result['Body'];


//HTML to create our webpage

echo "<h2 align=\"center\">The Bucket is $bucket</h2>";

echo "<h2 align=\"center\">The Object's name is $key</h2>";

echo "<h2 align=\"center\">The Data in the object is $data</h2>";

echo "<div align = \"center\"><img src=\"https://acloud.guru/images/logo-small-optimised.png\"></img></div>";

echo "<div align = \"center\"><a href=\"cleanup.php?bucket=$bucket&key=$key\">Click Here To Remove Files & Bucket</a></div>";

?>

                      

[root@ip-172-31-80-161 s3]# vi cleanup.php 


<?php

//Connection String

include'connecttoaws.php';


//Code to get our bucketname and file name

$bucket = $_GET["bucket"];

$key = $_GET["key"];


//buckets cannot be deleted unless they are empty

//Code to delete our object

$result = $client->deleteObject(array(

    'Bucket' => $bucket,

    'Key'    => $key

));


//code to tell user the file has been deleted.

echo "<h2 align=\"center\">Object $key successfully deleted.</h2>";


//Code to delete our bucket

$result = $client->deleteBucket(array(

    'Bucket' => $bucket

));


//code to create our webpage.

echo "<h2 align=\"center\">Bucket $bucket successfully deleted.</h2>";

echo "<div align = \"center\"><img src=\"https://acloud.guru/images/logo-small-optimised.png\"></img></div>";

echo "<h2 align=\"center\">Good Bye Cloud Gurus!</h2>";

?>


http://54.89.219.112/s3/createbucket.php




The acloudguru... buckets have been created in my S3. 

Click on the Link.





The cloudguru.txt file has been uploaded to the bucket in S3.

Click on the Link.




Click on the Link.







The bucket has been removed.










* Instance Metadata and User Data


curl http://169.254.169.254/latest/meta-data/ (*****)


How to get the public IP address (Exam *****)


[root@ip-172-31-80-161 s3]# curl http://169.254.169.254/latest/meta-data/

ami-id

ami-launch-index

ami-manifest-path

block-device-mapping/

hostname

iam/

instance-action

instance-id

instance-type

local-hostname

local-ipv4

mac

metrics/

network/

placement/

profile

public-hostname

public-ipv4

public-keys/

reservation-id

security-groups

[root@ip-172-31-80-161 s3]# curl http://169.254.169.254/latest/meta-data/public-ipv4

54.89.219.112[root@ip-172-31-80-161 s3]# 
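The metadata calls above can be wrapped in a tiny helper. Note the 169.254.169.254 endpoint is link-local and only answers from inside an EC2 instance, so outside EC2 this just composes the URL:

```shell
# Base of the instance metadata service (link-local; reachable only on EC2).
METADATA_BASE="http://169.254.169.254/latest/meta-data"

metadata_url() { printf '%s/%s' "$METADATA_BASE" "$1"; }

# On an instance you would fetch it with:  curl -s "$(metadata_url public-ipv4)"
metadata_url public-ipv4
```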


[root@ip-172-31-80-161 s3]# yum install httpd php php-mysql


[root@ip-172-31-80-161 s3]# service httpd start

Starting httpd: 

[root@ip-172-31-80-161 s3]# yum install git


[root@ip-172-31-80-161 s3]# cd /var/www/html

[root@ip-172-31-80-161 html]# git clone https://github.com/acloudguru/metadata

Cloning into 'metadata'...

remote: Counting objects: 9, done.

remote: Total 9 (delta 0), reused 0 (delta 0), pack-reused 9

Unpacking objects: 100% (9/9), done.




[root@ip-172-31-80-161 html]# ls -l

total 1848

-rw-r--r-- 1 root root      62 Oct 13 00:04 composer.json

-rw-r--r-- 1 root root   12973 Oct 13 00:04 composer.lock

-rwxr-xr-x 1 root root 1852323 Oct 13 00:04 composer.phar

drwxr-xr-x 3 root root    4096 Oct 13 00:34 metadata

drwxr-xr-x 3 root root    4096 Oct 13 00:15 s3

-rw-r--r-- 1 root root      19 Oct 12 23:52 test.php

drwxr-xr-x 8 root root    4096 Oct 13 00:08 vendor

[root@ip-172-31-80-161 html]# cd metadata

[root@ip-172-31-80-161 metadata]# ls -l

total 8

-rw-r--r-- 1 root root 676 Oct 13 00:34 curlexample.php

-rw-r--r-- 1 root root  11 Oct 13 00:34 README.md

[root@ip-172-31-80-161 metadata]# vi curlexample.php


<?php

        // create curl resource

        $ch = curl_init();

        $publicip = "http://169.254.169.254/latest/meta-data/public-ipv4";


        // set url

        curl_setopt($ch, CURLOPT_URL, "$publicip");


        //return the transfer as a string

        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);


        // $output contains the output string

        $output = curl_exec($ch);


        // close curl resource to free up system resources

        curl_close($ch);




        //Get the public IP address

        echo "The public IP address for your EC2 instance is $output";

?>


Open a Web Browser
http://54.89.219.112/metadata/curlexample.php








AWS Command Line Interface



- Getting Started

- CLI Reference

- GitHub Project

- Community Forum






* Terminate Instance from Terminal


1178578-C02NW6G1G3QD:AWS_SSH changsoopark$ ssh ec2-user@IP Address -i EC2KeyPair.pem.txt 

The authenticity of host 'IP address (IP address)' can't be established.

ECDSA key fingerprint is SHA256:..........

Are you sure you want to continue connecting (yes/no)? yes 

Warning: Permanently added '54.175.217.183' (ECDSA) to the list of known hosts.


       __|  __|_  )

       _|  (     /   Amazon Linux AMI

      ___|\___|___|


https://aws.amazon.com/amazon-linux-ami/2017.09-release-notes/

No packages needed for security; 1 packages available

Run "sudo yum update" to apply all updates.

[ec2-user@ip-172-31-89-170 ~]$ sudo su

[root@ip-172-31-89-170 ec2-user]# aws s3 ls

Unable to locate credentials. You can configure credentials by running "aws configure".


==> Can't access S3. 


[root@ip-172-31-89-170 ec2-user]# aws configure

AWS Access Key ID [None]: 


==> Open the CSV file you downloaded and enter the AWS Access Key ID and AWS Secret Access Key.

==> Enter the region name


==> Type aws s3 ls ==> it will display the bucket list

==> aws s3 help ==> displays all commands



cd ~ -> home directory

ls

cd .aws

ls

nano credentials ==> aws_access_key_id, aws_secret_access_key
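What `aws configure` writes can be sketched as two small INI files. The key values below are AWS's documented example placeholders, never real credentials, and `AWS_DIR` stands in for ~/.aws:

```shell
# Reproduce the layout `aws configure` creates. AWS_DIR stands in for ~/.aws.
AWS_DIR=${AWS_DIR:-./aws-demo}
mkdir -p "$AWS_DIR"

# Placeholder keys (the example values from the AWS documentation).
cat > "$AWS_DIR/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF

cat > "$AWS_DIR/config" <<'EOF'
[default]
region = us-east-1
EOF

ls "$AWS_DIR"
```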



aws ec2 describe-instances ==> display all instances in JSON format


copy instance id of running instance


aws ec2 terminate-instances --instance-ids 'instance id'


==> terminated


If the access_key_id and secret_access_key are accidentally exposed to the public -> the resolution is to delete the user and re-create it





* Using Role instead of Access Key



------ Identity Access Management Roles Lab -------



- IAM - Create a Role   with S3 full access policy

*** All Roles are global (*******) - no need to select a Region


- Create EC2 Instance : Assign above role to this instance

==> You can replace the Role of an existing instance

: Actions - Instance Settings - Attach/Replace IAM role


Now aws s3 ls works:


[root@ip-172-31-81-181 ec2-user]# aws s3 ls

[root@ip-172-31-81-181 ec2-user]#


CLI Commands - Developer Associate Exam


[ec2-user@ip-172-31-81-181 ~]$ sudo su

[root@ip-172-31-81-181 ec2-user]# aws s3 ls

[root@ip-172-31-81-181 ec2-user]# cd ~

[root@ip-172-31-81-181 ~]# ls

[root@ip-172-31-81-181 ~]# cd .aws

bash: cd: .aws: No such file or directory

[root@ip-172-31-81-181 ~]# aws configure

AWS Access Key ID [None]: 

AWS Secret Access Key [None]: 

Default region name [None]: us-east-1

Default output format [None]: 

[root@ip-172-31-81-181 ~]# cd .aws

[root@ip-172-31-81-181 .aws]# ls

config

[root@ip-172-31-81-181 .aws]# cat config

[default]

region = us-east-1

[root@ip-172-31-81-181 .aws]# 


==> Can access AWS without an Access Key ID

Terminate the instance from the terminal:

[root@ip-172-31-81-181 .aws]# aws ec2 terminate-instances --instance-ids i-0575b748b9ec9e3fa

{

    "TerminatingInstances": [

        {

            "InstanceId": "i-0575b748b9ec9e3fa", 

            "CurrentState": {

                "Code": 32, 

                "Name": "shutting-down"

            }, 

            "PreviousState": {

                "Code": 16, 

                "Name": "running"

            }

        }

    ]

}

[root@ip-172-31-81-181 .aws]# 

Broadcast message from root@ip-172-31-81-181

(unknown) at 0:02 ...


The system is going down for power off NOW!

Connection to 52.70.118.204 closed by remote host.

Connection to 52.70.118.204 closed.

1178578-C02NW6G1G3QD:AWS_SSH changsoopark$ 





==> The instance is shown shutting down in the AWS console
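The JSON that `terminate-instances` returns above can be picked apart with the tools already on the AMI. A sketch over the same response, using only sed/grep (not assuming jq is installed):

```shell
# The interesting part of the terminate-instances response shown above.
response='
    "InstanceId": "i-0575b748b9ec9e3fa",
    "CurrentState": { "Code": 32, "Name": "shutting-down" },
    "PreviousState": { "Code": 16, "Name": "running" }
'
instance_id=$(printf '%s' "$response" | sed -n 's/.*"InstanceId": "\([^"]*\)".*/\1/p')
current=$(printf '%s' "$response" | grep -o '"Name": "[^"]*"' | head -n 1 \
          | sed 's/.*"Name": "\(.*\)"/\1/')
echo "$instance_id is $current"
```

In practice the CLI can extract this itself with a JMESPath query, e.g. `--query 'TerminatingInstances[0].CurrentState.Name' --output text`.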





============ CLI Commands For The Developer Exam


IAM - Create a Role - MyEC2Role - administrator access

Launch new instance - assign the role


ssh ec2-user.......

sudo su

aws configure (enter region only)


docs.aws.amazon.com/cli/latest/reference/ec2/index.html


(*****)

aws ec2 describe-instances

aws ec2 describe-images  - enter image id

aws ec2 run-instances

aws ec2 start-instances

(*****)



Do not confuse START-INSTANCES with RUN-INSTANCES

START-INSTANCES - START AND STOP INSTANCE

RUN-INSTANCES - CREATE A NEW INSTANCE


========================

-----S3 CLI & REGIONS


Launch new instance

Create S3 buckets (3 buckets)


Upload a file to one of the buckets above.

Go to another bucket and upload another file.

Repeat for the third bucket.


IAM - Create a new Role (S3 full access)


EC2 - public IP address


terminal

ssh ec2-user@....

sudo su

aws s3 ls - will not work

attach the role to the EC2 instance

attach/replace IAM role (WEB)

go back to terminal

aws s3 ls -> will display


aws s3 cp --recursive s3://bucket1_name /home/bucket2_name

ls


copy file to bucket
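The copy step above, plus the reverse (upload) direction, sketched with placeholder bucket names. `aws s3 cp` accepts any mix of local paths and s3:// URIs, so bucket-to-bucket copies also work:

```shell
# Placeholder bucket names; substitute your own.
SRC_BUCKET=bucket1-name
DEST_BUCKET=bucket2-name

# Compose the three copy directions (echoed here; run them on an instance
# whose IAM role grants S3 access).
echo "aws s3 cp --recursive s3://${SRC_BUCKET} /home/ec2-user/${SRC_BUCKET}"
echo "aws s3 cp --recursive /home/ec2-user/${SRC_BUCKET} s3://${DEST_BUCKET}"
echo "aws s3 cp --recursive s3://${SRC_BUCKET} s3://${DEST_BUCKET}"
```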







* Security Group


- Virtual Firewall

- 1 instance can have multiple security groups



chkconfig httpd on - Apache will start automatically on reboot


EC2 - Left Menu - Security Group - Select WebDMZ



Inbound Rules - Delete the HTTP rule -> cannot access the public IP http://34.228.166.148

*****


Outbound - All traffic - Delete -> can still access the public IP address


Edit Inbound Rule -> automatically Edit Outbound Rule


Actions -> Networking - Change Security Group -> can select multiple security group


Tip

- All Inbound Traffic is Blocked By Default

- All Outbound Traffic is Allowed

- Changes to Security Groups take effect immediately

- You can have any number of EC2 instances within a security group.

- You can have multiple security groups attached to an EC2 instance

- Security Groups are STATEFUL (*****) (whereas network access control Lists - Stateless)

  : If you create an inbound rule allowing traffic in, that traffic is automatically allowed back out again

  : You cannot block specific IP addresses using Security Groups, instead use Network Access Control Lists (VPC section)

  

- You can specify allow rules but not deny rules. (*****)






* Upgrading EBS Volume Types 1


Magnetic Storage


lsblk

[root@ip-172-31-19-244 ec2-user]# lsblk

NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

xvda    202:0    0   8G  0 disk 

└─xvda1 202:1    0   8G  0 part /

xvdb    202:16   0   8G  0 disk 

[root@ip-172-31-19-244 ec2-user]# mkfs -t ext4 /dev/xvdb

mke2fs 1.42.12 (29-Aug-2014)

Creating filesystem with 2097152 4k blocks and 524288 inodes

Filesystem UUID: 1a4f0040-89b5-4ac0-8345-15ceb7c868fb

Superblock backups stored on blocks: 

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632


Allocating group tables: done                            

Writing inode tables: done                            

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done 


[root@ip-172-31-19-244 ec2-user]# mkdir /changsoopark

[root@ip-172-31-19-244 ec2-user]# mount /dev/xvdb /changsoopark

[root@ip-172-31-19-244 ec2-user]# lsblk

NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

xvda    202:0    0   8G  0 disk 

└─xvda1 202:1    0   8G  0 part /

xvdb    202:16   0   8G  0 disk /changsoopark

[root@ip-172-31-19-244 ec2-user]# cd /changsoopark

[root@ip-172-31-19-244 changsoopark]# ls -l

total 16

drwx------ 2 root root 16384 Oct  5 00:05 lost+found

[root@ip-172-31-19-244 changsoopark]# nano test.html

[root@ip-172-31-19-244 changsoopark]# ls -l

total 20

drwx------ 2 root root 16384 Oct  5 00:05 lost+found

-rw-r--r-- 1 root root    19 Oct  5 00:07 test.html

[root@ip-172-31-19-244 changsoopark]# 


unmount the volume - umount /dev/xvdb


[root@ip-172-31-19-244 /]# cd /

[root@ip-172-31-19-244 /]# umount /dev/xvdb

[root@ip-172-31-19-244 /]# cd /changsoopark

[root@ip-172-31-19-244 changsoopark]# ls -l

total 0

[root@ip-172-31-19-244 changsoopark]# 


mount it again and check the folder


[root@ip-172-31-19-244 changsoopark]# cd /

[root@ip-172-31-19-244 /]# mount /dev/xvdb /changsoopark

[root@ip-172-31-19-244 /]# cd /changsoopark

[root@ip-172-31-19-244 changsoopark]# ls -l

total 20

drwx------ 2 root root 16384 Oct  5 00:05 lost+found

-rw-r--r-- 1 root root    19 Oct  5 00:07 test.html

[root@ip-172-31-19-244 changsoopark]# 


unmount it again


[root@ip-172-31-19-244 changsoopark]# cd /

[root@ip-172-31-19-244 /]# umount /dev/xvdb

[root@ip-172-31-19-244 /]# cd /changsoopark

[root@ip-172-31-19-244 changsoopark]# ls -l

total 0

[root@ip-172-31-19-244 changsoopark]# 


- aws.amazon.com : Detach Volume  and Create Snapshot - Create Volume : Select Volume Type


Attach Volume - Select instance and Attach button --> Go to Console


[root@ip-172-31-19-244 changsoopark]# lsblk

NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

xvda    202:0    0   8G  0 disk 

└─xvda1 202:1    0   8G  0 part /

xvdf    202:80   0   8G  0 disk               ====> New Volume (partition)

[root@ip-172-31-19-244 changsoopark]# file -s /dev/xvdf

/dev/xvdf: Linux rev 1.0 ext4 filesystem data, UUID=1a4f0040-89b5-4ac0-8345-15ceb7c868fb (extents) (large files) (huge files)

[root@ip-172-31-19-244 changsoopark]# mount /dev/xvdf /changsoopark

[root@ip-172-31-19-244 changsoopark]# cd /changsoopark

[root@ip-172-31-19-244 changsoopark]# ls -l

total 20

drwx------ 2 root root 16384 Oct  5 00:05 lost+found

-rw-r--r-- 1 root root    19 Oct  5 00:07 test.html







Create a volume, set up MySQL on the volume, mount, unmount, attach, detach, snapshot, remount.

These steps can appear in the exam.





* Upgrading EBS Volume Types 2



Delete the instance - then delete the volume and the snapshot separately


Exam Tips

- EBS Volumes can be changed on the fly (except for magnetic standard)

- It is best practice to stop the EC2 instance and then change the volume

- You can change volume types by taking a snapshot and then using the snapshot to create a new volume

- If you change a volume on the fly you must wait for 6 hours before making another change

- You can scale EBS Volumes up only

- Volumes must be in the same AZ as the EC2 instances






* EFS (Elastic File System) Lab



What is EFS


Amazon Elastic File System (Amazon EFS) is a file storage service for Amazon Elastic Compute Cloud (Amazon EC2) instances. Amazon EFS is easy to use and provides a simple interface that allows you to create and configure file systems quickly and easily. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it.


- Supports the Network File System version 4 (NFSv4) protocol

- You only pay for the storage you use (no pre-provisioning required)

- Can scale up to the petabytes

- Can support thousands of concurrent NFS connections

- Data is stored across multiple AZ's within a region

- Read After Write Consistency

- EFS block based storage vs. S3 object based storage
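Since EFS speaks NFSv4, mounting it from an instance is a standard NFS mount against the mount target's DNS name. A sketch that only composes the command — the file system ID, region, and mount point are placeholders, so check the mount instructions the EFS console shows for your own file system:

```shell
# Placeholders - substitute your file system ID, region, and mount point.
FS_ID=fs-12345678
REGION=us-east-1
MOUNT_POINT=/mnt/efs

# An EFS mount target resolves via this DNS form; mounting uses plain NFSv4.1.
EFS_DNS="${FS_ID}.efs.${REGION}.amazonaws.com"
echo "sudo mkdir -p ${MOUNT_POINT}"
echo "sudo mount -t nfs4 -o nfsvers=4.1 ${EFS_DNS}:/ ${MOUNT_POINT}"
```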






aws.amazon.com

EFS - Create File system - Configure file system access

: VPC - An Amazon EFS file system is accessed by EC2 instances running inside one of your VPCs. Instances connect to a file system by using a network interface called a mount target. Each mount target has an IP address, which we assign automatically or you can specify.


: Create mount targets - Instances connect to a file system by using mount targets you create. We recommend creating a mount target in each of your VPC's Availability Zones so that EC2 instances across your VPC can access the file system.

==> AZ, Subnet, IP address, Security groups

- Tag and Create File System -> Done


Create New Instance

Step 1, Step 2 - Default

Step 3 - Default except Subnet => select the subnet created when you set up EFS above

Step 4 - Add Storage


Create another Instance - Select Load Balancer


VPC, Subnet


Define Load Balancer


==> Check EFS









Copied from alexa.design/standout

PDF file : 

GuideStandoutSkillFinal.pdf








Seven Qualities of Top-Performing Alexa Skills



Browse through the Alexa Skills Store, and you’ll see the innovations of our developer community on display. Our public catalog features more than 25,000 skills that enable a rich variety of scenarios including hands-free smart device control, on-demand content delivery, immersive adventure games, and more. You’ve created natural and engaging voice experiences that are delighting customers. And you’ve pushed the boundaries of what’s possible with voice to redefine how your customers interact with technology.



Now that you can earn money for eligible skills that drive the highest customer engagement, we know engagement is top of mind for many of you. To help you maximize the impact of your work, we analyzed our skill selection from the customers’ perspective. What makes a skill engaging for customers? And what keeps customers coming back over time? To find out, we examined the skills that see the highest consistent customer engagement. And we learned that these top performers share seven common qualities:



1. The skill makes a task faster and easier with voice

2. The skill has an intuitive and memorable name

3. The skill sets clear expectations on what it can do

4. The skill minimizes friction

5. The skill surprises and delights customers

6. The skill delivers fresh content

7. The skill is consistently reliable



In this guide, we will dive deeper into each quality and provide guidance on how you can incorporate it into your skill. We will also share exemplary skills that you can explore and model after. Leverage these insights to build standout skills that your customers will love.





1 The Skill Makes a Task Faster and Easier with Voice



When designing a new skill, make sure it has a clear customer benefit. Your skill should make a task faster and easier with voice. The skill should offer a more convenient user experience than existing methods, be it a light switch or a smartphone app.



Smart home skills, especially those that control multiple smart devices, are a great example of an existing experience made better with voice. They take a known workflow that involves multiple applications and simplify the steps into a single voice command, making the tasks both faster and easier. These skills offer a clear value to the customer.



When choosing your voice project, start with the purpose, or what customers want to accomplish. Then determine the capabilities of your skill and the benefits of using the skill over other options. Make sure your skill has a clear purpose before you start building. Skills that seamlessly integrate into a customer’s routine and provide value are especially popular.



Customers love The Dog Feeder skill because it helps simplify a daily task. Customers simply say, “Alexa, ask the dog if we fed her,” and Alexa shares when the dog last ate, giving families an easy way to manage a shared task. The skill addresses a need in the customers’ daily routine and provides value.



If you’re adapting an existing experience for voice, take a voice-first approach to designing and building your skill. In other words, avoid taking a visual experience or an app-first experience and simply adding voice to it. Instead, reimagine the interaction and figure out how to make it faster and easier with voice. Unless you offer an option that is twice as easy as what’s already available, customers don’t have an incentive to leave the UX they already know and adopt a new habit.





2 The Skill Has an Intuitive and Memorable Name



Once you’ve determined your skill’s purpose, give it a great name. Your skill’s name should help customers easily discover, understand, and remember your skill. If your skill name is longer and more difficult to say than a similar skill, you’ll risk losing customers—even if your skill offers more functionality. Remember, customers prefer voice because it’s our most natural form of interaction. So be sure to give your skill a name that’s natural to say and easy to grasp.



Take, for example, Daily Affirmation. The skill provides a new uplifting thought every day—just as the name suggests. For skills that deliver fresh content, specifying how often you’ll update the content tells the customers when to come back for more.

 

Even skills with more complex customer offerings can have a simple and memorable name.

The Magic Door is an interactive adventure game that takes customers through a magic door and into an enchanted forest. The name hints at many aspects of this sophisticated skill and is also easy to remember.


Once you’ve got an idea for your skill’s name, say the invocation name out loud, just as a customer would. See if it’s intuitive and easy to say. Let’s take the example of the Sleep and Relaxation Sounds skill. The customer will say something like, “Alexa, open Sleep and Relaxation Sounds.”



You can see that the invocation name speaks to the value of the skill, flows within the context, and will be easy to remember at bedtime.


Beta testers (or even friends or colleagues) can also help grade the strength of your skill’s name. Ask them what they expect the skill to do based on the name alone. Use their responses to determine whether your skill name clearly articulates your skill’s capabilities and value. After your skill is published, read the customer reviews to identify any gaps between the skill name and the skill experience.



3 The Skill Sets Clear Expectations on What It Can Do



When customers first invoke your skill, aim to provide just the right amount of information so customers know how to move forward. Provide not enough information, and customers won’t know what to do. Provide too much, and customers will get overwhelmed and leave. Finding the right balance is key to enabling your customers to seamlessly interact with your skill.



Then, when your users come back for a second visit, offer a different, abbreviated welcome. Since you’ve already introduced yourself, you can dive right in and pick up where you left off, just like you would with another person. When we talk to each other, our first conversation and our tenth conversation are quite different. That’s because we grow more familiar with each other, and our conversations gain context from previous talks. The same should hold true for your skill’s interaction with your customers.



For every interaction, keep Alexa’s responses concise so that your users stay engaged and can easily follow along. Put your skill’s responses to the one-breath test. Read aloud what you’ve written at a conversational pace. If you can say it all in one breath, the length is likely good. If you need to take a breath, consider reducing the length. For a response that includes successive ideas such as steps in a task, read each idea separately. While the entire response may require more than one breath, make sure your response requires breaths between, not during, ideas.



Once you’ve designed your skill, test your skill to make sure it works as you intended. Watch beta testers and customers try to use your skill and see whether you’ve presented the right amount of information to successfully guide them through the interaction.



 Learn more: Voice Design Guide: What Alexa Says

Try: Set Clear Expectations Using Our Code Sample





4 The Skill Minimizes Friction


As you add capabilities to your skill, make sure you don’t introduce unnecessary pain points or friction. Think through the entire interaction flow, and ensure your customers will know how to navigate from one step to the next. Remove any ambiguity that may hinder your customers from moving forward and getting what they’re looking for.



One way to minimize friction is to only add account linking when you truly need it. Account linking provides authentication when you need to associate the identity of the Alexa user with a user in your system. It’s a useful way to collect information that is very difficult to accurately recognize via voice, like email addresses (which often contain homophones like “one” and “1”). But account linking can also introduce friction for customers when they enable a skill as it prevents the process from being completed seamlessly via voice. Therefore, it should only be used when necessary, specifically when the resulting customer value offsets the risk of friction.



If your skill simply needs to persist data between sessions, account linking is not strictly required. The userID attribute provided with the request will identify the same user across sessions unless the customer disables and re-enables your skill. 
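If it helps to see where that identifier lives, here is a minimal Python sketch. The envelope shape follows the Alexa request format; the in-memory dict is only a stand-in for whatever datastore you actually use.

```python
# Using the Alexa userId as a persistence key. `saved_state` is an
# in-memory stand-in for a real datastore such as DynamoDB.
saved_state = {}

def get_user_id(request_envelope: dict) -> str:
    # The userId arrives on every request under session.user.userId.
    return request_envelope["session"]["user"]["userId"]

def remember(request_envelope: dict, state: dict) -> None:
    saved_state[get_user_id(request_envelope)] = state

def recall(request_envelope: dict) -> dict:
    return saved_state.get(get_user_id(request_envelope), {})

# Abbreviated example envelope (only the fields used here):
envelope = {"session": {"user": {"userId": "amzn1.ask.account.EXAMPLE"}}}
remember(envelope, {"last_question": 7})
state = recall(envelope)
```

Because the same userId is delivered on every request (until the customer disables and re-enables the skill), this key is enough to persist data between sessions without account linking.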


Some information, like physical address, is now available via the permissions framework. As that framework grows, account-linking flows should be limited to authentication scenarios only, not personalization. If you use account linking in your skill, be sure to follow best practices to minimize friction and ensure a smooth customer experience.



Learn more : 10 Tips for Successfully Adding Account Linking to Your Alexa Skill




5 The Skill Surprises and Delights Customers


In mobile and web design, it’s important to provide a consistent customer experience every time. Layout, color schemes, and names always stay the same so users don’t have to relearn the UI with each visit. But with voice, it’s important to have variety. People may not mind scanning the same web page time and again, but no one wants to have the same conversation over and over.



You can introduce variety throughout your skill to keep the interaction fresh. Think of all the different ways Alexa can welcome your customers, or the many ways Alexa can say “OK” (think: “Got it,” “Thanks,” “Sounds good,” “Great,” and so on). You can use these opportunities to inject variety, color, and humor to your skill. You can even prepare clever responses to customers’ requests for features your skill doesn’t yet support. By seizing these opportunities, you can make your interactions feel more natural, conversational, and even memorable. 



You can also build engagement over time by remembering what your users were doing last.

Storing data in Amazon DynamoDB allows you to add this memory and context to your skill.

Persistence allows you to pause games or guide users through a step-by-step process like creating a recipe, tackling a DIY project, or playing a game. For example, a game skill with memory enables customers to pause, come back, and pick up right where they left off.
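A sketch of that pause-and-resume pattern, with a plain dict standing in for the DynamoDB table so the example stays self-contained:

```python
# Pause-and-resume via persisted state. In a real skill, `store` would
# be backed by Amazon DynamoDB rather than an in-process dict.
store = {}

def save_progress(user_id: str, step: int) -> None:
    store[user_id] = {"step": step}

def welcome(user_id: str) -> str:
    # First-time users get an introduction; returning users resume.
    state = store.get(user_id)
    if state is None:
        return "Welcome! Let's start at step 1."
    return f"Welcome back! Picking up at step {state['step']}."
```

The welcome line doubles as the abbreviated repeat-visit greeting discussed earlier: the stored state is what lets the skill "remember" the previous conversation.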






6 The Skill Regularly Provides Fresh Content


As we’ve mentioned, customers expect variety in voice interactions. So it’s no surprise that skills that provide fresh content drive more regular usage over time. Fresh content gives customers a reason to return to your skill over time, and when they do, they are rewarded with something new.



This is especially true of flash briefing skills, which are built around the premise of delivering fresh content. When flash briefing skills don’t update as promised, customers tend to leave negative reviews.



However, the value of this quality doesn’t just apply to flash briefing skills; other skills should also get regular content updates. For example, fact skills and trivia skills that don’t evolve over time to offer new facts or questions don’t tend to see consistent engagement. Users may love the experience you’ve built, but if your skill never evolves beyond a set of limited choices, they won’t have reason to keep coming back.


The Jeopardy! skill is a model example of a skill that entices customers with fresh content. The skill serves up six new clues every weekday, giving fans reason to return five times a week.

When building your skill, establish a content workflow that enables you to quickly and easily add new content to your skill. One way to do this is to house your content in a database instead of hardcoding it into your skill to enable fast updates. Once you’ve set up a workflow, adhere to a schedule to make continued updates to your skill. Find ways to add fresh content and continue delighting your customers over time.
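One way to picture that separation of content from code: the handler below only reads from a `facts` list, which stands in here for rows in a content database, so refreshing content means updating the data source rather than redeploying the skill.

```python
import random

# Stand-in for a content table: adding an entry here (or, in practice,
# a row in your database) refreshes the skill without touching code.
facts = [
    "A group of flamingos is called a flamboyance.",
    "Honey never spoils.",
    "Octopuses have three hearts.",
]

def fresh_fact(rng: random.Random) -> str:
    # The handler stays generic; the content lives in the data source.
    return rng.choice(facts)
```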



Try: Keep Your Customers Engaged with Dynamic Content





7 The Skill Is Consistently Reliable

Even the most compelling and delightful voice experience won’t gain traction if it isn’t available whenever customers ask. To ensure your skill is consistently reliable, configure a professional-grade backend for your skill.


Amazon Web Services offers several solutions that will help you improve the user experience and ensure dependability of your skill as it gains users and handles more intricate content. Try Amazon CloudFront to cache dynamic content and files that require heavy lifting. This will improve your response time and provide better deliverability.

 

If you’ve built a top-notch skill, it will likely get noticed and highlighted in the Alexa Skills Store. So be sure your backend can support your skill’s moment in the spotlight. Your backend should be able to scale properly to ensure high availability during high-traffic scenarios. If you’re using Amazon DynamoDB, set your tables’ capacity for reads and writes per second to be much higher than your expected peak throughput. If your skill launches multiple AWS Lambda functions per skill invocation, check to see whether you are nearing the limits for function invocations. If you’re getting close, you can request a limit increase to ensure scalability. To set alarms for unforeseen scenarios, you can use Lambda’s built-in functionality to output logs to Amazon CloudWatch and trigger alarms based on the events in those logs.

 


 Once your skill is live, you can use Amazon QuickSight to visualize analytics you track in Amazon Redshift. You can see how your skill is performing, fix user experiences that don't resonate, and double down on what works to make your skill even more impactful.


AWS Promo Credits: If you incur AWS charges related to your skill, you can apply for AWS promotional credits. Once you’ve published a skill, apply to receive a $100 AWS promotional credit and an additional $100 per month in credit.

Apply now. 



Learn more: 5 Ways to Level Up Your Skill with AWS




Build Engaging Skills Your Customers Will Love


Whether you’re building a new skill or upgrading an existing skill, follow these tips to put your best skill forward. By building engaging voice experiences, you can reach and delight customers through tens of millions of devices with Alexa. And you can also enrich your skills over time to grow your expertise in voice design and evolve from a hobbyist to a professional.



It also pays to build highly engaging skills. Every month, developers can earn money for eligible skills that drive the highest customer engagement in seven eligible skill categories. 


Learn more and start building your next skill today.

 






Alexa Skills Kit


The Alexa Skills Kit is a collection of self-service APIs, tools, documentation, and code samples that makes it fast and easy for you to add skills to Alexa. With ASK, you can leverage Amazon’s knowledge and pioneering work in the field of voice design.



Additional Resources


Voice Design Guide

Documentation

Shortcut Start a Skill Now








Creative Commons Attribution-NonCommercial-ShareAlike


Today I am going to create my Amazon EC2 instance (Amazon Linux), install the Apache web server in the instance, and create my public web page.


You can create your own as well; just follow the steps below.


Refer to the A Cloud Guru AWS Certified Developer - Associate lectures for more details.



Posts beginning with [AWS Certificate] are my notes on what I learned while preparing for the AWS Certified Developer - Associate exam.

This post covers how to create an EC2 instance and a personal web page that can be reached from anywhere.

If you follow along, you can get a Linux server and space for a personal home page for free.




- Navigate to the EC2 page at https://console.aws.amazon.com/ec2 and click on the Launch Instance button





- Select AMI (Amazon Machine Image) as Amazon Linux




Amazon Machine Image


An Amazon Machine Image (AMI) is a special type of virtual appliance that is used to create a virtual machine within the Amazon Elastic Compute Cloud ("EC2"). It serves as the basic unit of deployment for services delivered using EC2.


Amazon Machine Images (AMI)

An Amazon Machine Image (AMI) provides the information required to launch an instance, which is a virtual server in the cloud. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. You can also launch instances from as many different AMIs as you need.

An AMI includes the following:

  • A template for the root volume for the instance (for example, an operating system, an application server, and applications)

  • Launch permissions that control which AWS accounts can use the AMI to launch instances

  • A block device mapping that specifies the volumes to attach to the instance when it's launched



- Select the default t2.micro and click on the Next: Configure Instance Details button


Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload.
























- Set Defaults and Click on Next: Add Storage button



Subnet : 1 subnet is always in exactly 1 Availability Zone (******) Exam tip


Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity. Since Spot instances are often available at a discount compared to On-Demand pricing, you can significantly reduce the cost of running your applications, grow your application’s compute capacity and throughput for the same budget, and enable new types of cloud computing applications.

In some zones you may see the console message: "There is no Spot capacity for instance type t2.micro in availability zone"

VPCs and Subnets

To get started with Amazon Virtual Private Cloud (Amazon VPC), you create a VPC and subnets. For a general overview of Amazon VPC, see What is Amazon VPC?.


VPC and Subnet Basics

A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. You can launch your AWS resources, such as Amazon EC2 instances, into your VPC.

When you create a VPC, you must specify a range of IPv4 addresses for the VPC in the form of a Classless Inter-Domain Routing (CIDR) block; for example, 10.0.0.0/16. This is the primary CIDR block for your VPC. For more information about CIDR notation, see RFC 4632.
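You can check what a CIDR block like the one in the example actually covers with Python's standard ipaddress module:

```python
import ipaddress

# What the primary CIDR block 10.0.0.0/16 from the example covers.
vpc = ipaddress.ip_network("10.0.0.0/16")
total = vpc.num_addresses        # 65,536 addresses in a /16
first, last = vpc[0], vpc[-1]    # 10.0.0.0 through 10.0.255.255

# A typical subnet carved out of that VPC range:
subnet = ipaddress.ip_network("10.0.1.0/24")
inside_vpc = subnet.subnet_of(vpc)
```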

























































- Set as default and Click on Next: Add Tags button



You can Add Amazon EBS Volume Types here.


Amazon EBS Volume Types

Amazon EBS provides the following volume types, which differ in performance characteristics and price, so that you can tailor your storage performance and cost to the needs of your applications. The volume types fall into two categories:

  • SSD-backed volumes optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS

  • HDD-backed volumes optimized for large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS




- Add as many tags as you need and click on the Next: Configure Security Group button







- Enter Security group Name and Description

- Add HTTP and HTTPS Types

- Click on Review and Launch Button



Security Groups for Your VPC

A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. If you don't specify a particular group at launch time, the instance is automatically assigned to the default security group for the VPC.

For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic. This section describes the basic things you need to know about security groups for your VPC and their rules.

You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC. For more information about the differences between security groups and network ACLs, see Comparison of Security Groups and Network ACLs.




- Review your configurations and Click on Launch button




- Select 'Create a new key pair' in dropdown menu

- Enter Name the Key pair name

- Click on Download Key Pair

- Click on Launch Instance




- Click on View Instance button




- Now your instance is running



You can see your instance details here.







Now I am going to access my instance and create my web page.

Open your Terminal (Mac) or Console window (Windows),

and navigate to the folder where the downloaded key pair file is.




EC2KeyPair.pem.txt is the one I just downloaded.

MyEC2KeyPair.pem.txt is an old one that I've used before.


Change the permissions of the EC2KeyPair.pem.txt file:


chmod 400 EC2KeyPair.pem.txt
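If you prefer to script it, the same permission change can be made with Python's os module. Mode 0o400 means read-only for the owner and no access for anyone else, which is what ssh expects of a private key file.

```python
import os
import stat
import tempfile

# Demonstrate chmod 400 on a throwaway file standing in for the key.
fd, path = tempfile.mkstemp(suffix=".pem")
os.close(fd)

os.chmod(path, 0o400)  # same effect as `chmod 400 EC2KeyPair.pem.txt`

mode = stat.S_IMODE(os.stat(path).st_mode)  # 0o400: owner read-only
os.remove(path)
```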




Type ssh ec2-user@'your IPv4 Public IP' -i EC2KeyPair.pem.txt

Type yes

and then you can log in to your Amazon Linux Instance


Type sudo su 

You are now with super user permission.




Type yum update -y to update the operating system




Type yum install httpd -y to install Apache Server



navigate to Web root page


cd /var/www/html



There is no file in the folder now.


I am going to create my web page now.


Type nano index.html (or vi index.html)


I have created the web page as below to display my blog.


<html>
<body>

<h1> iframe - Changsoo's Blog - </h1>

<iframe id="blog"
    title="Changsoo's Blog"
    width="100%"
    height="100%"
    src="http://coronasdk.tistory.com">
</iframe>

</body>
</html>



Now I can see the index.html file in the folder.

I will start my Apache server.


service httpd start




Now enter 34.228.166.148 in URL bar in your browser then you can see the page below.






You can type my Public DNS (IPv4) to get the page in your browser as well.


http://ec2-34-228-166-148.compute-1.amazonaws.com/




Now I have my Amazon Linux server (EC2 instance) and public web page.






Termination Protection is turned off by default; you must turn it on yourself.


If you want to terminate the instance then


1. Actions - Instance Settings - Change Termination Protection



2. Click on Yes, Enable button.




3. Actions - Instance State - Terminate




On an EBS-backed instance, the default action is for the root EBS volume to be deleted when the instance is terminated.

EBS root volumes of your DEFAULT AMIs cannot be encrypted.

You can also use a third-party tool (such as BitLocker) to encrypt the root volume, or this can be done when creating AMIs (lab to follow) in the AWS console or using the API.





EC2 (Elastic Compute Cloud)



What is EC2?


  • Provides resizable compute capacity in the cloud
  • Designed to make web-scale cloud computing easier
  • A true virtual computing environment
  • Launch instances with a variety of operating systems
  • Run as many or as few systems as you desire




Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.

Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use. Amazon EC2 provides developers the tools to build failure resilient applications and isolate them from common failure scenarios.


* EC2 Options (***)


On Demand Instances - Pay for compute capacity by the hour with no long-term commitments or upfront payments

With On-Demand instances, you pay for compute capacity by the hour with no long-term commitments or upfront payments. You can increase or decrease your compute capacity depending on the demands of your application and only pay the specified hourly rate for the instances you use. 

On-Demand instances are recommended for:

  • Users that prefer the low cost and flexibility of Amazon EC2 without any up-front payment or long-term commitment
  • Applications with short-term, spiky, or unpredictable workloads that cannot be interrupted
  • Applications being developed or tested on Amazon EC2 for the first time


Reserved Instances- Provide you with a significant discount (up to 75%) compared to On-Demand Instance pricing

Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing. In addition, when Reserved Instances are assigned to a specific Availability Zone, they provide a capacity reservation, giving you additional confidence in your ability to launch instances when you need them.

For applications that have steady state or predictable usage, Reserved Instances can provide significant savings compared to using On-Demand instances. See How to Purchase Reserved Instances for more information.

Reserved Instances are recommended for:

  • Applications with steady state usage
  • Applications that may require reserved capacity
  • Customers that can commit to using EC2 over a 1 or 3 year term to reduce their total computing costs

Spot Instances - Purchase compute capacity with no upfront commitment and at hourly rates usually lower than the On-Demand rate

Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity for up to 90% off the On-Demand price. Learn More.

Spot instances are recommended for:

  • Applications that have flexible start and end times
  • Applications that are only feasible at very low compute prices
  • Users with urgent computing needs for large amounts of additional capacity
- Remember with spot instances;
: If you terminate the instance, you pay for the hour
: If AWS terminates the spot instance, you get the hour it was terminated in for free
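Those two billing rules can be written down as a tiny function; this is just an illustration of the rules above, not an AWS billing API.

```python
def spot_billable_hours(full_hours: int, terminated_by: str) -> int:
    # Full hours already completed are always billed. The partial final
    # hour is billed if *you* terminate, and free if *AWS* terminates.
    if terminated_by == "user":
        return full_hours + 1
    elif terminated_by == "aws":
        return full_hours
    raise ValueError("terminated_by must be 'user' or 'aws'")
```

So after three full hours plus a partial fourth, you owe four hours if you stopped the instance, but only three if AWS reclaimed the capacity.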


Dedicated Hosts Instances


A Dedicated Host is a physical EC2 server dedicated for your use. Dedicated Hosts can help you reduce costs by allowing you to use your existing server-bound software licenses, including Windows Server, SQL Server, and SUSE Linux Enterprise Server (subject to your license terms), and can also help you meet compliance requirements. Learn more.

  • Can be purchased On-Demand (hourly).
  • Can be purchased as a Reservation for up to 70% off the On-Demand price.


* EC2 Instance Types (*****)


- General Purpose

T2 : Low Cost EC2 Instances with Burstable Performance.

      T2 instances are Burstable Performance Instances that provide a baseline level of CPU performance with the ability to burst above the baseline. The baseline performance and ability to burst are governed by CPU Credits. Each T2 instance receives CPU Credits continuously at a set rate depending on the instance size.  T2 instances accrue CPU Credits when they are idle, and use CPU credits when they are active.  T2 instances are a good choice for workloads that don’t use the full CPU often or consistently, but occasionally need to burst (e.g. web servers, developer environments and databases). For more information see Burstable Performance Instances.


M4 : M4 instances are the latest generation of General Purpose Instances. This family provides a balance of compute, memory, and network resources, and it is a good choice for many applications.


M3 : This family includes the M3 instance types and provides a balance of compute, memory, and network resources, and it is a good choice for many applications.


- Compute Optimized

C4 : Highest Compute Performance on Amazon EC2.

       C4 instances are the latest generation of Compute-optimized instances, featuring the highest performing processors and the lowest price/compute performance in EC2.


C3 Features:

  • High Frequency Intel Xeon E5-2680 v2 (Ivy Bridge) Processors
  • Support for Enhanced Networking
  • Support for clustering
  • SSD-backed instance storage


- Memory Optimized


X1 : X1 instances are optimized for large-scale, enterprise-class, in-memory applications and have the lowest price per GiB of RAM among Amazon EC2 instance types.


R4 : R4 instances are optimized for memory-intensive applications and offer better price per GiB of RAM than R3.


R3 : R3 instances are optimized for memory-intensive applications and offer lower price per GiB of RAM.


- Accelerated Computing


P2 : P2 instances are intended for general-purpose GPU compute applications.


G3 : G3 instances are optimized for graphics-intensive applications.


F1 : F1 instances offer customizable hardware acceleration with field programmable gate arrays (FPGAs).



- Storage Optimized


I3 : High I/O Instances

This family includes the High Storage Instances that provide Non-Volatile Memory Express (NVMe) SSD backed instance storage optimized for low latency, very high random I/O performance, high sequential read throughput and provide high IOPS at a low cost.


D2 : D2 instances feature up to 48 TB of HDD-based local storage, deliver high disk throughput, and offer the lowest price per disk throughput performance on Amazon EC2.






Prerequisite concept


What is EBS?


Amazon Elastic Block Store (EBS)


Amazon Elastic Block Store is an AWS block storage system that is best used for storing persistent data. Often incorrectly referred to as Elastic Block Storage, Amazon EBS provides highly available block level storage volumes for use with Amazon EC2 instances.



* Amazon EBS Volume Types


- General Purpose SSD (GP2)

- Provisioned IOPS SSD (IO1)

- Throughput Optimized HDD (ST1)

- Cold HDD (SC1)

- Magnetic (Standard) : can boot OS, Lowest cost per gigabyte




- EBS Consists of;

: SSD, General Purpose - GP2 - (Up to 10,000 IOPS)

: SSD, Provisioned IOPS - IO1 - (More than 10,000 IOPS)

: HDD, Throughput Optimized - ST1 - frequently accessed workloads

: HDD, Cold - SC1 - less frequently accessed data.

: HDD, Magnetic - Standard - cheap, infrequently accessed storage


- You cannot mount 1 EBS volume to multiple EC2 instances, instead use EFS.
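The list above condensed into a lookup you might keep handy for the exam (the lowercase codes are the API names the console shows):

```python
# EBS volume types from the list above, keyed by their API codes.
EBS_VOLUMES = {
    "gp2": ("SSD", "General Purpose, up to 10,000 IOPS"),
    "io1": ("SSD", "Provisioned IOPS, more than 10,000 IOPS"),
    "st1": ("HDD", "Throughput Optimized, frequently accessed workloads"),
    "sc1": ("HDD", "Cold, less frequently accessed data"),
    "standard": ("HDD", "Magnetic, cheap, infrequently accessed storage"),
}

def needs_provisioned_iops(required_iops: int) -> bool:
    # Per the figures above, beyond 10,000 IOPS you need io1.
    return required_iops > 10_000
```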


* IOPS 


Input/output operations per second (IOPS, pronounced eye-ops) is an input/output performance measurement used to characterize computer storage devices like hard disk drives (HDD), solid state drives (SSD), and storage area networks (SAN). Frequently mischaracterized as a 'benchmark', IOPS numbers published by storage device manufacturers do not relate to real-world application performance.[1][2]


IOPS (Input/Output Operations Per Second) is a performance measurement unit used to benchmark computer storage devices such as HDDs, SSDs, and SANs. IOPS is usually measured with benchmark programs such as Intel's Iometer.


Measured IOPS values differ by benchmark program. Specifically, they depend on whether access is random or sequential, the number of threads and the queue depth used by the benchmark, the data block size, the ratio of read to write commands, and many other variables. Commonly reported figures include total IOPS, random-access read IOPS, random-access write IOPS, sequential-access read IOPS, and sequential-access write IOPS.
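The reason IOPS alone can mislead is that throughput is simply IOPS multiplied by the I/O block size; a quick calculation shows how much the block size matters:

```python
def throughput_mib_per_s(iops: int, block_size_kib: int) -> float:
    # Throughput (MiB/s) = IOPS x block size; 1 MiB = 1024 KiB.
    return iops * block_size_kib / 1024

small = throughput_mib_per_s(10_000, 4)    # 4 KiB blocks: ~39 MiB/s
large = throughput_mib_per_s(10_000, 128)  # 128 KiB blocks: 1250 MiB/s
```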


* SSD 


An SSD is a device that stores data using semiconductors. Compared to a hard disk drive, it is faster, has less mechanical latency and a lower failure rate, produces less heat and noise, and can be made smaller and lighter.

SSD is an acronym for Solid State Drive. It behaves much like a hard disk drive (HDD), but unlike the mechanical HDD it stores data in semiconductor memory. With random access it can read and write data at high speed with no seek time, while mechanical latency and failure rates are markedly lower. Data is also not damaged by external shocks; heat, noise, and power consumption are low; and the drive can be miniaturized and made lightweight.

SSDs use either non-volatile NAND flash memory or volatile RAM-type DRAM. The flash type is slower than the RAM type but faster than an HDD, and because the memory is non-volatile, data is not lost even in a sudden power outage. The DRAM type offers faster access but has drawbacks in form factor, price, and volatility, so flash-memory-based SSDs, which offer better data retention and safety, are mainly used.

With the development of large-capacity SSDs, they can now be used in laptop and desktop PCs as well.

[Naver Encyclopedia] SSD [Solid State Drive] (Doosan Encyclopedia)


AWS AMI (Amazon Machine Images)

An Amazon Machine Image (AMI) provides the information required to launch an instance, which is a virtual server in the cloud. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. You can also launch instances from as many different AMIs as you need.




* Instance Store vs. Amazon EBS


I’m not sure whether to store the data associated with my Amazon EC2 instance in instance store or in an attached Amazon Elastic Block Store (Amazon EBS) volume. Which option is best for me?

Some Amazon EC2 instance types come with a form of directly attached, block-device storage known as the instance store. The instance store is ideal for temporary storage, because the data stored in instance store volumes is not persistent through instance stops, terminations, or hardware failures. You can find more detailed information about the instance store at Amazon EC2 Instance Store.

For data you want to retain longer-term, or if you need to encrypt the data, we recommend using EBS volumes instead. EBS volumes preserve their data through instance stops and terminations, can be easily backed up with EBS snapshots, can be removed from instances and reattached to another, and support full-volume encryption. For more detailed information about EBS volumes, see Features of Amazon EBS.


* Instance Store 

Physically attached to the host computer

Type and amount differs by instance type

Data dependent upon instance lifecycle

Instance store data persists if:

- The OS in the instance is rebooted

- The instance is restarted


Instance store data is lost when:

- An underlying instance drive fails

- An EBS-backed instance is stopped

- The instance is terminated



* Amazon EBS


Persistent block level storage volumes

Magnetic

General Purpose(SSD)

Provisioned IOPS(SSD)

data independent of instance lifecycle







AWS IAM


Amazon Identity and Access Management (IAM) is an implicit service, providing the authentication infrastructure used to authenticate access to the various services.





What Is IAM?

AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources for your users. You use IAM to control who can use your AWS resources (authentication) and what resources they can use and in what ways (authorization).


AWS Identity and Access Management (IAM)







Identity Federation : Facebook, Active Directory, Google account etc.

PCI DSS Compliance (Payment Card Industry -PCI- Data Security Standard -DSS- Compliance)

Multi-Factor Authentication - ID+PW plus an MFA device code (e.g. Google Authenticator)

Password Policy


IAM Policies

: A document that defines one or more permissions

: Can be attached to users, groups and roles

: Written in JavaScript Object Notation (JSON)

: Select from the pre-defined AWS list of policies or create your own policy
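For reference, here is what a minimal policy document looks like, parsed with Python's json module. The Version/Statement/Effect/Action/Resource fields are the standard policy grammar; the bucket name is only an example.

```python
import json

# A minimal IAM policy: allow reading objects from one example bucket.
policy_json = """
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
"""

policy = json.loads(policy_json)
statement = policy["Statement"][0]
```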






Concepts to know




Amazon S3

From Wikipedia, the free encyclopedia


Amazon S3 (Simple Storage Service) is a web service offered by Amazon Web Services. Amazon S3 provides storage through web services interfaces (REST, SOAP, and BitTorrent).[1] Amazon launched S3 as its fifth publicly available web service[citation needed], in the United States in March 2006[2] and in Europe in November 2007.[3]

Amazon says that S3 uses the same scalable storage infrastructure that Amazon.com uses to run its own global e-commerce network.[4]

Amazon S3 is reported to store more than 2 trillion objects as of April 2013.[5] This is up from 10 billion as of October 2007,[6] 14 billion in January 2008, 29 billion in October 2008,[7] 52 billion in March 2009,[8] 64 billion objects in August 2009,[9] and 102 billion objects in March 2010.[10] S3 uses include web hosting, image hosting, and storage for backup systems. S3's service-level agreement (SLA) guarantees 99.9% monthly uptime,[11] that is, not more than about 43 minutes of downtime per month.[12]
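The "~43 minutes" figure follows directly from the 99.9% monthly SLA; a quick sanity check, assuming a 30-day month:

```python
# 99.9% monthly uptime leaves 0.1% of the month as allowed downtime.
minutes_per_month = 30 * 24 * 60           # 43,200 minutes in a 30-day month
allowed_downtime = minutes_per_month * 0.001

print(allowed_downtime)  # 43.2 minutes, i.e. "not more than ~43 minutes"
```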



SAML

From Wikipedia, the free encyclopedia

SAML (Security Assertion Markup Language) is an XML-based, open-standard data format for exchanging authentication and authorization data between an identity provider and a service provider. SAML is a product of the OASIS Security Services Technical Committee. SAML dates from 2001; the most recent major update was published in 2005, but protocol enhancements have steadily been added through additional, optional standards.

The single most important requirement that SAML addresses is web browser single sign-on (SSO). Single sign-on is common at the intranet level (using cookies, for example), but extending it beyond the intranet has been problematic and has led to a proliferation of non-interoperable proprietary technologies. (A more recent approach to the browser SSO problem is the OpenID Connect protocol.)[4]







About SAML 2.0-based Federation

AWS supports identity federation with SAML 2.0 (Security Assertion Markup Language 2.0), an open standard that many identity providers (IdPs) use. This feature enables federated single sign-on (SSO), so users can log into the AWS Management Console or call the AWS APIs without you having to create an IAM user for everyone in your organization. By using SAML, you can simplify the process of configuring federation with AWS, because you can use the IdP's service instead of writing custom identity proxy code.





Identity Broker

Federating users by creating a custom identity broker application


If your identity store is not compatible with SAML 2.0, then you can build a custom identity broker application to perform a similar function. The broker application authenticates users, requests temporary credentials for users from AWS, and then provides them to the user to access AWS resources.
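The broker's responsibilities can be sketched with stub functions; every name and value below is hypothetical (a real broker would check your actual identity store and call the AWS STS API for real temporary credentials):

```python
# Hypothetical sketch of a custom identity broker. A real implementation
# would validate the user against your identity store (e.g. LDAP) and call
# AWS STS for the temporary credentials; these stubs just model the flow.

def authenticate(username, password):
    # Stand-in for a lookup against the corporate identity store.
    directory = {"alice": "s3cret"}
    return directory.get(username) == password

def issue_temporary_credentials(username):
    # Stand-in for an STS call; real credentials come from AWS, not from us.
    return {
        "AccessKeyId": "ASIAEXAMPLE",
        "SecretAccessKey": "example-secret",
        "SessionToken": "example-token",
    }

def broker_login(username, password):
    # Authenticate first, then hand temporary credentials to the user.
    if not authenticate(username, password):
        raise PermissionError("authentication failed")
    return issue_temporary_credentials(username)

creds = broker_login("alice", "s3cret")
print(creds["AccessKeyId"])
```

The key design point: the user never receives long-lived IAM credentials; only the broker talks to STS, and the user gets short-lived keys for the session.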




AWS STS (Security Token Service)


The AWS Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). This guide provides descriptions of the STS API. For more detailed information about using this service, go to Temporary Security Credentials.




ADFS


Active Directory Federation Services (ADFS) is a software component developed by Microsoft that can be installed on Windows Server operating systems to provide users with single sign-on access to systems and applications located across organizational boundaries.






Web Identity Federation with Mobile Applications


Introducing Web Identity Federation

AWS Security Token Service (STS) now offers Web Identity Federation (WIF). This allows a developer to federate their application from Facebook, Google, or Amazon with their AWS account, allowing their end users to authenticate with one of these Identity Providers (IdP) and receive temporary AWS credentials. In combination with Policy Variables, WIF allows the developer to restrict end users' access to a subset of AWS resources within their account.

To help you understand how web identity federation works, you can use the Web Identity Federation Playground. This interactive website lets you walk through the process of authenticating via Login with Amazon, Facebook, or Google, getting temporary security credentials, and then using those credentials to make a request to AWS.

This article shows how WIF can be used to give many users a "Personal File Store" all housed within a single Amazon S3 bucket without the need for any backend infrastructure. It is adapted from a previous article which used a custom Token Vending Machine hosted in AWS Elastic Beanstalk.




The AWS Web Identity Federation Playground


We added support for Amazon, Facebook, and Google identity federation to AWS IAM earlier this year. This powerful and important feature gives you the ability to grant temporary security credentials to users managed outside of AWS.

In order to help you learn more about how this feature works, and to make it easier for you to test and debug your applications and websites that make use of it, we have launched the Web Identity Federation Playground:




IAM (Identity Access Management)


Allows you to manage users and their level of access to the AWS Console. It is important to understand IAM and how it works, both for the exam and for administering a company's AWS account in real life.


* What does IAM give you?

- Centralised control of your AWS account

- Shared Access to your AWS account

- Granular Permissions

- Identity Federation (including Active Directory, Facebook, LinkedIn, etc.)

- Multifactor Authentication

- Provide temporary access for users/devices and services where necessary

- Allows you to set up your own password rotation policy

- Integrates with many different AWS services

- Supports PCI DSS Compliance


* Critical Terms

Users - End users

Groups - A collection of users under one set of permissions

Roles - You create roles and can then assign them to AWS resources

Policies - A document that defines one (or more) permissions


- AWS Identity and Access Management(IAM) allows you to securely control access to AWS services and resources for your users

- Policies which are written in JSON allow you to define granular access to AWS resources

- Users are the people or systems that use your AWS resources, like admins, end users, or systems that need permissions to access your AWS data

- Groups are a collection of users that all inherit the same set of permissions and can be used to reduce your user management overhead.

- An IAM role can be assumed by anyone who needs it, and it does not have an access key or password associated with it.

- AWS also has a list of IAM best practices to ensure that your environment is secure and safe




* Security Token Service (STS)

Grants users limited and temporary access to AWS resources.

Users can come from three sources


- Federation (typically Active Directory)

  : Uses Security Assertion Markup Language (SAML)

  : Grants temporary access based on the user's Active Directory credentials. The user does not need to exist in IAM

  : Single sign-on allows users to log in to the AWS console without being assigned IAM credentials


- Federation with Mobile Apps

  : Use Facebook/Amazon/Google or other OpenID providers to log in.

  

- Cross Account Access

  : Lets users from one AWS account access resources in another
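Whichever source the user comes from, the credentials STS hands back are temporary and carry an expiration a client should check before use. A minimal sketch (the values are illustrative, not real credentials):

```python
from datetime import datetime, timedelta, timezone

# Temporary credentials from STS include an Expiration timestamp.
# These values are illustrative stand-ins, not real credentials.
credentials = {
    "AccessKeyId": "ASIAEXAMPLE",
    "Expiration": datetime.now(timezone.utc) + timedelta(hours=1),
}

def is_expired(creds, now=None):
    # Compare the current time against the credential expiry.
    now = now or datetime.now(timezone.utc)
    return now >= creds["Expiration"]

print(is_expired(credentials))  # False: still within the one-hour window
```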

  


* Understanding key Terms


- Federation : combining or joining a list of users in one domain (such as IAM) with a list of users in another domain (such as Active Directory, Facebook etc.)


- Identity Broker : a service that allows you to take an identity from point A and join it (federate it) to point B. (*****)


- Identity Store : Services like Active Directory, Facebook, Google etc.


- Identities : a user of a service like Facebook etc.





Recap


* IAM consists of the following

- Users

- Groups (A way to group our users and apply policies to them collectively)

- Roles

- Policy Documents


* Summary

- IAM is universal. It does not apply to regions at this time.

- The "root account" is simply the account created when you first set up your AWS account. It has complete admin access.

- New Users have NO permissions when first created

- New Users are assigned Access Key ID & Secret Access Keys when first created

- These are not the same as a password, and you cannot use the Access Key ID & Secret Access Key to log in to the console. You can, however, use them to access AWS via the APIs and the command line.

- You only get to view these once. If you lose them, you have to regenerate them, so save them in a secure location.
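Once saved, the SDKs and CLI can pick the keys up from the standard `AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY` environment variables. A small sketch with placeholder values:

```python
import os

# The AWS CLI and SDKs read these standard environment variables.
# The values set here are placeholders for the sake of the example.
os.environ["AWS_ACCESS_KEY_ID"] = "AKIAEXAMPLE"
os.environ["AWS_SECRET_ACCESS_KEY"] = "example-secret"

def have_cli_credentials():
    # A quick check that both variables a CLI/SDK call would need are set.
    return ("AWS_ACCESS_KEY_ID" in os.environ
            and "AWS_SECRET_ACCESS_KEY" in os.environ)

print(have_cli_credentials())  # True once both variables are set
```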

- Always set up Multifactor Authentication on your root account.

- You can create and customise your own password rotation policies.





Quiz


IAM 

: IAM allows you to manage users, groups and roles and their corresponding level of access to the AWS Platform

: Centralised control of your AWS account

: Integrates with existing Active Directory accounts, allowing single sign-on

: Fine-grained access control to AWS resources


* Web Identity Federation : Allow users to use their social media account to gain temporary access to the AWS platform


* AssumeRoleWithWebIdentity : The API call used to obtain temporary security credentials when authenticating via Web Identity Federation


* AssumeRoleWithSAML : The API call used to request temporary security credentials from the AWS platform when federating with Active Directory


* Steps performed when using Active Directory to authenticate to AWS

1) The user navigates to the ADFS web server. 2) The user enters their single sign-on credentials. 3) The user's web browser receives a SAML assertion from the AD server. 4) The browser posts the SAML assertion to the AWS SAML endpoint, and the AssumeRoleWithSAML API request is used to request temporary security credentials. 5) The user is then able to access the AWS Console.


* SAML 

: Security Assertion Markup Language

: AWS sign-in endpoint for SAML is https://signin.aws.amazon.com/saml


* Web Identity Federation steps

1) A user authenticates with Facebook first and is given an ID token by Facebook. 2) The AssumeRoleWithWebIdentity API call is then used in conjunction with the ID token. 3) The user is then granted temporary security credentials.
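Those steps can be sketched with stubs; every function and the role ARN below are hypothetical stand-ins for the IdP SDK and the real AssumeRoleWithWebIdentity API:

```python
# Sketch of the web identity federation flow. All functions are stubs
# standing in for the real calls (the IdP's login SDK and the STS
# AssumeRoleWithWebIdentity API); the role ARN is a placeholder.

def login_with_idp(user):
    # Step 1: the user authenticates with Facebook/Google/Amazon
    # and receives an ID token from the provider.
    return f"id-token-for-{user}"

def assume_role_with_web_identity(role_arn, id_token):
    # Step 2: the ID token is exchanged for temporary AWS credentials
    # scoped to the given role.
    return {
        "AccessKeyId": "ASIAEXAMPLE",
        "SecretAccessKey": "example-secret",
        "SessionToken": f"session-for-{id_token}",
    }

# Step 3: the user now holds temporary security credentials.
token = login_with_idp("alice")
creds = assume_role_with_web_identity(
    "arn:aws:iam::123456789012:role/AppUser", token)
print(creds["SessionToken"])
```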





It is Free !!!!!!


Download 'Lottery Numbers' Mobile app from Amazon App Store.


Or you can view the app on your PC or laptop.


Go to amazon.com and search by 'Lottery Numbers'.






Get Jackpot numbers for Mega Million and/or Powerball.





AWS Certifications




Exams, ordered from easiest to hardest:


Developer Associate - Solutions Architect Associate - SysOps Administrator Associate - Security Specialty - Big Data Specialty - DevOps Pro - Advanced Networking Specialty - Solutions Architect Professional



AWS Certified Developer - Associate






AWS Platform





* AWS Global Infrastructure


- Regions

: An independent collection of AWS computing resources in a defined geography

: A geographical area consisting of two or more Availability Zones


- Availability Zone 

: A data center; a distinct location within a geographic area designed to provide high availability to a specific geography

: Distinct locations within an AWS region that are engineered to be isolated from failures


- Edge Location : Content Delivery Network (Edge Network Location)




* Networking & Content Delivery


- VPC : Virtual Data Center (*****), Amazon Virtual Private Cloud

            How to build a VPC (*****)

- Route53 : (***) Amazon's DNS service; register domain names through Route53

            Amazon's highly scalable DNS service 

- CloudFront : CDN - Content Delivery Network. Amazon CloudFront is a global content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to end users with low latency and high transfer speeds. CloudFront is integrated with AWS - both physical locations that are directly connected to the AWS global infrastructure, and software that works seamlessly with services such as AWS Shield for DDoS mitigation, Amazon S3, Elastic Load Balancing, or Amazon EC2 as origins for your applications, and AWS Lambda to run custom code closer to your end users.

You can get started with CloudFront in minutes, using AWS tools you're already familiar with: APIs, the AWS Management Console, AWS CloudFormation, the CLI, and SDKs. CloudFront offers a simple, pay-as-you-go pricing model with no upfront fees or long-term commitments, and support for CloudFront is included in your existing AWS Support subscription.

- Direct Connect : dedicated line (***)

- EC2 : Elastic Compute Cloud, virtual machines

- EC2 Container Service (X) - Not in Dev Exam

- Elastic Beanstalk : Simple deployment of web applications/services. AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.

You simply upload your code, and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.

- Lambda : 2014, serverless; just upload your code. Maybe not in the Dev exam yet, but it will come soon

- Lightsail : 2016, simplified EC2.



* Storage


- S3 : Simple Storage Service (*****), object-based (file-based) storage; a place to put objects

- Glacier : Data archiving/backup 

- EFS : Elastic File System. Provides simple, scalable file storage for use with Amazon EC2 instances in the AWS cloud. Amazon EFS is easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it.

- Storage Gateway : Not in Dev Exam, in Sysop exam



* Databases


- RDS : MySQL, PostgreSQL, Aurora, etc. Not important in the Dev exam. 

- DynamoDB : (*****) Non-relational (NoSQL) database

- Redshift : Data warehousing service (***), high speed, SQL/BI (Business Intelligence)

- ElastiCache : Caching data in the cloud. Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory data stores instead of relying entirely on slower disk-based databases.





* Migration


- Snowball : import/export (Terabyte of data into the Cloud etc.) - Snowball Edge - Not in Dev exam

- DMS : Database Migration Service

- Server Migration Service (SMS) - Not in Dev Exam



* Analytics


- Athena : Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

- EMR : Amazon EMR is a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. You can also run popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink in Amazon EMR, and interact with data in other AWS data stores such as Amazon S3 and Amazon DynamoDB.

Amazon EMR securely and reliably handles a broad set of big data use cases, including log analysis, web indexing, data transformation (ETL), machine learning, financial analysis, scientific simulation, and bioinformatics.

- Cloud Search - Not in Dev Exam

- Elastic Search - Not in Dev Exam

- Kinesis - Not in Dev Exam

- Data Pipeline - Not in Dev Exam, 

- Quick Sight - Not in Dev Exam - Business Analytics tool



* Security & Identity


- IAM : (************), : Identity and Access Management, AWS Identity and Access Management (IAM) enables you to securely control access to AWS services and resources for your users. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.


- Inspector : Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity. These findings can be reviewed directly or as part of detailed assessment reports available via the Amazon Inspector console or API.


- Certificate Manager : AWS Certificate Manager is a service that lets you easily provision, manage, and deploy SSL/TLS (Secure Sockets Layer/Transport Layer Security) certificates for use with AWS services. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the Internet. AWS Certificate Manager removes the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates. With AWS Certificate Manager, you can quickly request a certificate, deploy it on AWS resources such as Elastic Load Balancers, Amazon CloudFront distributions, or APIs on API Gateway, and let AWS Certificate Manager handle certificate renewals. SSL/TLS certificates provisioned through AWS Certificate Manager are free; you pay only for the AWS resources you create to run your application.


- Directory Service : AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as AWS Microsoft AD, enables your directory-aware workloads and AWS resources to use a managed Active Directory in the AWS cloud. The Microsoft AD service is built on actual Microsoft Active Directory and does not require you to synchronize or replicate data from your existing Active Directory to the cloud. You can use standard Active Directory administration tools and take advantage of built-in features such as Group Policy, trusts, and single sign-on. With Microsoft AD, you can easily join Amazon EC2 and Amazon RDS for SQL Server instances to a domain, and use AWS enterprise IT applications such as Amazon WorkSpaces with Active Directory users and groups.


- WAF : Web Application Firewall. AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by letting you define customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, as well as rules designed for your specific application. New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns. AWS WAF also includes a full-featured API that you can use to automate the creation, deployment, and maintenance of web security rules.

- Artifact (Compliance Reports) : An audit and compliance portal that provides on-demand access to download AWS compliance reports and manage select agreements.

AWS Artifact provides on-demand access to AWS security and compliance reports and select online agreements. Reports available in AWS Artifact include Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, and certifications from accreditation bodies and compliance regulators across geographies that validate the implementation and operating effectiveness of AWS security controls. Agreements available in AWS Artifact include the Business Associate Addendum (BAA) and the Nondisclosure Agreement (NDA).



* Management Tools (***)


- CloudWatch : Monitoring. Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, Amazon DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services, and any log files your applications generate. You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health, and use these insights to react and keep your applications running smoothly.


- CloudFormation : Infrastructure as code. Not in the Dev exam


- CloudTrail : AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides an event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.


- OpsWorks : AWS OpsWorks is a configuration management service that uses Chef, an automation platform that treats server configurations as code. OpsWorks uses Chef to automate how servers are configured, deployed, and managed across your Amazon Elastic Compute Cloud (Amazon EC2) instances or on-premises compute environments. OpsWorks has two offerings: AWS OpsWorks for Chef Automate and AWS OpsWorks Stacks.


- Config (Manager) : Auditing your environment. AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and automatically evaluates recorded configurations against desired configurations. With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This simplifies compliance auditing, security analysis, change management, and operational troubleshooting.


- Service Catalog : Not in the Dev exam. AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. AWS Service Catalog allows you to centrally manage commonly deployed IT services, helps you achieve consistent governance and meet compliance requirements, and enables users to quickly deploy only the approved IT services they need.


- Trusted Advisor : Optimization. An online resource that helps you reduce cost, increase performance, and improve security by optimizing your AWS environment; Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices.


- EC2 Systems Manager : Amazon EC2 Systems Manager is a management service that helps you automatically collect software inventory, apply OS patches, create system images, and configure Windows and Linux operating systems. These capabilities help you define and track system configurations, prevent drift, and maintain software compliance of your EC2 and on-premises configurations. By providing a management approach designed for the scale and agility of the cloud but extending into your on-premises data center, the service makes it easier to seamlessly bridge your existing infrastructure with AWS.

EC2 Systems Manager is easy to use. Simply access it from the EC2 Management Console, select the instances you want to manage, and define the management tasks you want to perform. EC2 Systems Manager is available at no cost to manage both your EC2 and on-premises resources.




* Application Services


- Step Functions : Not in the Dev exam. AWS Step Functions makes it easy to coordinate the components of distributed applications and microservices using visual workflows. Building applications from individual components that each perform a discrete function lets you scale and change applications quickly. Step Functions is a reliable way to coordinate components and step through the functions of your application. It provides a graphical console to arrange and visualize the components of your application as a series of steps, which makes it simple to build and run multi-step applications. Step Functions automatically triggers and tracks each step, and retries when there are errors, so your application executes in order and as expected. Step Functions logs the state of each step, so when things do go wrong, you can diagnose and debug problems quickly. You can change and add steps without writing code, so you can easily evolve your application and innovate faster.

AWS Step Functions is part of the AWS serverless platform and makes it simple to orchestrate AWS Lambda functions for serverless applications. You can also use Step Functions to orchestrate microservices that use compute resources such as Amazon EC2 and Amazon ECS.

AWS Step Functions manages the operations and underlying infrastructure for you, to help ensure your application runs at any scale.

- SWF (*****) : Simple Work Flow. Amazon SWF helps developers build, run, and scale background jobs that have parallel or sequential steps. You can think of Amazon SWF as a fully-managed state tracker and task coordinator in the Cloud.

If your app's steps take more than 500 milliseconds to complete, you need to track the state of processing, and you need to recover or retry if a task fails, Amazon SWF can help you.

- API Gateway : Not in the Dev exam. Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. With a few clicks in the AWS Management Console, you can create an API that acts as a "front door" for applications to access data, business logic, or functionality from your backend services, such as workloads running on Amazon Elastic Compute Cloud (EC2), code running on AWS Lambda, or any web application. Amazon API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. Amazon API Gateway has no minimum fees or startup costs; you pay only for the API calls you receive and the amount of data transferred out.


- AppStream : Not in the Dev exam. Amazon AppStream 2.0 is a secure, fully managed application streaming service that lets you stream desktop applications from AWS to any device running a web browser, without rewriting them. It provides users with instant access to the applications they need, with a responsive, fluid user experience on the device of their choice.


- Elastic Transcoder : Not in the Dev exam. Amazon Elastic Transcoder is media transcoding in the cloud. It is designed to be a highly scalable, easy-to-use, and cost-effective way for developers and businesses to convert (or "transcode") media files from their source format into versions that will play back on devices like smartphones, tablets, and PCs.


* Developer Tools (Not in Dev Exam)

- CodeCommit : AWS CodeCommit is a fully managed source control service that makes it easy for companies to host secure and highly scalable private Git repositories. CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure. You can use CodeCommit to securely store anything from source code to binaries, and it works seamlessly with your existing Git tools.


- CodeBuild : AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don't need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. You can get started quickly by using prepackaged build environments, or you can create custom build environments that use your own build tools. With CodeBuild, you are charged by the minute for the compute resources you use.


- CodeDeploy : AWS CodeDeploy is a service that automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications. You can use AWS CodeDeploy to automate software deployments, eliminating the need for error-prone manual operations, and the service scales with your infrastructure so you can easily deploy to one instance or thousands.


- CodePipeline : AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define, enabling you to rapidly and reliably deliver features and updates. You can easily build an end-to-end solution by using pre-built plugins for popular third-party services like GitHub, or by integrating your own custom plugins into any stage of your release process. AWS CodePipeline is billed pay-as-you-go, with no upfront fees or long-term commitments.



* Mobile Services


- Mobile Hub


- Cognito : Not in the Dev exam. Amazon Cognito lets you easily add user sign-up and sign-in to your mobile and web apps. With Amazon Cognito, you also have the option to authenticate users through social identity providers such as Facebook, Twitter, or Amazon, with SAML identity solutions, or by using your own identity system. In addition, Amazon Cognito enables you to save data locally on users' devices, allowing your applications to work even when the devices are offline. You can then synchronize data across users' devices so that their app experience remains consistent regardless of the device they use.

With Amazon Cognito, you can focus on creating great app experiences instead of worrying about building, securing, and scaling a solution to handle user management, authentication, and sync across devices.

- Device Farm : For testing apps. AWS Device Farm is an app testing service that lets you test and interact with your Android, iOS, and web apps on many devices at once, or reproduce issues on a device in real time. View video, screenshots, logs, and performance data to pinpoint and fix issues before shipping your app. 


- Mobile Analytics : 

- Pinpoint : Not in the Dev exam. Amazon Pinpoint helps you increase user engagement by sending messages to users directly from your application or backend service, or by running targeted campaigns. Amazon Pinpoint helps you understand user behavior, choose the best channels to engage users, determine which messages to send, schedule the best time to deliver the messages, and track user engagement.

With Amazon Pinpoint, you can send messages through multiple channels, including email, text messages (SMS), and mobile push notifications, so you can deliver the right message using the channel best suited for a particular campaign or interaction.

Amazon Pinpoint is easy to get started with: create a project in the console, choose your channels, and define the messages you want to send to reach individual users. For targeted campaigns, the console walks you through defining target segments, campaign messages, and delivery schedules. Once a campaign is running, Amazon Pinpoint provides metrics so you can run analytics and track its impact.

With Amazon Pinpoint, there are no upfront costs and no fixed monthly fees. You pay only for the number of users you target, the messages you send, and the events you collect, so you can start small and scale as your application grows.



* Business Productivity


- WorkDocs : Not in the Dev exam. Amazon WorkDocs is a secure, fully managed enterprise storage and sharing service with strong administrative controls and feedback capabilities that improve user productivity.

Users can comment on files, send them to others for feedback, and upload new versions without having to email multiple versions of their files as attachments. Users can take advantage of these capabilities wherever they are, using the device of their choice, including PCs, Macs, tablets, and phones. Amazon WorkDocs offers IT administrators the option of integrating with existing corporate directories, flexible sharing policies, and control of where data is stored. Customers can get started with a 30-day free trial providing 1 GB of storage per user for up to 50 users.

Amazon WorkDocs provides the Amazon WorkDocs SDK, which removes the complexity of building content collaboration and management capabilities into your solutions and applications by giving full administrator- and user-level access to Amazon WorkDocs site resources. You can develop new applications or integrate with existing solutions and applications. And because the Amazon WorkDocs SDK is part of the AWS SDK, you can easily leverage the capabilities of the AWS platform for security, monitoring, business logic, storage, and application development.

- WorkMail : Not in the Dev exam. Amazon WorkMail is a secure, managed business email and calendar service with support for existing desktop and mobile email client applications. Amazon WorkMail gives users the ability to seamlessly access their email, contacts, and calendars using the client application of their choice, including Microsoft Outlook, native iOS and Android email applications, any client application supporting the IMAP protocol, or directly through a web browser. You can integrate Amazon WorkMail with your existing corporate directory, use email journaling to meet compliance requirements, and control both the keys that encrypt your data and the location in which your data is stored. You can also set up interoperability with Microsoft Exchange Server to easily get started with Amazon WorkMail.





* Internet of Things


- IoT : Not in the Dev exam



* Desktop & App Streaming


- WorkSpaces : Not in the Dev exam

- AppStream 2.0 : 



* Artificial Intelligence


- Lex

- Polly

- Machine Learning : Not in Dev Exam

- Rekognition



* Messaging (Not in Dev Exam)


- SNS

- SQS

- SES : Simple Email Service



