S3 Summary



* Remember that S3 is Object based i.e. allows you to upload files.

* Files can be from 0 Bytes to 5TB

* There is unlimited storage

* Files are stored in Buckets

* S3 is a universal namespace, that is, names must be unique globally

* name - i.e. https://s3-eu-west-1.amazonaws.com/acloudguru


* Read after Write consistency for PUTs of new Objects

* Eventual Consistency for overwrite PUTS and DELETES (can take some time to propagate)

* S3 Storage Classes/Tiers

  : S3 (durable, immediately available, frequently accessed)

  : S3 - IA (durable, immediately available, infrequently accessed)

  : Reduced Redundancy Storage (data that is easily reproducible, such as thumbnails etc.)

  : Glacier - Archived data, where you can wait 3 - 5 hours before accessing


* Remember the core fundamentals of S3

  : Key (name)

  : Value (data)

  : Version ID

  : Metadata

  : Access Control lists

  

* Object based storage only (for files)

* Not suitable to install an operating system on (***)

 


Versioning





* Stores all versions of an object (including all writes and even if you delete an object)

* Great backup tool

* Once enabled, Versioning cannot be disabled, only suspended.

* Integrates with Lifecycle rules

* Versioning's MFA Delete capability, which uses multi-factor authentication, can be used to provide an additional layer of security

* Cross Region Replication, requires versioning enabled on the source bucket
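
As a quick illustration, versioning can be turned on from the AWS CLI as well as the console. A minimal sketch, assuming a bucket named my-source-bucket:

aws s3api put-bucket-versioning --bucket my-source-bucket --versioning-configuration Status=Enabled

# confirm the current state (returns "Enabled" or "Suspended")
aws s3api get-bucket-versioning --bucket my-source-bucket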

  


Lifecycle Management





* Can be used in conjunction with versioning

* Can be applied to current versions and previous versions

* Following actions can now be done

  : Transition to the Standard-Infrequent Access Storage Class (128KB and 30 days after the creation date)

  : Archive to the Glacier Storage Class (30 days after IA, if relevant)

  : Permanently Delete
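
For reference, a lifecycle configuration can also be applied from the CLI by pointing it at a JSON rules file (bucket and file names here are placeholders; an example lifecycle.json is sketched in the Glacier lifecycle section further down):

aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle.json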

  


CloudFront



* Edge Location - This is the location where content will be cached. This is separate to an AWS Region/AZ

* Origin - This is the origin of all the files that the CDN will distribute. This can be either an S3 Bucket, an EC2 Instance, an Elastic Load Balancer or Route53

* Distribution - This is the name given the CDN which consists of a collection of Edge Locations.

  : Web Distribution - Typically used for Websites

  : RTMP - Used for Media Streaming

* Edge locations are not just READ only, you can write to them too. (i.e. put an object on to them)

* Objects are cached for the life of the TTL (Time To Live)

* You can clear cached objects, but you will be charged.



Securing your buckets



* By default, all newly created buckets are PRIVATE

* You can setup access control to your buckets using

  : Bucket Policies

  : Access Control Lists

* S3 buckets can be configured to create access logs which log all requests made to the S3 bucket. This can be done to another bucket.
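
A minimal bucket-policy sketch applied with the CLI (the bucket name is a placeholder); this example grants public read on all objects in the bucket:

cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
EOF

aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json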



Encryption



* In Transit

  : SSL/TLS

* At Rest

  : Server Side Encryption

    - S3 Managed Keys - SSE-S3 (***)

    - AWS Key Management Service, Managed Keys - SSE-KMS (***)

    - Server Side Encryption With Customer Provided Keys - SSE-C (***)

* Client Side Encryption
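
For example, server side encryption can be requested per object at upload time with the CLI (file and bucket names are placeholders):

# SSE-S3 (AES-256 keys managed by S3)
aws s3 cp backup.zip s3://my-bucket/backup.zip --sse AES256

# SSE-KMS (keys managed through AWS KMS)
aws s3 cp backup.zip s3://my-bucket/backup.zip --sse aws:kms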



Storage Gateway



* File Gateway - For flat files, stored directly on S3

* Volume Gateway

  : Stored Volumes - Entire Dataset is stored on site and is asynchronously backed up to S3.

  : Cached Volumes - Entire Dataset is stored on S3 and the most frequently accessed data is cached on site

* Gateway Virtual Tape Library (VTL)

  : Used for backup and uses popular backup applications like NetBackup, Backup Exec, Veeam etc.






Snowball



* Snowball

* Snowball Edge

* Snowmobile


* Understand what Snowball is

* Understand what Import Export is

* Snowball Can

  : Import to S3

  : Export from S3

  


S3 Transfer Acceleration



* You can speed up transfers to S3 using S3 Transfer Acceleration. This costs extra, and has the greatest impact on users in distant locations.



S3 Static Websites



* You can use S3 to host static websites

* Serverless

* Very cheap, scales automatically

* STATIC only, cannot host dynamic sites



CORS



* Cross Origin Resource Sharing

* Need to enable it on the bucket holding the resources and state the URL for the origin that will be calling the bucket.

i.e. 

http://mybucketname.s3-website.eu-west-2.amazonaws.com - S3 Website

https://s3.eu-west-2.amazonaws.com/mybucketname      - Bucket
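
A rough sketch of enabling CORS from the CLI on the bucket that holds the assets, allowing GET requests from the S3 website origin above (bucket names are placeholders):

cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["http://mybucketname.s3-website.eu-west-2.amazonaws.com"],
      "AllowedMethods": ["GET"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
EOF

aws s3api put-bucket-cors --bucket my-asset-bucket --cors-configuration file://cors.json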



Last few tips



* Write to S3 - HTTP 200 code for a successful write

* You can load files to S3 much faster by enabling multipart upload

* Read the S3 FAQ before taking the exam. It comes up A LOT!





=====================================



S3 Quiz



* The minimum file size allowed on S3 is 0 bytes? True

* If you encrypt a bucket on S3 what encryption does AWS use? 

  ==> Advanced Encryption Standard (AES) 256

* You create a static hosting website in a bucket called "acloudguru" in Japan using S3. What would the new URL End Point be? 

  ==> http://acloudguru.s3-website-ap-northeast-1.amazonaws.com

* You are hosting a static website in an S3 bucket which uses Java script to reference assets in another S3 bucket. For some reason however these assets are not displaying when users browse to the site. What could be the problem?

  ==> You haven't enabled Cross Origin Resource Sharing (CORS) on the bucket where the assets are stored

* What is the HTTP code you would see if once you successfully place a file in an S3 bucket? ==> 200


* S3 provides unlimited storage. ==> True

* What is the maximum file size that can be stored on S3? ==> 5TB

* What is the largest size file you can transfer to S3 using a PUT operation? 

  ==> The correct answer is 5GB. After that you must use a multipart upload. This can be an exam question, so remember it before going in to your exam. See http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadingObjects.html

* If you want to enable a user to download your private data directly from S3, you can insert a pre-signed URL into a web page before giving it to your user. ==> True
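
  A quick way to generate such a URL is the CLI (the object name is a placeholder; the URL expires after the given number of seconds):

  aws s3 presign s3://my-bucket/private-report.pdf --expires-in 3600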

* When you first create an S3 bucket, this bucket is publicly accessible by default. ==> False






S3 ( Simple Storage Service)


S3 provides developers and IT teams with secure, durable, highly-scalable object storage. Amazon S3 is easy to use, with a simple web services interface to store and retrieve any amount of data from anywhere on the web.


S3 is a safe place to store your files.

It is Object based storage.

The data is spread across multiple devices and facilities.


The Basics

- S3 is Object based i.e. allows you to upload files.

- Files can be from 0 Bytes to 5TB

- There is unlimited storage

- Files are stored in Buckets.

- S3 is a universal namespace, that is, names must be unique globally.

- https://s3-eu-west-1.amazonaws.com/acloudguru

- When you upload a file to S3 you will receive a HTTP 200 code if the upload was successful.


Data Consistency Model For S3 (***)

- Read after Write consistency for PUTS of new objects

- Eventual Consistency for overwrite PUTS and DELETES (can take some time to propagate)


S3 is a simple key, value store

- S3 is Object based. Objects consist of the following

: Key (this is simply the name of the object)

: Value (This is simply the data and is made up of a sequence of bytes)

: Version ID (Important for versioning)

: Metadata (Data about the data you are storing)

: Subresources

  Access Control Lists

  Torrent

: Built for 99.99% availability for the S3 platform

: Amazon guarantees 99.999999999% durability for S3 information (Remember 11X9's)

: Tiered Storage Available

: Lifecycle Management

: Versioning

: Encryption

: Secure your data using Access Control Lists and Bucket Policies


Storage Tiers/Classes

: S3 - 99.99% availability, 99.999999999% durability, stored redundantly across multiple devices in multiple facilities and is designed to sustain the loss of 2 facilities concurrently

: S3 - IA (Infrequently Accessed) For data that is accessed less frequently, but requires rapid access when needed. Lower fee than S3, but you are charged a retrieval fee.

: Reduced Redundancy Storage - Designed to provide 99.99% durability and 99.99% availability of objects over a given year.

: Glacier - Very cheap, but used for archival only. It takes 3-5 hours to restore from Glacier



What is Glacier?


Glacier is an extremely low-cost storage service for data archival. Amazon Glacier stores data for as little as $0.01 per gigabyte per month, and is optimized for data that is infrequently accessed and for which retrieval times of 3 to 5 hours are suitable.



S3- Charges

-  Charged for

: Storage

: Requests

: Storage Management Pricing

: Data Transfer Pricing

: Transfer Acceleration



What is S3 Transfer Acceleration?



Amazon S3 Transfer Acceleration enables fast, easy and secure transfers of files over long distances between your end users and an S3 bucket.

Transfer Acceleration takes advantage of Amazon CloudFront's globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.


Exam Tips for S3 101

- Remember that S3 is Object based i.e. allows you to upload files.

- Files can be from 0 Bytes to 5TB

- There is unlimited storage

- Files are stored in Buckets

- S3 is a universal namespace, that is, names must be unique globally.

- https://s3-eu-west-1.amazonaws.com/acloudguru

- Read after Write consistency for PUTS of new Objects

- Eventual Consistency for overwrite PUTS and DELETES (can take some time to propagate)

- S3 Storage Classes/Tiers

: S3 (durable, immediately available, frequently accessed)

: S3 - IA (durable, immediately available, infrequently accessed)

: S3 - Reduced Redundancy Storage (data that is easily reproducible, such as thumb nails etc)

: Glacier - Archived data, where you can wait 3-5 hours before accessing.

- Remember the core fundamentals of an S3 object

: key (name)

: Value (data)

: Version ID

: Metadata

: Subresources

  ACL

  Torrent

- object based storage only (for files) (*****)

- Not suitable to install an operating system on. (*****)

- Successful uploads will generate a HTTP 200 status code.


- Read the S3 FAQ before taking the exam. It comes up A LOT! (*****)


================


S3 Essentials


A bucket is just a folder where you can upload files


- Buckets are a universal name space

- Upload an object to S3 receive a HTTP 200 Code

- S3, S3 - IA, S3 Reduced Redundancy Storage

- Encryption

: Client Side Encryption

: Server Side Encryption

  Server side encryption with Amazon S3 Managed Keys (SSE-S3)

  Server side encryption with KMS (SSE-KMS)

  Server side encryption with Customer Provided Keys (SSE-C)

- Control access to buckets using either a bucket ACL or using Bucket Policies

- BY DEFAULT BUCKETS ARE PRIVATE AND ALL OBJECTS STORED INSIDE THEM ARE PRIVATE


===================


Create a S3 Website


Static pages only, no dynamic pages (PHP etc.)

Format of URL : bucketname.s3-website-region.amazonaws.com
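
Website hosting can also be switched on from the CLI; a minimal sketch, assuming the bucket already allows public read:

aws s3 website s3://mybucketname/ --index-document index.html --error-document error.html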


===================


Cross Origin Resource Sharing (CORS)


Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support in Amazon S3, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.



Lambda -> Create Functions

Triggers of Lambda function - ?? *****

Amazon API Gateway

upload html files to S3

IAM - Role, Policy

Route 53 
- Register Domain

Is it really serverless?
- Vendor takes care of provisioning and management of servers
- Vendor is responsible for capacity provisioning and automated scaling
- Moving away from servers and infrastructure concerns should be your goal

=====================







Using Polly to help you pass your exam - A serverless approach

Polly 
- Text-to-Speech : Type statements -> can download it to mp3

Create S3 bucket - 2 buckets

Simple Notification Service (SNS)

DynamoDB table

IAM - create new role : Lambda - Add permissions - attach new policy

Lambda - Create 2 Lambda functions

Add Trigger : SNS 


============================

Using Polly to help you pass your exam - A serverless approach : Part 2

Create 3rd Lambda function (PostReader_GetPosts)

Amazon API Gateway - Create new API (PostReaderAPI)

Go to S3 and deploy the website

=============================


=============================

S3 - Versioning

S3 - Create a Bucket - Enable versioning

Bucket - upload a text file to the bucket - update the file and upload it again
- Click on Latest Version link -> can select a version from dropdown list

Delete the text file - initiate restore => can restore the deleted file
Actions - Delete the Delete Marker

* Stores all versions of an object (including all writes and even if you delete an object)
* Great backup tool
* Once enabled, Versioning cannot be disabled, only suspended.
* Integrates with Lifecycle rules 
* Versioning's MFA Delete capability, which uses multi-factor authentication, can be used to provide an additional layer of security.

==============================================

Cross region replication

S3 - Create a new bucket
The existing and new buckets must be in different regions

Existing bucket - Management - Replication - Add Rule - Select options - Select Destination (new bucket) - Enable versioning - Change the storage class - Select IAM role - Save 
Replication enabled
Go to new bucket - not replicated yet
Command line - pip install awscli etc. 

IAM - Create Group - Attach Policy 
Create a User - Access key ID - Secret....
Terminal - aws configure
Access key ID - 
Secret Access Key - 
default region name - 

aws s3 ls - will show buckets (now there are 2 buckets)

aws s3 cp --recursive s3://existing_bucket s3://new_bucket -> will copy the contents from existing to new bucket

Back to console and check the new bucket - will be the objects from existing bucket

* Versioning must be enabled on both the source and destination buckets.
* Regions must be unique
* Files in an existing bucket are not replicated automatically. All subsequent updated files will be replicated automatically
* You cannot replicate to multiple buckets or use daisy chaining (at this time.)
* Delete markers are replicated
* Deleting individual versions or delete markers will not be replicated
* Understand what Cross Region Replication is at a high level
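
The console steps above can also be expressed with the CLI. A sketch, assuming versioning is already enabled on both buckets; the IAM role ARN and account ID are placeholders, and the bucket names match the ones used above:

cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::123456789012:role/my-replication-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Prefix": "",
      "Destination": {
        "Bucket": "arn:aws:s3:::new_bucket",
        "StorageClass": "STANDARD_IA"
      }
    }
  ]
}
EOF

aws s3api put-bucket-replication --bucket existing_bucket --replication-configuration file://replication.json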

===================================


Glacier - Data Archival

S3 - Create a bucket - Enable Versioning - all default 

Management - Lifecycle - add lifecycle rule - rule name - Current version, select transition to standard-IA after 30 days - add transition - Select transition to Amazon Glacier after 60 days - previous version - Transition to Standard-IA after 30 days - Select transition to Amazon Glacier after 60 days - Configure expiration - Current/previous version expire after 425 days - Save

* Can be used in conjunction with versioning
* Can be applied to current versions and previous versions
* Following actions can now be done
  - Transition to the Standard-Infrequent Access Storage Class (128KB and 30 days after the creation date)
  - Archive to the Glacier Storage Class (30 days after IA, if relevant)
  - Permanently Delete
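
The same rule can be written as a lifecycle JSON and applied with the CLI; a sketch matching the 30/60/425-day values used above (the bucket name is a placeholder):

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "tier-then-expire",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 60, "StorageClass": "GLACIER" }
      ],
      "NoncurrentVersionTransitions": [
        { "NoncurrentDays": 30, "StorageClass": "STANDARD_IA" },
        { "NoncurrentDays": 60, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 425 },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 425 }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle.json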
  
============================================

CloudFront Overview





A content delivery network (CDN) is a system of distributed servers (network) that deliver webpages and other web content to a user based on the geographic locations of the user, the origin of the webpage and a content delivery server.

CloudFront - Key Terminology
* Edge Location - This is the location where content will be cached. This is separate to an AWS Region/AZ
* Origin - This is the origin of all the files that the CDN will distribute. This can be either an S3 Bucket, an EC2 instance, an Elastic Load Balancer or Route 53
* Distribution - This is the name given the CDN which consists of a collection of Edge Locations

What is CloudFront

Amazon CloudFront can be used to deliver your entire website, including dynamic, static, streaming, and interactive content using a global network of edge locations. Requests for your content are automatically routed to the nearest edge location, so content is delivered with the best possible performance.

Amazon CloudFront is optimized to work with other Amazon Web Services, like Amazon Simple Storage Service (Amazon S3), Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Load Balancing, and Amazon Route 53. Amazon CloudFront also works seamlessly with any non-AWS origin server, which stores the original, definitive versions of your file.



CloudFront - Key Terminology

* Web Distribution - Typically used for Websites
* RTMP - Used for Media Streaming

CloudFront - Exam Tips

* Edge Location - This is the location where content will be cached. This is separate to an AWS Region/AZ
* Origin - This is the origin of all the files that the CDN will distribute. This can  be either an S3 Bucket, an EC2 Instance, an Elastic Load Balancer or Route53
* Distribution - This is the name given the CDN which consists of a collection of Edge Locations
  - Web Distribution - Typically used for Websites
  - RTMP - Used for Media Streaming
* Edge locations are not just READ only, you can write to them too. (i.e. put an object on to them)
* Objects are cached for the life of the TTL (Time To Live)
* You can clear cached objects, but you will be charged.

=======================================================

Create CDN

S3 - Create a bucket - upload a file - public permission 
Cloud Front - Service - Distribution - get started (web) - fill in fields - Create

Exam Topic - Distribution - Web, RTMP *****
- Restriction Type : Whitelist, Blacklist
- Invalidations : 

S3- goto Bucket - open the file uploaded  ==> Go to CloudFront - Copy domain name - enter the domain name + /uploaded file name ==> loading faster

CloudFront - Paid service

==========================================================



==========================================================

S3 - Security & Encryption

* By default, all newly created buckets are PRIVATE
* You can setup access control to your buckets using
  - Bucket Policies
  - Access Control Lists
* S3 buckets can be configured to create access logs which log all requests made to the S3 bucket. This can be done to another bucket.

Encryption
* In Transit : 
  - SSL/TLS
* At Rest
  - Server Side Encryption
    : S3 Managed Keys - SSE-S3
    : AWS Key Management Service, Managed Keys - SSE-KMS
    : Server Side Encryption with Customer Provided Keys - SSE-C
  - Client Side Encryption
    
==============================================





AWS Storage Gateway is a service that connects an on-premises software appliance with cloud-based storage to provide seamless and secure integration between an organization's on-premises IT environment and AWS's storage infrastructure. The service enables you to securely store data to the AWS cloud for scalable and cost-effective storage.

AWS Storage Gateway's software appliance is available for download as a virtual machine (VM) image that you install on a host in your datacenter. Storage Gateway supports either VMware ESXi or Microsoft Hyper-V. Once you've installed your gateway and associated it with your AWS account through the activation process, you can use the AWS Management Console to create the storage gateway option that is right for you.

* Four Types of Storage Gateways
- File Gateway (NFS)
- Volume Gateway (iSCSI)
  : Stored Volumes
  : Cached Volumes
- Tape Gateway (VTL)

* File Gateway
Files are stored as objects in your S3 buckets, accessed through a Network File System (NFS) mount point. Ownership, permissions, and timestamps are durably stored in S3 in the user-metadata of the object associated with the file. Once objects are transferred to S3, they can be managed as native S3 objects, and bucket policies such as versioning, lifecycle management, and cross-region replication apply directly to objects stored in your bucket.

* Volume Gateway
The volume interface presents your applications with disk volumes using the iSCSI block protocol.
Data written to these volumes can be asynchronously backed up as point-in-time snapshots of your volumes, and stored in the cloud as Amazon EBS snapshots.
Snapshots are incremental backups that capture only changed blocks. All snapshot storage is also compressed to minimize your storage charges.

* Stored Volumes
Stored volumes let you store your primary data locally, while asynchronously backing up that data to AWS. Stored volumes provide your on-premises applications with low-latency access to their entire datasets, while providing durable, off-site backups. You can create storage volumes and mount them as iSCSI devices from your on-premises application servers. Data written to your stored volumes is stored on your on-premises storage hardware. This data is asynchronously backed up to Amazon Simple Storage Service (Amazon S3) in the form of Amazon Elastic Block Store (Amazon EBS) snapshots. 1GB - 16 TB in size for stored Volumes.

* Cached Volumes
Cached volumes let you use Amazon Simple Storage Service (Amazon S3) as your primary data storage while retaining frequently accessed data locally in your storage gateway. Cached volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to their frequently accessed data. You can create storage volumes up to 32 TiB in size and attach to them as iSCSI devices from your on-premises application servers. Your gateway stores data that you write to these volumes in Amazon S3 and retains recently read data in your on-premises storage gateway's cache and upload buffer storage. 1GB-32TB in size for Cached Volumes.

* Tape Gateway
Tape Gateway offers a durable, cost-effective solution to archive your data in the AWS Cloud. The VTL interface it provides lets you leverage your existing tape-based backup application infrastructure to store data on virtual tape cartridges that you create on your tape gateway. Each tape gateway is preconfigured with a media changer and tape drives, which are available to your existing client backup applications as iSCSI devices. You add tape cartridges as you need to archive your data. Supported by NetBackup, Backup Exec, Veeam etc.

Exam Tips

- File Gateway - For flat files, stored directly on S3.
- Volume Gateway
  : Stored Volumes - Entire Dataset is stored on site and is asynchronously backed up to S3
  : Cached Volumes - Entire Dataset is stored on S3 and the most frequently accessed data is cached on site.
- Gateway Virtual Tape Library (VTL)
  : Used for backup and uses popular backup applications like NetBackup, Backup Exec, Veeam etc.
  
=======================================


Import/Export Disk

AWS Import/Export Disk accelerates moving large amounts of data into and out of the AWS cloud using portable storage devices for transport. AWS Import/Export Disk transfers your data directly onto and off of storage devices using Amazon's high-speed internal network and bypassing the Internet.

Types of Snowballs
* Snowball
* Snowball Edge
* Snowmobile




* Snowball
Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of AWS. Using Snowball addresses common challenges with large-scale data transfers including high network costs, long transfer times, and security concerns. Transferring data with Snowball is simple, fast, secure, and can be as little as one-fifth the cost of high-speed Internet.

80TB snowball in all regions. Snowball uses multiple layers of security designed to protect your data including tamper-resistant enclosures, 256-bit encryption, and an industry-standard Trusted Platform Module (TPM) designed to ensure both security and full chain-of-custody of your data. Once the data transfer job has been processed and verified, AWS performs a software erasure of the Snowball appliance.

* Snowball Edge
AWS Snowball Edge is a 100TB data transfer device with on-board storage and compute capabilities. You can use Snowball Edge to move large amounts of data into and out of AWS, as a temporary storage tier for large local datasets, or to support local workloads in remote or offline locations.

Snowball Edge connects to your existing applications and infrastructure using standard storage interfaces, streamlining the data transfer process and minimizing setup and integration. Snowball Edge can cluster together to form a local storage tier and process your data on-premises, helping ensure your applications continue to run even when they are not able to access the cloud.

* Snowmobile
AWS Snowmobile is an Exabyte-scale data transfer service used to move extremely large amounts of data to AWS. You can transfer up to 100PB per Snowmobile, a 45-foot long ruggedized shipping container, pulled by a semi-trailer truck. Snowmobile makes it easy to move massive volumes of data to the cloud, including video libraries, image repositories, or even a complete data center migration. Transferring data with Snowmobile is secure, fast and cost effective.

Exam Tips

* Understand what Snowball is
* Understand what Import Export is
* Snowball Can
  : Import to S3
  : Export from S3
  
==================================================


S3 Transfer Acceleration utilizes the CloudFront Edge Network to accelerate your uploads to S3. Instead of uploading directly to your S3 bucket, you can use a distinct URL to upload directly to an edge location, which will then transfer that file to S3. You will get a distinct URL to upload to, e.g. acloudguru.s3-accelerate.amazonaws.com

S3 - Create a bucket 
Properties - Transfer acceleration - Enabled - Click on the link in the popup window - Check upload speed (Speed Comparison)
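
A sketch of the same thing from the CLI (the bucket name is a placeholder); once acceleration is enabled, uploads can be sent through the accelerate endpoint:

aws s3api put-bucket-accelerate-configuration --bucket acloudguru --accelerate-configuration Status=Enabled

aws s3 cp bigfile.zip s3://acloudguru/bigfile.zip --endpoint-url https://s3-accelerate.amazonaws.com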

========================================







EC2 - Summary & Exam Tips


From the A Cloud Guru lecture on Udemy






* Know the differences (pricing models) between (***)

- On Demand 

- Spot

- Reserved

- Dedicated Hosts : 


==> Choose best pricing model for specific requests


* Remember with spot instances;

- If you terminate the instance, you pay for the hour

- if AWS terminates the spot instance, you get the hour it was terminated in for free.



* EC2 Instance Types


Making Sense of AWS EC2 Instance Type Pricing: ECU Vs. vCPU





EBS (Elastic Block Store) Consists of;

- SSD, General Purpose - GP2 (Up to 10,000 IOPS)

- SSD, Provisioned IOPS - IO1 (More than 10,000 IOPS)

- HDD, Throughput Optimized - ST1 - frequently accessed workloads

- HDD, Cold - SC1 - less frequently accessed data.

- HDD, Magnetic - Standard - cheap, infrequently accessed storage


* You cannot mount 1 EBS volume to multiple EC2 instances, instead use EFS.



EC2 Lab Exam Tips

* Termination Protection is turned off by default, you must turn it on

* On an EBS-backed instance, the default action is for the root EBS volume to be deleted when the instance is terminated

* Root volumes cannot be encrypted by default, you need a third party tool (such as BitLocker etc.) to encrypt the root volume.

* Additional volumes can be encrypted.


Volumes vs. Snapshots

* Volumes exist on EBS

- Virtual Hard Disk

* Snapshots exist on S3

* You can take a snapshot of a volume, this will store that volume on S3

* Snapshots are point in time copies of Volumes

* Snapshots are incremental, this means that only the blocks that have changed since your last snapshot are moved to S3

* If this is your first snapshot, it may take some time to create


Volumes vs. Snapshots - Security

* Snapshots of encrypted volumes are encrypted automatically

* Volumes restored from encrypted snapshots are encrypted automatically

* You can share snapshots, but only if they are unencrypted.

  - These snapshots can be shared with other AWS accounts or made public


Snapshots of Root Device Volume

* To create a snapshot for Amazon EBS volumes that serve as root devices, you should stop the instance before taking the snapshot.
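
A minimal CLI sketch of that sequence (the instance and volume IDs are placeholders):

aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# once the instance is stopped, snapshot its root EBS volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "root volume snapshot"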




EBS vs. Instance Store 

* Instance Store Volumes are sometimes called Ephemeral Storage.

* Instance store volumes cannot be stopped. If the underlying host fails, you will lose your data.

* EBS backed instances can be stopped. You will not lose the data on this instance if it is stopped.

* You can reboot both, you will not lose your data.

* By default, both ROOT volumes will be deleted on termination, however with EBS volumes, you can tell AWS to keep the root device volume.


How can I take a snapshot of a RAID Array?

* Problem - When you take a snapshot, the snapshot excludes data held in the cache by applications and the OS. This tends not to matter on a single volume, however when using multiple volumes in a RAID array, this can be a problem due to interdependencies of the array.


* Solution - Take an application consistent snapshot

- Stop the application from writing to disk

- Flush all caches to the disk.


- How can we do this?

  Freeze the file system

  Unmount the RAID Array

  Shut down the associated EC2 instance.

  


Amazon Machine Images 

* AMI's are regional. You can only launch an AMI from the region in which it is stored. However you can copy AMI's to other regions using the console, command line or the Amazon EC2 API.


* Standard Monitoring = 5 Minutes

* Detailed Monitoring = 1 Minute


* CloudWatch is for performance monitoring

* CloudTrail is for auditing


What can I do with Cloudwatch?

* Dashboards - Creates awesome dashboards to see what is happening with your AWS environment

* Alarms - Allows you to set Alarms that notify you when particular thresholds are hit.

* Events - CloudWatch Events helps you to respond to state changes in your AWS resources.

* Logs - CloudWatch Logs helps you to aggregate, monitor, and store logs.


Roles Lab

* Roles are more secure than storing your access key and secret access key on individual EC2 instances.

* Roles are easier to manage

* Roles can be assigned to an EC2 instance AFTER it has been provisioned using both the command line and the AWS console.

* Roles are universal, you can use them in any region.


Instance Meta-data

* Used to get information about an instance (such as public ip)

* curl http://169.254.169.254/latest/meta-data/

* Not to be confused with user-data (bootstrap scripts), which is retrieved from http://169.254.169.254/latest/user-data


EFS Features

* Supports the Network File System version 4 (NFSv4) protocol

* You only pay for the storage you use (no pre-provisioning required)

* Can scale up to the petabytes

* Can support thousands of concurrent NFS connections

* Data is stored across multiple AZ's within a region

* Read After Write consistency


What is Lambda?

* AWS Lambda is a compute service where you can upload your code and create a Lambda function. AWS Lambda takes care of provisioning and managing the servers that you use to run the code. You don't have to worry about operating systems, patching, scaling, etc. You can use Lambda in the following ways.


- As an event-driven compute service where AWS Lambda runs your code in response to events. These events could be changes to data in an Amazon S3 bucket or an Amazon DynamoDB table.

- As a compute service to run your code in response to HTTP requests using Amazon API Gateway or API calls made using AWS SDKs. This is what we use at A Cloud Guru





Quiz

- The default region for an SDK is "US-EAST-1"

- AWS SDK supports Python, Ruby, Node.JS, PHP, JAVA (not C++)

- HTTP 5XX is a server side error

- HTTP 4XX is a client side error

- HTTP 3XX is a redirection

- HTTP 2XX is the request was successful

- To find out both private IP address and public IP address of EC2 instance

  => Retrieve the instance Metadata from http://169.254.169.254/latest/meta-data/

- To retrieve instance metadata or userdata you will need to use this IP address

  => http://169.254.169.254

- In order to enable encryption at rest using EC2 and Elastic Block Store you need to

  => Configure encryption when creating the EBS volume

 http://aws.amazon.com/about-aws/whats-new/2014/05/21/Amazon-EBS-encryption-now-available/

- You can have multiple SSL certificates on an Elastic Load Balancer

- Elastic Load Balancers are chargeable


* Elastic Load Balancer (Exam Tips)






Elastic Load Balancer FAQs

Classic Load Balancer

General

Application Load Balancer

Network Load Balancer






1178578-C02NW6G1G3QD:AWS_SSH changsoopark$ ssh ec2-user@34.228.166.148 -i EC2KeyPair.pem.txt 

Last login: Mon Oct 16 23:10:59 2017 from 208.185.161.249


       __|  __|_  )

       _|  (     /   Amazon Linux AMI

      ___|\___|___|


https://aws.amazon.com/amazon-linux-ami/2017.03-release-notes/

13 package(s) needed for security, out of 33 available

Run "sudo yum update" to apply all updates.

Amazon Linux version 2017.09 is available.

[ec2-user@ip-172-31-24-42 ~]$ sudo su

[root@ip-172-31-24-42 ec2-user]# service httpd status

httpd (pid  8282) is running...

[root@ip-172-31-24-42 ec2-user]# service httpd start

Starting httpd: 

[root@ip-172-31-24-42 ec2-user]# chkconfig httpd on

[root@ip-172-31-24-42 ec2-user]# service httpd status

httpd (pid  8282) is running...

[root@ip-172-31-24-42 ec2-user]# cd /var/www/html

[root@ip-172-31-24-42 html]# ls -l

total 4

-rw-r--r-- 1 root root 185 Sep 10 23:38 index.html

[root@ip-172-31-24-42 html]# nano healthcheck.html


[root@ip-172-31-24-42 html]# ls -l

total 8

-rw-r--r-- 1 root root  28 Oct 16 23:14 healthcheck.html

-rw-r--r-- 1 root root 185 Sep 10 23:38 index.html

[root@ip-172-31-24-42 html]# 


Go to EC2 in aws.amazon.com and click on Load Balancers in left panel.



Classic Load Balancer


Next -> Select Security Group, Configure Security Settings (Next) -> 




Add Tag -> Review -> Close ->





(Edit Instance if there is no State)


- Create Application Load Balancer

-> Almost same as Classic Load Balancer.



-> Provisioning state will be turn to active after a couple of mins.


- Application Load Balancer : Preferred for HTTP/HTTPS (****)

- Classic Load Balancer (*****)


1 subnet = 1 availability zone


- Instances monitored by ELB are reported as ;

  InService, or OutofService

  

- Health Checks check the instance health by talking to it

- Have their own DNS name. You are never given an IP address.

- Read the ELB FAQ for Classic Load Balancers ***

- Delete Load Balancers after completing the test. (They are a paid service)






* SDK's - Exam Tips


https://aws.amazon.com/tools/


- Android, iOS, JavaScript (Browser)

- Java

- .NET

- Node.js

- PHP

- Python

- Ruby

- Go

- C++


Default Region - US-EAST-1

Some have default regions (JAVA)

Some do not (Node.js)





* Lambda (*** - Several questions)


Data Center - IAAS - PAAS - Containers - Serverless


The best way to get started with AWS Lambda is to work through the Getting Started Guide, part of our technical documentation. Within a few minutes, you will be able to deploy and use an AWS Lambda function.



What is Lambda?

- Data Centers

- Hardware

- Assembly Code/Protocols

- High Level Languages

- Operating System

- Application Layer/AWS APIs

- AWS Lambda 


AWS Lambda is a compute service where you can upload your code and create a Lambda function. AWS Lambda takes care of provisioning and managing the servers that you use to run the code. You don't have to worry about operating systems, patching, scaling, etc. You can use Lambda in the following ways.


- As an event-driven compute service where AWS Lambda runs your code in response to events. These events could be changes to data in an Amazon S3 bucket or an Amazon DynamoDB table.

- As a compute service to run our code in response to HTTP requests using Amazon API Gateway or API calls made using AWS SDKs. 


How to use Lambda -> refer to my articles for Alexa Skill development

http://coronasdk.tistory.com/931



What Languages?

Node.js

Java

Python

C#




How is Lambda Priced?

- Number of requests

   First 1 million requests are free. $0.20 per 1 million requests thereafter.


- Duration

  Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100ms. The price depends on the amount of memory you allocate to your function. You are charged $0.00001667 for every GB-second used.
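
A rough worked example of that pricing (ignoring the separate compute free tier): a function configured with 1GB of memory that handles 2 million requests in a month, each running 200ms:

Requests: first 1M free, next 1M x $0.20 per 1M requests  = $0.20
Compute:  2,000,000 x 0.2s x 1GB = 400,000 GB-seconds
          400,000 x $0.00001667                           = ~$6.67
Total                                                     = ~$6.87 per month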


Why is Lambda cool?

- No Servers!

- Continuous Scaling

- Super cheap!


Lambda - Exam Tips


Lambda scales out (not up) automatically

Lambda functions are independent, 1 event = 1 function

Lambda is serverless

Know what services are serverless! (S3, API Gateway, Lambda, DynamoDB etc.) - EC2 is not serverless.

Lambda functions can trigger other lambda functions, 1 event can = x functions if functions trigger other functions

Architectures can get extremely complicated, AWS X-ray allows you to debug what is happening

Lambda can do things globally, you can use it to back up S3 buckets to other S3 buckets etc.

Know your triggers

(Maximum duration is 5 mins)




* Bash Script


Automatically execute scripts when creating an instance

- Enter script in Advanced Details text box when you create an instance



In this case, the system will execute all of these commands when it creates the instance.

#!/bin/bash

yum update -y

yum install httpd -y

service httpd start

chkconfig httpd on

cd /var/www/html

aws s3 cp s3://mywebsitebucket-changsoo/index.html /var/www/html


: Update system, install Apache, start httpd server and copy index.html from s3 to /var/www/html folder of the Instance



* Install PHP and create php page


Enter the script below in Advanced Details when you create an instance


#!/bin/bash

yum update -y

yum install httpd24 php56 git -y

service httpd start

chkconfig httpd on

cd /var/www/html

echo "<?php phpinfo();?>" > test.php

git clone https://github.com/acloudguru/s3


Navigate to 'public IP address'/test.php in your browser and you will see the PHP info page




Access to the server through Terminal


1178578-C02NW6G1G3QD:AWS_SSH changsoopark$ ssh ec2-user@54.89.219.112 -i EC2KeyPair.pem.txt 


.....................


[root@ip-172-31-80-161 ec2-user]# cd /var/www/html

[root@ip-172-31-80-161 html]# ls -l

total 8

drwxr-xr-x 3 root root 4096 Oct 12 23:52 s3

-rw-r--r-- 1 root root   19 Oct 12 23:52 test.php

[root@ip-172-31-80-161 html]# 


==> there is a test.php file and an s3 folder cloned from acloudguru's GitHub repository



https://docs.aws.amazon.com/aws-sdk-php/v3/guide/getting-started/installation.html



Installing via Composer

Using Composer is the recommended way to install the AWS SDK for PHP. Composer is a dependency management tool for PHP that allows you to declare the dependencies your project needs and installs them into your project.

  1. Install Composer

    curl -sS https://getcomposer.org/installer | php
    
  2. Run the Composer command to install the latest stable version of the SDK:

    php composer.phar require aws/aws-sdk-php
    
  3. Require Composer's autoloader:

    <?php
    require 'vendor/autoload.php';
    

You can find out more on how to install Composer, configure autoloading, and other best-practices for defining dependencies at getcomposer.org.


[root@ip-172-31-80-161 html]# pwd

/var/www/html

[root@ip-172-31-80-161 html]# curl -sS https://getcomposer.org/installer | php

curl: (35) Network file descriptor is not connected

[root@ip-172-31-80-161 html]# curl -sS https://getcomposer.org/installer | php

All settings correct for using Composer

Downloading...


Composer (version 1.5.2) successfully installed to: /var/www/html/composer.phar

Use it: php composer.phar


[root@ip-172-31-80-161 html]# php composer.phar require aws/aws-sdk-php

Do not run Composer as root/super user! See https://getcomposer.org/root for details

Using version ^3.36 for aws/aws-sdk-php

./composer.json has been created

Loading composer repositories with package information

Updating dependencies (including require-dev)

Package operations: 6 installs, 0 updates, 0 removals

  - Installing mtdowling/jmespath.php (2.4.0): Downloading (100%)         

  - Installing psr/http-message (1.0.1): Downloading (100%)         

  - Installing guzzlehttp/psr7 (1.4.2): Downloading (100%)         

  - Installing guzzlehttp/promises (v1.3.1): Downloading (100%)         

  - Installing guzzlehttp/guzzle (6.3.0): Downloading (100%)         

  - Installing aws/aws-sdk-php (3.36.26): Downloading (100%)         

guzzlehttp/guzzle suggests installing psr/log (Required for using the Log middleware)

aws/aws-sdk-php suggests installing aws/aws-php-sns-message-validator (To validate incoming SNS notifications)

aws/aws-sdk-php suggests installing doctrine/cache (To use the DoctrineCacheAdapter)

Writing lock file

Generating autoload files

[root@ip-172-31-80-161 html]# ls -l

total 1844

-rw-r--r-- 1 root root      62 Oct 13 00:04 composer.json

-rw-r--r-- 1 root root   12973 Oct 13 00:04 composer.lock

-rwxr-xr-x 1 root root 1852323 Oct 13 00:04 composer.phar

drwxr-xr-x 3 root root    4096 Oct 12 23:52 s3

-rw-r--r-- 1 root root      19 Oct 12 23:52 test.php

drwxr-xr-x 8 root root    4096 Oct 13 00:04 vendor

[root@ip-172-31-80-161 html]# cd vendor

[root@ip-172-31-80-161 vendor]# ls -l

total 28

-rw-r--r-- 1 root root  178 Oct 13 00:04 autoload.php

drwxr-xr-x 3 root root 4096 Oct 13 00:04 aws

drwxr-xr-x 2 root root 4096 Oct 13 00:04 bin

drwxr-xr-x 2 root root 4096 Oct 13 00:04 composer

drwxr-xr-x 5 root root 4096 Oct 13 00:04 guzzlehttp

drwxr-xr-x 3 root root 4096 Oct 13 00:04 mtdowling

drwxr-xr-x 3 root root 4096 Oct 13 00:04 psr

[root@ip-172-31-80-161 vendor]# vi autoload.php


<?php


// autoload.php @generated by Composer


require_once __DIR__ . '/composer/autoload_real.php';


return ComposerAutoloaderInit818e4cd87569a511144599b49f2b1fed::getLoader();






* Using the PHP to access to S3



[root@ip-172-31-80-161 s3]# ls -l

total 24

-rw-r--r-- 1 root root 796 Oct 12 23:52 cleanup.php

-rw-r--r-- 1 root root 195 Oct 12 23:52 connecttoaws.php

-rw-r--r-- 1 root root 666 Oct 12 23:52 createbucket.php

-rw-r--r-- 1 root root 993 Oct 12 23:52 createfile.php

-rw-r--r-- 1 root root 735 Oct 12 23:52 readfile.php

-rw-r--r-- 1 root root 193 Oct 12 23:52 README.md

[root@ip-172-31-80-161 s3]# vi createbucket.php 


<?php

//copyright 2015 - A Cloud Guru.


//connection string

include 'connecttoaws.php';


// Create a unique bucket name

$bucket = uniqid("acloudguru", true);


// Create our bucket using our unique bucket name

$result = $client->createBucket(array(

    'Bucket' => $bucket

));


//HTML to Create our webpage

echo "<h1 align=\"center\">Hello Cloud Guru!</h1>";

echo "<div align = \"center\"><img src=\"https://acloud.guru/images/logo-small-optimised.png\"></img></div>";

echo "<h2 align=\"center\">You have successfully created a bucket called {$bucket}</h2>";

echo "<div align=\"center\"><a href=\"createfile.php?bucket=$bucket\">Click Here to Continue</a></div>";

?>


[root@ip-172-31-80-161 s3]# vi connecttoaws.php 


<?php

// Include the SDK using the Composer autoloader

require '/var/www/html/vendor/autoload.php';

$client = new Aws\S3\S3Client([

    'version' => 'latest',

    'region'  => 'us-east-1'

]);

?>


[root@ip-172-31-80-161 s3]# vi createfile.php 


<?php

//Copyright 2015 A Cloud Guru


//Connection string

include 'connecttoaws.php';


/*

Files in Amazon S3 are called "objects" and are stored in buckets. A specific

object is referred to by its key (or name) and holds data. In this file

we create an object called acloudguru.txt that contains the data

'Hello Cloud Gurus!'

and we upload/put it into our newly created bucket.

*/


//get the bucket name

$bucket = $_GET["bucket"];


//create the file name

$key = 'cloudguru.txt';


//put the file and data in our bucket

$result = $client->putObject(array(

    'Bucket' => $bucket,

    'Key'    => $key,

    'Body'   => "Hello Cloud Gurus!"

));


//HTML to create our webpage

echo "<h2 align=\"center\">File - $key has been successfully uploaded to $bucket</h2>";

echo "<div align = \"center\"><img src=\"https://acloud.guru/images/logo-small-optimised.png\"></img></div>";

echo "<div align = \"center\"><a href=\"readfile.php?bucket=$bucket&key=$key\">Click Here To Read Your File</a></div>";

?>


[root@ip-172-31-80-161 s3]# vi readfile.php 


<?php

//connection string

include 'connecttoaws.php';


//code to get our bucket and key names

$bucket = $_GET["bucket"];

$key = $_GET["key"];


//code to read the file on S3

$result = $client->getObject(array(

    'Bucket' => $bucket,

    'Key'    => $key

));

$data = $result['Body'];


//HTML to create our webpage

echo "<h2 align=\"center\">The Bucket is $bucket</h2>";

echo "<h2 align=\"center\">The Object's name is $key</h2>";

echo "<h2 align=\"center\">The Data in the object is $data</h2>";

echo "<div align = \"center\"><img src=\"https://acloud.guru/images/logo-small-optimised.png\"></img></div>";

echo "<div align = \"center\"><a href=\"cleanup.php?bucket=$bucket&key=$key\">Click Here To Remove Files & Bucket</a></div>";

?>

                      

[root@ip-172-31-80-161 s3]# vi cleanup.php 


<?php

//Connection String

include'connecttoaws.php';


//Code to get our bucketname and file name

$bucket = $_GET["bucket"];

$key = $_GET["key"];


//buckets cannot be deleted unless they are empty

//Code to delete our object

$result = $client->deleteObject(array(

    'Bucket' => $bucket,

    'Key'    => $key

));


//code to tell user the file has been deleted.

echo "<h2 align=\"center\">Object $key successfully deleted.</h2>";


//Code to delete our bucket

$result = $client->deleteBucket(array(

    'Bucket' => $bucket

));


//code to create our webpage.

echo "<h2 align=\"center\">Bucket $bucket successfully deleted.</h2>";

echo "<div align = \"center\"><img src=\"https://acloud.guru/images/logo-small-optimised.png\"></img></div>";

echo "<h2 align=\"center\">Good Bye Cloud Gurus!</h2>";

?>


http://54.89.219.112/s3/createbucket.php




acloudguru.... buckets are created in my S3. 

Click on the Link.





cloudguru.txt file has been uploaded to the bucket in S3.

Click on the Link.




Click on the Link.







The bucket has been removed.










* Instance Metadata and User Data


curl http://169.254.169.254/latest/meta-data/ (*****)


How to get the public IP address (Exam *****)


[root@ip-172-31-80-161 s3]# curl http://169.254.169.254/latest/meta-data/

ami-id

ami-launch-index

ami-manifest-path

block-device-mapping/

hostname

iam/

instance-action

instance-id

instance-type

local-hostname

local-ipv4

mac

metrics/

network/

placement/

profile

public-hostname

public-ipv4

public-keys/

reservation-id

security-groups

[root@ip-172-31-80-161 s3]# curl http://169.254.169.254/latest/meta-data/public-ipv4

54.89.219.112[root@ip-172-31-80-161 s3]# 


[root@ip-172-31-80-161 s3]# yum install httpd php php-mysql


[root@ip-172-31-80-161 s3]# service httpd start

Starting httpd: 

[root@ip-172-31-80-161 s3]# yum install git


[root@ip-172-31-80-161 s3]# cd /var/www/html

[root@ip-172-31-80-161 html]# git clone https://github.com/acloudguru/metadata

Cloning into 'metadata'...

remote: Counting objects: 9, done.

remote: Total 9 (delta 0), reused 0 (delta 0), pack-reused 9

Unpacking objects: 100% (9/9), done.




[root@ip-172-31-80-161 html]# ls -l

total 1848

-rw-r--r-- 1 root root      62 Oct 13 00:04 composer.json

-rw-r--r-- 1 root root   12973 Oct 13 00:04 composer.lock

-rwxr-xr-x 1 root root 1852323 Oct 13 00:04 composer.phar

drwxr-xr-x 3 root root    4096 Oct 13 00:34 metadata

drwxr-xr-x 3 root root    4096 Oct 13 00:15 s3

-rw-r--r-- 1 root root      19 Oct 12 23:52 test.php

drwxr-xr-x 8 root root    4096 Oct 13 00:08 vendor

[root@ip-172-31-80-161 html]# cd metadata

[root@ip-172-31-80-161 metadata]# ls -l

total 8

-rw-r--r-- 1 root root 676 Oct 13 00:34 curlexample.php

-rw-r--r-- 1 root root  11 Oct 13 00:34 README.md

[root@ip-172-31-80-161 metadata]# vi curlexample.php


<?php

        // create curl resource

        $ch = curl_init();

        $publicip = "http://169.254.169.254/latest/meta-data/public-ipv4";


        // set url

        curl_setopt($ch, CURLOPT_URL, "$publicip");


        //return the transfer as a string

        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);


        // $output contains the output string

        $output = curl_exec($ch);


        // close curl resource to free up system resources

        curl_close($ch);


        // close curl resource to free up system resources

        curl_close($ch);


        //Get the public IP address

        echo "The public IP address for your EC2 instance is $output";

?>


Open a Web Browser
http://54.89.219.112/metadata/curlexample.php





[AWS Certificate] Developer - AWS CLI memo


AWS Command Line Interface



- Getting Started

- CLI Reference

- GitHub Project

- Community Forum






* Terminate Instance from Terminal


1178578-C02NW6G1G3QD:AWS_SSH changsoopark$ ssh ec2-user@IP Address -i EC2KeyPair.pem.txt 

The authenticity of host 'IP address (IP address)' can't be established.

ECDSA key fingerprint is SHA256:..........

Are you sure you want to continue connecting (yes/no)? yes 

Warning: Permanently added '54.175.217.183' (ECDSA) to the list of known hosts.


       __|  __|_  )

       _|  (     /   Amazon Linux AMI

      ___|\___|___|


https://aws.amazon.com/amazon-linux-ami/2017.09-release-notes/

No packages needed for security; 1 packages available

Run "sudo yum update" to apply all updates.

[ec2-user@ip-172-31-89-170 ~]$ sudo su

[root@ip-172-31-89-170 ec2-user]# aws s3 ls

Unable to locate credentials. You can configure credentials by running "aws configure".


==> can't access S3. 


[root@ip-172-31-89-170 ec2-user]# aws configure

AWS Access Key ID [None]: 


==> Open CSV file you downloaded and enter AWS Access Key ID and AWS Secret.

==> Enter region name


==> type aws s3 ls ==> will display list

==> aws s3 help ==> display all commands



cd ~ -> home directory

ls

cd .aws

ls

nano credentials ==> Access_key_id , secret_access_key



aws ec2 describe-instances ==> display all instances in JSON format


copy instance id of running instance


aws ec2 terminate-instances --instance-ids 'instance id'


==> terminated


When the access_key_id and secret_access_key are accidentally exposed to the public -> the resolution is to delete the user and re-create it





* Using Role instead of Access Key



------ Identity Access Management Roles Lab -------



- IAM - Create a Role   with S3 full access policy

*** Roles are global (*******) - no need to select a Region


- Create EC2 Instance : Assign above role to this instance

==> You can replace the Role of an existing instance

: Actions - Instance Settings - Attach/Replace IAM role


now aws s3 ls works


[root@ip-172-31-81-181 ec2-user]# aws s3 ls

[root@ip-172-31-81-181 ec2-user]#


CLI Commands - Developer Associate Exam


[ec2-user@ip-172-31-81-181 ~]$ sudo su

[root@ip-172-31-81-181 ec2-user]# aws s3 ls

[root@ip-172-31-81-181 ec2-user]# cd ~

[root@ip-172-31-81-181 ~]# ls

[root@ip-172-31-81-181 ~]# cd .aws

bash: cd: .aws: No such file or directory

[root@ip-172-31-81-181 ~]# aws configure

AWS Access Key ID [None]: 

AWS Secret Access Key [None]: 

Default region name [None]: us-east-1

Default output format [None]: 

[root@ip-172-31-81-181 ~]# cd .aws

[root@ip-172-31-81-181 .aws]# ls

config

[root@ip-172-31-81-181 .aws]# cat config

[default]

region = us-east-1

[root@ip-172-31-81-181 .aws]# 


==> can access aws without Access Key ID

Terminate the instance from Terminal

[root@ip-172-31-81-181 .aws]# aws ec2 terminate-instances --instance-ids i-0575b748b9ec9e3fa

{

    "TerminatingInstances": [

        {

            "InstanceId": "i-0575b748b9ec9e3fa", 

            "CurrentState": {

                "Code": 32, 

                "Name": "shutting-down"

            }, 

            "PreviousState": {

                "Code": 16, 

                "Name": "running"

            }

        }

    ]

}

[root@ip-172-31-81-181 .aws]# 

Broadcast message from root@ip-172-31-81-181

(unknown) at 0:02 ...


The system is going down for power off NOW!

Connection to 52.70.118.204 closed by remote host.

Connection to 52.70.118.204 closed.

1178578-C02NW6G1G3QD:AWS_SSH changsoopark$ 





==> Shutting down the instance in AWS console





============ CLI Commands For The Developer Exam


IAM - Create a Role - MyEC2Role - administrator access

Launch new instance - assign the role


ssh ec2-user.......

sudo su

aws configure (enter region only)


docs.aws.amazon.com/cli/latest/reference/ec2/index.html


(*****)

aws ec2 describe-instances

aws ec2 describe-images  - enter image id

aws ec2 run-instances

aws ec2 start-instances

(*****)



Do not confuse START-INSTANCES with RUN-INSTANCES

START-INSTANCES / STOP-INSTANCES - START AND STOP AN EXISTING INSTANCE

RUN-INSTANCES - CREATE A NEW INSTANCE


========================

-----S3 CLI & REGIONS


Launch new instance

Create S3 buckets (3 buckets)


Upload a file to one of above bucket

go to another bucket and upload other file.

go to another bucket and upload other file.


IAM - Create a new Role (S3 full access)


EC2 - public IP address


terminal

ssh ec2-user@....

sudo su

aws s3 ls - will not work

attach the role to the EC2 instance

attach/replace IAM role (WEB)

go back to terminal

aws s3 ls -> will display


aws s3 cp --recursive s3://bucket1_name /home/bucket2_name

ls


copy file to bucket






* Security Group


- Virtual Firewall

- 1 instance can have multiple security groups



chkconfig httpd on - Apache will start automatically on reboot


EC2 - Left Menu - Security Group - Select WebDMZ



Inbound Rules - Delete HTTP rule -> cannot access the public IP http://34.228.166.148

*****


Outbound - All traffic - Delete -> can still access the public IP address


Edit Inbound Rule -> automatically Edit Outbound Rule


Actions -> Networking - Change Security Group -> can select multiple security group


Tip

- All Inbound Traffic is Blocked By Default

- All Outbound Traffic is Allowed

- Changes to Security Groups take effect immediately

- You can have any number of EC2 instances within a security group.

- you can have multiple security groups attached to EC2 Instances

- Security Groups are STATEFUL (*****) (whereas Network Access Control Lists are STATELESS)

  : If you create an inbound rule allowing traffic in, that traffic is automatically allowed back out again

  : You cannot block specific IP addresses using Security Groups, instead use Network Access Control Lists (VPC section)

  

- You can specify allow rules but not deny rules. (*****)
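
For reference, the same allow-only model shows up in the CLI: you authorize (allow) rules, and there is no corresponding deny. A sketch with placeholder names:

aws ec2 create-security-group --group-name WebDMZ --description "web tier security group"

# allow inbound HTTP from anywhere; responses are allowed back out automatically (stateful)
aws ec2 authorize-security-group-ingress --group-name WebDMZ --protocol tcp --port 80 --cidr 0.0.0.0/0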






* Upgrading EBS Volume Types 1


Magnetic Storage


lsblk

[root@ip-172-31-19-244 ec2-user]# lsblk

NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

xvda    202:0    0   8G  0 disk 

└─xvda1 202:1    0   8G  0 part /

xvdb    202:16   0   8G  0 disk 

[root@ip-172-31-19-244 ec2-user]# mkfs -t ext4 /dev/xvdb

mke2fs 1.42.12 (29-Aug-2014)

Creating filesystem with 2097152 4k blocks and 524288 inodes

Filesystem UUID: 1a4f0040-89b5-4ac0-8345-15ceb7c868fb

Superblock backups stored on blocks: 

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632


Allocating group tables: done                            

Writing inode tables: done                            

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done 


[root@ip-172-31-19-244 ec2-user]# mkdir /changsoopark

[root@ip-172-31-19-244 ec2-user]# mount /dev/xvdb /changsoopark

[root@ip-172-31-19-244 ec2-user]# lsblk

NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

xvda    202:0    0   8G  0 disk 

└─xvda1 202:1    0   8G  0 part /

xvdb    202:16   0   8G  0 disk /changsoopark

[root@ip-172-31-19-244 ec2-user]# cd /changsoopark

[root@ip-172-31-19-244 changsoopark]# ls -l

total 16

drwx------ 2 root root 16384 Oct  5 00:05 lost+found

[root@ip-172-31-19-244 changsoopark]# nano test.html

[root@ip-172-31-19-244 changsoopark]# ls -l

total 20

drwx------ 2 root root 16384 Oct  5 00:05 lost+found

-rw-r--r-- 1 root root    19 Oct  5 00:07 test.html

[root@ip-172-31-19-244 changsoopark]# 


unmount the volume - umount /dev/xvdb


[root@ip-172-31-19-244 /]# cd /

[root@ip-172-31-19-244 /]# umount /dev/xvdb

[root@ip-172-31-19-244 /]# cd /changsoopark

[root@ip-172-31-19-244 changsoopark]# ls -l

total 0

[root@ip-172-31-19-244 changsoopark]# 


mount it again and check the folder


[root@ip-172-31-19-244 changsoopark]# cd /

[root@ip-172-31-19-244 /]# mount /dev/xvdb /changsoopark

[root@ip-172-31-19-244 /]# cd /changsoopark

[root@ip-172-31-19-244 changsoopark]# ls -l

total 20

drwx------ 2 root root 16384 Oct  5 00:05 lost+found

-rw-r--r-- 1 root root    19 Oct  5 00:07 test.html

[root@ip-172-31-19-244 changsoopark]# 


unmount it again


[root@ip-172-31-19-244 changsoopark]# cd /

[root@ip-172-31-19-244 /]# umount /dev/xvdb

[root@ip-172-31-19-244 /]# cd /changsoopark

[root@ip-172-31-19-244 changsoopark]# ls -l

total 0

[root@ip-172-31-19-244 changsoopark]# 


- aws.amazon.com : Detach Volume and Create Snapshot - then Create Volume from the snapshot and select the new Volume Type


Attach Volume - Select instance and Attach button --> Go to Console


[root@ip-172-31-19-244 changsoopark]# lsblk

NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT

xvda    202:0    0   8G  0 disk 

└─xvda1 202:1    0   8G  0 part /

xvdf    202:80   0   8G  0 disk               ====> New Volume (partition)

[root@ip-172-31-19-244 changsoopark]# file -s /dev/xvdf

/dev/xvdf: Linux rev 1.0 ext4 filesystem data, UUID=1a4f0040-89b5-4ac0-8345-15ceb7c868fb (extents) (large files) (huge files)

[root@ip-172-31-19-244 changsoopark]# mount /dev/xvdf /changsoopark

[root@ip-172-31-19-244 changsoopark]# cd /changsoopark

[root@ip-172-31-19-244 changsoopark]# ls -l

total 20

drwx------ 2 root root 16384 Oct  5 00:05 lost+found

-rw-r--r-- 1 root root    19 Oct  5 00:07 test.html







Create a volume, put MySQL data on the volume, mount, unmount, attach, detach, snapshot, remount

These steps can appear in the exam





* Upgrading EBS Volume Types 2



Delete the instance - then delete the volume and the snapshot separately


Exam Tips

- EBS Volumes can be changed on the fly (except for magnetic standard)

- Best practice to stop the EC2 instance and then change the volume

- You can change volume types by taking a snapshot and then using the snapshot to create a new volume

- If you change a volume on the fly you must wait for 6 hours before making another change

- You can scale EBS Volumes up only

- Volumes must be in the same AZ as the EC2 instances
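
Changing a volume on the fly can also be done from the CLI; a rough sketch (the volume ID is a placeholder):

aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --volume-type gp2 --size 16

aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0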






* EFS (Elastic File System) Lab



What is EFS


Amazon Elastic File System (Amazon EFS) is a file storage service for Amazon Elastic Compute Cloud (Amazon EC2) instances. Amazon EFS is easy to use and provides a simple interface that allows you to create and configure file systems quickly and easily. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it.


- Supports the Network File System version 4 (NFSv4) protocol

- You only pay for the storage you use (no pre-provisioning required)

- Can scale up to the petabytes

- Can support thousands of concurrent NFS connections

- Data is stored across multiple AZ's within a region

- Read After Write Consistency

- EFS is file system storage mounted by EC2 instances, whereas S3 is object based storage
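
A minimal sketch of mounting an EFS file system from an EC2 instance over NFSv4.1 (the file system ID and region are placeholders; the security groups must allow NFS on port 2049):

sudo yum install -y nfs-utils

sudo mkdir /mnt/efs

sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs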






aws.amazon.com

EFS - Create File system - Configure file system access

: VPC - An Amazon EFS file system is accessed by EC2 instances running inside one of your VPCs. Instances connect to a file system by using a network interface called a mount target. Each mount target has an IP address, which we assign automatically or you can specify.


: Create mount targets - Instances connect to a file system by using mount targets you create. We recommend creating a mount target in each of your VPC's Availability Zones so that EC2 instances across your VPC can access the file system.

==> AZ, Subnet, IP address, Security groups

- Tag and Create File System -> Done


Create New Instance

Step 1, Step 2 - Default

Step 3 - Default except Subnet => Select the subnet used when creating the EFS mount target above

Step 4 - Add Storage


Create another Instance - Select Load Balancer


VPC, Subnet


Define Load Balancer


==> Check EFS








Copy from alexa.design/standout

PDF file : 

GuideStandoutSkillFinal.pdf








Seven Qualities of Top-Performing Alexa Skills



Browse through the Alexa Skills Store, and you’ll see the innovations of our developer community on display. Our public catalog features more than 25,000 skills that enable a rich variety of scenarios including hands-free smart device control, on-demand content delivery, immersive adventure games, and more. You’ve created natural and engaging voice experiences that are delighting customers. And you’ve pushed the boundaries of what’s possible with voice to redefine how your customers interact with technology.



Now that you can earn money for eligible skills that drive the highest customer engagement, we know engagement is top of mind for many of you. To help you maximize the impact of your work, we analyzed our skill selection from the customers’ perspective. What makes a skill engaging for customers? And what keeps customers coming back over time? To find out, we examined the skills that see the highest consistent customer engagement. And we learned that these top performers share seven common qualities:



1. The skill makes a task faster and easier with voice

2. The skill has an intuitive and memorable name

3. The skill sets clear expectations on what it can do

4. The skill minimizes friction

5. The skill surprises and delights customers

6. The skill delivers fresh content

7. The skill is consistently reliable



In this guide, we will dive deeper into each quality and provide guidance on how you can incorporate it into your skill. We will also share exemplary skills that you can explore and model after. Leverage these insights to build standout skills that your customers will love.





1 The Skill Makes a Task Faster and Easier with Voice



When designing a new skill, make sure it has a clear customer benefit. Your skill should make a task faster and easier with voice. The skill should offer a more convenient user experience than existing methods, be it a light switch or a smartphone app.



Smart home skills, especially those that control multiple smart devices, are a great example of an existing experience made better with voice. They take a known workflow that involves multiple applications and simplify the steps into a single voice command, making the tasks both faster and easier. These skills offer a clear value to the customer.



When choosing your voice project, start with the purpose, or what customers want to accomplish. Then determine the capabilities of your skill and the benefits of using the skill over other options. Make sure your skill has a clear purpose before you start building. Skills that seamlessly integrate into a customer’s routine and provide value are especially popular.



Customers love The Dog Feeder skill because it helps simplify a daily task. Customers simply say, “Alexa, ask the dog if we fed her,” and Alexa shares when the dog last ate, giving families an easy way to manage a shared task. The skill addresses a need in the customers’ daily routine and provides value.



If you’re adapting an existing experience for voice, take a voice-first approach to designing and building your skill. In other words, avoid taking a visual experience or an app-first experience and simply adding voice to it. Instead, reimagine the interaction and figure out how to make it faster and easier with voice. Unless you offer an option that is twice as easy as what’s already available, customers don’t have an incentive to leave the UX they already know and adopt a new habit.





2 The Skill Has an Intuitive and Memorable Name



Once you’ve determined your skill’s purpose, give it a great name. Your skill’s name should help customers easily discover, understand, and remember your skill. If your skill name is longer and more difficult to say than a similar skill, you’ll risk losing customers—even if your skill offers more functionality. Remember, customers prefer voice because it’s our most natural form of interaction. So be sure to give your skill a name that’s natural to say and easy to grasp.



Take, for example, Daily Affirmation. The skill provides a new uplifting thought every day—just as the name suggests. For skills that deliver fresh content, specifying how often you’ll update the content tells the customers when to come back for more.

 

Even skills with more complex customer offerings can have a simple and memorable name.

The Magic Door is an interactive adventure game that takes customers through a magic door and into an enchanted forest. The name hints at many aspects of this sophisticated skill and is also easy to remember.


Once you’ve got an idea for your skill’s name, say the invocation name out loud, just as a customer would. See if it’s intuitive and easy to say. Let’s take the example of the Sleep and Relaxation Sounds skill. The customer will say something like:



You can see that the invocation name speaks to the value of the skill, flows within the context, and will be easy to remember at bedtime.


Beta testers (or even friends or colleagues) can also help grade the strength of your skill’s name. Ask them what they expect the skill to do based on the name alone. Use their responses to determine whether your skill name clearly articulates your skill’s capabilities and value. After your skill is published, read the customer reviews to identify any gaps between the skill name and the skill experience.



3 The Skill Sets Clear Expectations on What It Can Do



When customers first invoke your skill, aim to provide just the right amount of information so customers know how to move forward. Provide not enough information, and customers won’t know what to do. Provide too much, and customers will get overwhelmed and leave. Finding the right balance is key to enabling your customers to seamlessly interact with your skill.



Then, when your users come back for a second visit, offer a different, abbreviated welcome. Since you’ve already introduced yourself, you can dive right in and pick up where you left off, just like you would with another person. When we talk to each other, our first conversation and our tenth conversation are quite different. That’s because we grow more familiar with each other, and our conversations gain context from previous talks. The same should hold true for your skill’s interaction with your customers.



For every interaction, keep Alexa’s responses concise so that your users stay engaged and can easily follow along. Put your skill’s responses to the one-breath test. Read aloud what you’ve written at a conversational pace. If you can say it all in one breath, the length is likely good. If you need to take a breath, consider reducing the length. For a response that includes successive ideas such as steps in a task, read each idea separately. While the entire response may require more than one breath, make sure your response requires breaths between, not during, ideas.



Once you’ve designed your skill, test your skill to make sure it works as you intended. Watch beta testers and customers try to use your skill and see whether you’ve presented the right amount of information to successfully guide them through the interaction.



 Learn more: Voice Design Guide: What Alexa Says

Try: Set Clear Expectations Using Our Code Sample





4 The Skill Minimizes Friction


As you add capabilities to your skill, make sure you don’t introduce unnecessary pain points or friction. Think through the entire interaction flow, and ensure your customers will know how to navigate from one step to the next. Remove any ambiguity that may hinder your customers from moving forward and getting what they’re looking for.



One way to minimize friction is to only add account linking when you truly need it. Account linking provides authentication when you need to associate the identity of the Alexa user with a user in your system. It’s a useful way to collect information that is very difficult to accurately recognize via voice, like email addresses (which often contain homophones like “one” and “1”). But account linking can also introduce friction for customers when they enable a skill as it prevents the process from being completed seamlessly via voice. Therefore, it should only be used when necessary, specifically when the resulting customer value offsets the risk of friction.



If your skill simply needs to persist data between sessions, account linking is not strictly required. The userID attribute provided with the request will identify the same user across sessions unless the customer disables and re-enables your skill. 


Some information, like physical address, is now available via the permissions framework. As that framework grows, account-linking flows should be limited to authentication scenarios only, not personalization. If you use account linking in your skill, be sure to follow best practices to minimize friction and ensure a smooth customer experience.



Learn more : 10 Tips for Successfully Adding Account Linking to Your Alexa Skill




5 The Skill Surprises and Delights Customers


In mobile and web design, it’s important to provide a consistent customer experience every time. Layout, color schemes, and names always stay the same so users don’t have to relearn the UI with each visit. But with voice, it’s important to have variety. People may not mind scanning the same web page time after time, but no one wants to have the same conversation time and again.



You can introduce variety throughout your skill to keep the interaction fresh. Think of all the different ways Alexa can welcome your customers, or the many ways Alexa can say “OK” (think: “Got it,” “Thanks,” “Sounds good,” “Great,” and so on). You can use these opportunities to inject variety, color, and humor into your skill. You can even prepare clever responses to customers’ requests for features your skill doesn’t yet support. By seizing these opportunities, you can make your interactions feel more natural, conversational, and even memorable.



You can also build engagement over time by remembering what your users were doing last.

Storing data in Amazon DynamoDB allows you to add this memory and context to your skill.

Persistence allows you to pause games or guide users through a step-by-step process like creating a recipe, tackling a DIY project, or playing a game. For example, a game skill with memory enables customers to pause, come back, and pick up right where they left off.






6 The Skill Regularly Provides Fresh Content


As we’ve mentioned, customers expect variety in voice interactions. So it’s no surprise that skills that provide fresh content drive more regular usage over time. Fresh content gives customers a reason to return to your skill over time, and when they do, they are rewarded with something new.



This is especially true of flash briefing skills, which are built around the premise of delivering fresh content. When flash briefing skills don’t update as promised, customers tend to leave negative reviews.



However, the value of this quality doesn’t just apply to flash briefing skills; other skills should also get regular content updates. For example, fact skills and trivia skills that don’t evolve over time to offer new facts or questions don’t tend to see consistent engagement. Users may love the experience you’ve built, but if your skill never evolves beyond a set of limited choices, they won’t have reason to keep coming back.


The Jeopardy! skill is a model example of a skill that entices customers with fresh content. The skill serves up six new clues every weekday, giving fans reason to return five times a week.

When building your skill, establish a content workflow that enables you to quickly and easily add new content to your skill. One way to do this is to house your content in a database instead of hardcoding it into your skill to enable fast updates. Once you’ve set up a workflow, adhere to a schedule to make continued updates to your skill. Find ways to add fresh content and continue delighting your customers over time.



Try: Keep Your Customers Engaged with Dynamic Content





7 The Skill Is Consistently Reliable

Even the most compelling and delightful voice experience won’t gain traction if it isn’t available whenever customers ask. To ensure your skill is consistently reliable, configure a professional-grade backend for your skill.


Amazon Web Services offers several solutions that will help you improve the user experience and ensure dependability of your skill as it gains users and handles more intricate content. Try Amazon CloudFront to cache dynamic content and files that require heavy-lifting. This will improve your response time and provide better deliverability.

 

If you’ve built a top-notch skill, it will likely get noticed and highlighted in the Alexa Skills Store. So be sure your backend can support your skill’s moment in the spotlight. Your backend should be able to scale properly to ensure high availability during high-traffic scenarios. If you’re using Amazon DynamoDB, set your tables’ capacity for reads and writes per second to be much higher than your expected peak throughput. If your skill launches multiple AWS Lambda functions per skill invocation, check to see whether you are nearing the limits for function invocations. If you’re getting close, you can request a limit increase to ensure scalability. To set alarms for unforeseen scenarios, you can use Lambda’s built-in functionality to output logs to Amazon CloudWatch and trigger alarms based on the events in those logs.

 


 Once your skill is live, you can use Amazon QuickSight to visualize analytics you track in Amazon Redshift. You can see how your skill is performing, fix user experiences that don't resonate, and double down on what works to make your skill even more impactful.


AWS Promo Credits: If you incur AWS charges related to your skill, you can apply for AWS promotional credits. Once you’ve published a skill, apply to receive a $100 AWS promotional credit and an additional $100 per month in credit.

Apply now. 



Learn more: 5 Ways to Level Up Your Skill with AWS




Build Engaging Skills Your Customers Will Love


Whether you’re building a new skill or upgrading an existing skill, follow these tips to put your best skill forward. By building engaging voice experiences, you can reach and delight customers through tens of millions of devices with Alexa. And you can also enrich your skills over time to grow your expertise in voice design and evolve from a hobbyist to a professional.



It also pays to build highly engaging skills. Every month, developers can earn money for eligible skills that drive the highest customer engagement in seven eligible skill categories. 


Learn more and start building your next skill today.

 






Alexa Skills Kit


The Alexa Skills Kit is a collection of self-service APIs, tools, documentation, and code samples that makes it fast and easy for you to add skills to Alexa. With ASK, you can leverage Amazon’s knowledge and pioneering work in the field of voice design.



Additional Resources


Voice Design Guide

Documentation

Shortcut Start a Skill Now









Today I am going to create an Amazon EC2 instance (Amazon Linux), install the Apache web server on the instance, and create my public web page.


You can create your own as well. Just follow the steps below.


Refer to A Cloud Guru A Certified Developer - Associate lectures for more details.



Posts starting with [AWS Certificate] are notes on what I learned while preparing for the AWS Certified Developer - Associate exam.

This post summarizes how to create an EC2 instance and my own web page that can be accessed from anywhere.

If you follow along, you can get a free Linux server and a space for your personal homepage.




- Navigate to EC2 page. https://console.aws.amazon.com/ec2 And Click on Launch Instance button 





- Select AMI (Amazon Machine Image) as Amazon Linux




Amazon Machine Image


An Amazon Machine Image (AMI) is a special type of virtual appliance that is used to create a virtual machine within the Amazon Elastic Compute Cloud ("EC2"). It serves as the basic unit of deployment for services delivered using EC2.


Amazon Machine Images (AMI)

An Amazon Machine Image (AMI) provides the information required to launch an instance, which is a virtual server in the cloud. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. You can also launch instances from as many different AMIs as you need.

An AMI includes the following:

  • A template for the root volume for the instance (for example, an operating system, an application server, and applications)

  • Launch permissions that control which AWS accounts can use the AMI to launch instances

  • A block device mapping that specifies the volumes to attach to the instance when it's launched



- Select the default t2.micro  and Click on Next:Configure Instance Details button


Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload.
























- Set Defaults and Click on Next: Add Storage button



Subnet : 1 Subnet always maps to 1 Availability Zone (******) Exam


Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity. Since Spot instances are often available at a discount compared to On-Demand pricing, you can significantly reduce the cost of running your applications, grow your application’s compute capacity and throughput for the same budget, and enable new types of cloud computing applications.

(Console message seen during the lab: "There is no Spot capacity for instance type t2.micro in availability zone")

VPCs and Subnets

To get started with Amazon Virtual Private Cloud (Amazon VPC), you create a VPC and subnets. For a general overview of Amazon VPC, see What is Amazon VPC?.


VPC and Subnet Basics

A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. You can launch your AWS resources, such as Amazon EC2 instances, into your VPC.

When you create a VPC, you must specify a range of IPv4 addresses for the VPC in the form of a Classless Inter-Domain Routing (CIDR) block; for example, 10.0.0.0/16. This is the primary CIDR block for your VPC. For more information about CIDR notation, see RFC 4632.
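
As a rough CLI illustration of these basics (the VPC ID below is a placeholder returned by the first command):

aws ec2 create-vpc --cidr-block 10.0.0.0/16

aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24 --availability-zone us-east-1a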

























































- Set as default and Click on Next: Add Tags button



You can Add Amazon EBS Volume Types here.


Amazon EBS Volume Types

Amazon EBS provides the following volume types, which differ in performance characteristics and price, so that you can tailor your storage performance and cost to the needs of your applications. The volume types fall into two categories:

  • SSD-backed volumes optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS

  • HDD-backed volumes optimized for large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS




- Add Tags as much as you need and Click on Next: Configure Security Group button







- Enter Security group Name and Description

- Add HTTP and HTTPS Types

- Click on Review and Launch Button



Security Groups for Your VPC

A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet in your VPC could be assigned to a different set of security groups. If you don't specify a particular group at launch time, the instance is automatically assigned to the default security group for the VPC.

For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic. This section describes the basic things you need to know about security groups for your VPC and their rules.

You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC. For more information about the differences between security groups and network ACLs, see Comparison of Security Groups and Network ACLs.




- Review your configurations and Click on Launch button




- Select 'Create a new key pair' in dropdown menu

- Enter Name the Key pair name

- Click on Download Key Pair

- Click on Launch Instance




- Click on View Instance button




- Now your instance is running



You can see your instance details here.







Now I am going to access my instance and create my web page.

Open your Terminal (Mac) or Console window (Windows).

and navigate to the folder where the downloaded key pair file is.




The EC2KeyPair.pem.txt file is the one I have just downloaded.

MyEC2KeyPair.pem.txt is an older one I used before.


change permission of EC2KeyPair.pem.txt file


chmod 400 EC2KeyPair.pem.txt




Type ssh ec2-user@'your IPv4 Public IP' -i EC2KeyPair.pem.txt

Type yes

and then you can log in to your Amazon Linux Instance


Type sudo su 

You now have super-user permissions.




Type yum update -y to update the Operating System




Type yum install httpd -y to install Apache Server



Navigate to the web root directory


cd /var/www/html



There is no file in the folder now.


I am going to create my web page now.


Type nano index.html (or vi index.html)


I have created the web page as below to display my blog.


<html>

<h1> iframe - Changsoo's Blog - </h1>


<iframe id="blog"

    title="Changsoo's Blog"

    width="100%"

    height="100%"

    src="http://coronasdk.tistory.com">

</iframe>    

</html>



Now I can see the index.html file in the folder.

I will start my Apache server.


service httpd start




Now enter 34.228.166.148 in the URL bar of your browser and you can see the page below.






You can also enter the Public DNS (IPv4) name in your browser to get the page.


http://ec2-34-228-166-148.compute-1.amazonaws.com/




Now I have my Amazon Linux server (EC2 instance) and public web page.
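
The same setup can also be automated by passing a bootstrap (user data) script at launch time. A minimal sketch, assuming an Amazon Linux AMI (the index.html content is just an illustration):

#!/bin/bash
# update the OS and install Apache
yum update -y
yum install httpd -y
# start Apache now and on every reboot
chkconfig httpd on
service httpd start
# drop a simple home page into the web root
echo "<html><h1>Hello from my EC2 web server</h1></html>" > /var/www/html/index.html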






Termination Protection is turned off by default; you must turn it on yourself.


If you want to terminate the instance then


1. Action - Instance Settings - Change Termination Protection



2. Click on Yes, Enable button.




3. Actions - Instance State - Terminate




On an EBS-backed instance, the default action is for the root EBS volume to be deleted when the instance is terminated.

EBS Root Volumes of your DEFAULT AMI's cannot be encrypted.

You can also use a third-party tool (such as BitLocker) to encrypt the root volume, or this can be done when creating AMIs (lab to follow) in the AWS console or using the API.
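
For example, a rough CLI sketch of making an encrypted copy of a root-volume snapshot (the snapshot ID and region are placeholders); the encrypted snapshot can then be used to create an encrypted volume or AMI:

aws ec2 copy-snapshot --source-region us-east-1 --source-snapshot-id snap-0123456789abcdef0 --encrypted --description "encrypted copy of root volume snapshot"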




EC2 (Elastic Compute Cloud)



What is EC2?


- Provides resizable compute capacity in the Cloud

- Designed to make web-scale cloud computing easier

- A true virtual computing environment

- Launch instances with a variety of operating systems

- Run as many or as few systems as you desire




Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.

Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use. Amazon EC2 provides developers the tools to build failure resilient applications and isolate them from common failure scenarios.


* EC2 Options (***)


On Demand Instances - Pay for compute capacity by the hour with no long-term commitments or upfront payments

With On-Demand instances, you pay for compute capacity by the hour with no long-term commitments or upfront payments. You can increase or decrease your compute capacity depending on the demands of your application and only pay the specified hourly rate for the instances you use. 

On-Demand instances are recommended for:

  • Users that prefer the low cost and flexibility of Amazon EC2 without any up-front payment or long-term commitment
  • Applications with short-term, spiky, or unpredictable workloads that cannot be interrupted
  • Applications being developed or tested on Amazon EC2 for the first time


Reserved Instances - Provide you with a significant discount (up to 75%) compared to On-Demand Instance pricing

Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing. In addition, when Reserved Instances are assigned to a specific Availability Zone, they provide a capacity reservation, giving you additional confidence in your ability to launch instances when you need them.

For applications that have steady state or predictable usage, Reserved Instances can provide significant savings compared to using On-Demand instances. See How to Purchase Reserved Instances for more information.

Reserved Instances are recommended for:

  • Applications with steady state usage
  • Applications that may require reserved capacity
  • Customers that can commit to using EC2 over a 1 or 3 year term to reduce their total computing costs

Spot Instances - Purchase compute capacity with no upfront commitment and at hourly rates usually lower than the On-Demand rate

Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity for up to 90% off the On-Demand price. Learn More.

Spot instances are recommended for:

  • Applications that have flexible start and end times
  • Applications that are only feasible at very low compute prices
  • Users with urgent computing needs for large amounts of additional capacity
- Remember with spot instances;
: If you terminate the instance, you pay for the hour
: If AWS terminates the spot instance, you get the hour it was terminated in for free


Dedicated Hosts Instances


A Dedicated Host is a physical EC2 server dedicated for your use. Dedicated Hosts can help you reduce costs by allowing you to use your existing server-bound software licenses, including Windows Server, SQL Server, and SUSE Linux Enterprise Server (subject to your license terms), and can also help you meet compliance requirements. Learn more.

  • Can be purchased On-Demand (hourly).
  • Can be purchased as a Reservation for up to 70% off the On-Demand price.


* EC2 Instance Types (*****)


- General Purpose

T2 : Low Cost EC2 Instances with Burstable Performance.

      T2 instances are Burstable Performance Instances that provide a baseline level of CPU performance with the ability to burst above the baseline. The baseline performance and ability to burst are governed by CPU Credits. Each T2 instance receives CPU Credits continuously at a set rate depending on the instance size.  T2 instances accrue CPU Credits when they are idle, and use CPU credits when they are active.  T2 instances are a good choice for workloads that don’t use the full CPU often or consistently, but occasionally need to burst (e.g. web servers, developer environments and databases). For more information see Burstable Performance Instances.


M4 : M4 instances are the latest generation of General Purpose Instances. This family provides a balance of compute, memory, and network resources, and it is a good choice for many applications.


M3 : This family includes the M3 instance types and provides a balance of compute, memory, and network resources, and it is a good choice for many applications.


- Compute Optimized

C4 : Highest Compute Performance on Amazon EC2.

       C4 instances are the latest generation of Compute-optimized instances, featuring the highest performing processors and the lowest price/compute performance in EC2.


C3 Features:

  • High Frequency Intel Xeon E5-2680 v2 (Ivy Bridge) Processors
  • Support for Enhanced Networking
  • Support for clustering
  • SSD-backed instance storage


- Memory Optimized


X1 : X1 instances are optimized for large-scale, enterprise-class, in-memory applications and have the lowest price per GiB of RAM among Amazon EC2 instance types.


R4 : R4 instances are optimized for memory-intensive applications and offer better price per GiB of RAM than R3.


R3 : R3 instances are optimized for memory-intensive applications and offer lower price per GiB of RAM.


- Accelerated Computing


P2 : P2 instances are intended for general-purpose GPU compute applications.


G3 : G3 instances are optimized for graphics-intensive applications.


F1 : F1 instances offer customizable hardware acceleration with field programmable gate arrays (FPGAs).



- Storage Optimized


I3 : High I/O Instances

This family includes the High Storage Instances that provide Non-Volatile Memory Express (NVMe) SSD backed instance storage optimized for low latency, very high random I/O performance, high sequential read throughput and provide high IOPS at a low cost.


D2 : D2 instances feature up to 48 TB of HDD-based local storage, deliver high disk throughput, and offer the lowest price per disk throughput performance on Amazon EC2.






Prerequisite concept


What is EBS?


Amazon Elastic Block Store (EBS)


Amazon Elastic Block Store is an AWS block storage system that is best used for storing persistent data. Often incorrectly referred to as Elastic Block Storage, Amazon EBS provides highly available block level storage volumes for use with Amazon EC2 instances.



* Amazon EBS Volume Types


- General Purpose SSD (GP2)

- Provisioned IOPS SSD (IO1)

- Throughput Optimized HDD (ST1)

- Cold HDD (SC1)

- Magnetic (Standard) : can boot OS, Lowest cost per gigabyte




- EBS Consists of;

: SSD, General Purpose - GP2 - (Up to 10,000 IOPS)

: SSD, Provisioned IOPS - IO1 - (More than 10,000 IOPS)

: HDD, Throughput Optimized - ST1 - frequently accessed workloads

: HDD, Cold - SC1 - less frequently accessed data.

: HDD, Magnetic - Standard - cheap, infrequently accessed storage


- You cannot mount 1 EBS volume to multiple EC2 instances, instead use EFS.
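
A rough CLI sketch of creating a GP2 volume in the same AZ as the instance and attaching it (the IDs and AZ are placeholders):

aws ec2 create-volume --volume-type gp2 --size 8 --availability-zone us-east-1a

aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf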


* IOPS 


Input/output operations per second (IOPS, pronounced eye-ops) is an input/output performance measurement used to characterize computer storage devices like hard disk drives (HDD), solid state drives (SSD), and storage area networks (SAN). Frequently mischaracterized as a 'benchmark', IOPS numbers published by storage device manufacturers do not relate to real-world application performance.[1][2]


IOPS (Input/Output Operations Per Second) is a performance measurement unit used to benchmark computer storage devices such as HDDs, SSDs, and SANs. IOPS is usually measured with benchmark programs such as Intel's Iometer.

IOPS figures differ depending on the benchmark program. Specifically, they vary with random vs. sequential access, the number of threads and the queue depth used by the benchmark, the data block size, the ratio of read to write commands, and many other variables. Commonly reported figures include overall IOPS, Random Access Read IOPS, Random Access Write IOPS, Sequential Access Read IOPS, and Sequential Access Write IOPS.


* SSD 


An SSD is a device that stores information using semiconductors. Compared to a hard disk drive it is faster, has less mechanical delay and a lower failure rate, produces less heat and noise, and can be made smaller and lighter.

SSD is an abbreviation of Solid State Drive. It behaves much like a hard disk drive (HDD), but unlike the mechanical HDD it stores information in semiconductors. Because it uses random access, data can be read and written at high speed with no seek time, and mechanical delays and failures are far less frequent. Data is also not damaged by external shocks; heat, noise, and power consumption are low; and the drive can be made small and light.

SSDs use either flash-based non-volatile NAND flash memory or RAM-based volatile DRAM. The flash type is slower than the RAM type but faster than an HDD, and because the memory is non-volatile, data is not lost even during a sudden power outage. The DRAM type offers very fast access but has drawbacks in form factor, price, and volatility. For these reasons, flash-memory-based SSDs, which store data safely, are the ones mainly used.

With the development of large-capacity SSDs, they can now also be used in laptop and desktop PCs.

[Naver Encyclopedia] SSD [Solid State Drive] (Doosan Encyclopedia)


AWS AMI (Amazon Machine Images)

An Amazon Machine Image (AMI) provides the information required to launch an instance, which is a virtual server in the cloud. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. You can also launch instances from as many different AMIs as you need.




* Instance Store vs. Amazon EBS


I’m not sure whether to store the data associated with my Amazon EC2 instance in instance store or in an attached Amazon Elastic Block Store (Amazon EBS) volume. Which option is best for me?

Some Amazon EC2 instance types come with a form of directly attached, block-device storage known as the instance store. The instance store is ideal for temporary storage, because the data stored in instance store volumes is not persistent through instance stops, terminations, or hardware failures. You can find more detailed information about the instance store at Amazon EC2 Instance Store.

For data you want to retain longer-term, or if you need to encrypt the data, we recommend using EBS volumes instead. EBS volumes preserve their data through instance stops and terminations, can be easily backed up with EBS snapshots, can be removed from instances and reattached to another, and support full-volume encryption. For more detailed information about EBS volumes, see Features of Amazon EBS.


* Instance Store 

Physically attached to the host computer

Type and amount differs by instance type

Data dependent upon instance lifecycle

Instance store data persists if:

- The OS in the instance is rebooted

- The instance is restarted


Instance store data is lost when:

- An underlying instance drive fails

- An EBS-backed instance is stopped

- The instance is terminated

Virtual Private Cloud

VPC Networking

Elastic Load Balance


* Amazon EBS


Persistent block level storage volumes

Magnetic

General Purpose(SSD)

Provisioned IOPS(SSD)

data independent of instance lifecycle




