
I finished the Exam Prep: AWS Certified Solutions Architect - Associate course on Coursera and took the practice test.

Out of 31 questions, I scored 64.51% on the first attempt.

On the second attempt, 90.32%.

Only on the third attempt did I get 100%.

I should go back over each question carefully to understand why my answers were right or wrong.

 

Benchmark Assessment (to pass: 80% or higher)

1st try: Grade received 64.51% - "Try again once you are ready"

2nd try: Grade received 90.32%

3rd try: Grade received 100%

 

1.

Question 1

A company's application allows users to upload image files to an Amazon S3 bucket. These files are accessed frequently for the first 30 days. After 30 days, these files are rarely accessed, but need to be durably stored and available immediately upon request. A solutions architect is tasked with configuring a lifecycle policy that minimizes the overall cost while meeting the application requirements. Which action will accomplish this?

4.1 Identify cost-effective storage solutions

1 / 1 point

 

Configure a lifecycle policy to move the files to S3 Glacier after 30 days.

 

Configure a lifecycle policy to move the files to S3 Glacier Deep Archive after 30 days.

 

Configure a lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.

 

Configure a lifecycle policy to move the files to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.

 

Answer: option 3

Correct

Correct. Using a lifecycle policy to move data to S3 Standard-IA satisfies all application requirements and provides the lowest-cost option. To learn more about S3 Standard-IA, see: Amazon S3 Storage Classes

Glacier cannot satisfy the "available immediately upon request" requirement.
S3 One Zone-IA cannot satisfy the "durably stored" requirement.
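
As a rough boto3 sketch (not from the course) of what this lifecycle rule could look like, with a hypothetical bucket name:

```python
# Sketch only: transition all objects to S3 Standard-IA 30 days after creation.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-image-uploads",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-standard-ia-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                ],
            }
        ]
    },
)
```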

 

 

2.

Question 2

A company needs to implement a secure data encryption solution to meet regulatory requirements. The solution must provide security and durability in generating, storing, and controlling cryptographic data keys. Which action should be taken to provide the MOST secure solution?

3.3 Select appropriate data security options

1 / 1 point

 

Use AWS Key Management Service (AWS KMS) to generate AWS KMS keys and data keys. Use AWS KMS key policies to control access to the KMS keys.

 

Use AWS Key Management Service (AWS KMS) to generate cryptographic keys and import the keys to AWS Certificate Manager. Use IAM policies to control access to the keys.

 

Use a third-party solution from AWS Marketplace to generate the cryptographic keys and store them on encrypted instance store volumes. Use IAM policies to control access to the encryption key APIs.

 

Use OpenSSL to generate the cryptographic keys and upload the keys to an Amazon S3 bucket with encryption enabled. Apply AWS Key Management Service (AWS KMS) key policies to control access to the keys.

 

Answer: option 1

Correct

Correct. AWS KMS with customer controlled KMS keys meets all the requirements. To learn more about AWS KMS, see: AWS Key Management Service

With AWS KMS you can easily create and manage encryption keys and control their use across a wide range of AWS services and applications.
The other options are less secure than using the integrated AWS KMS service.
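
A minimal boto3 sketch of the KMS flow described above (the key description is illustrative only):

```python
# Sketch only: create a customer managed KMS key, then generate a data key
# under it for encrypting application data.
import boto3

kms = boto3.client("kms")

key = kms.create_key(Description="example key for regulated data")
key_id = key["KeyMetadata"]["KeyId"]

# The plaintext data key encrypts data locally and is discarded after use;
# the encrypted copy is stored alongside the data for later Decrypt calls.
data_key = kms.generate_data_key(KeyId=key_id, KeySpec="AES_256")
plaintext_key = data_key["Plaintext"]
encrypted_key = data_key["CiphertextBlob"]
```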

 

 

3.

Question 3

A startup company is looking for a solution to cost-effectively run and access microservices without the operational overhead of managing infrastructure. The solution needs to be able to scale quickly to accommodate rapid changes in the volume of requests and protect against common DDoS attacks. What is the MOST cost-effective solution that meets these requirements?

4.2 Identify cost-effective compute and database services

0 / 1 point

 

Run the microservices in containers using AWS Elastic Beanstalk.

 

Run the microservices in AWS Lambda behind an Amazon API Gateway.

 

Run the microservices on Amazon EC2 instances in an Auto Scaling group.

 

Run the microservices in containers using Amazon Elastic Container Service (Amazon ECS) backed by EC2 instances.

Incorrect

Incorrect. Amazon ECS is a highly scalable, fast, container management service that you can use to run, stop, and manage Docker containers on a cluster. However, you must manage the underlying EC2 instances unless you use AWS Fargate. Also, cluster scaling might not be fast enough to handle rapid changes in request volume. To learn more about Amazon ECS, see: What is Amazon Elastic Container Service?
I thought ECS also handled DDoS attacks, but apparently not.
Given the phrase "without the operational overhead of managing infrastructure", AWS Lambda looks like the right answer.
Microservices are an architectural and organizational approach to software development in which software is composed of small, independent services that communicate over well-defined APIs, each owned by a small, self-contained team.

Answer: option 2 (got it right on the second attempt)
Correct

Correct. Lambda is a compute service that you can use to run code without provisioning or managing servers. Lambda runs code only when needed. It is a cost-effective solution because there is no charge for idle resources. To learn more about Lambda, see: What is AWS Lambda?
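
For reference, a microservice behind API Gateway can be as small as a single Lambda handler. A minimal sketch, assuming the standard API Gateway proxy event format:

```python
import json

# Sketch only: a tiny Lambda-based microservice behind API Gateway.
def lambda_handler(event, context):
    path = event.get("path", "/")  # populated by the API Gateway proxy integration
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello from {path}"}),
    }
```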

 

4.

Question 4

A solutions architect needs to design a secure environment for AWS resources that are being deployed to Amazon EC2 instances in a VPC. The solution should support a three-tier architecture consisting of web servers, application servers, and a database cluster. The VPC needs to allow resources in the web tier to be accessible from the internet with only the HTTPS protocol. Which combination of actions would meet these requirements? (Select TWO.)

3.2 Design secure application tiers

1 / 1 point

 

Attach Amazon API Gateway to the VPC. Create private subnets for the web, application, and database tiers.
With private subnets there is no access from the internet, so the web tier must be in public subnets.

 

Attach an internet gateway to the VPC. Create public subnets for the web tier. Create private subnets for the application and database tiers.

Correct

Correct. Only the web tier needs to be in public subnets. The application and database tiers should be in private subnets. To learn more about internet gateways, public subnets, and private subnets, see: VPCs and subnets

 

Attach a virtual private gateway to the VPC. Create public subnets for the web and application tiers. Create private subnets for the database tier.
The application tier does not need to be public; making it public risks exposing business logic.

 

Create a web server security group that allows all traffic from the internet. Create an application server security group that allows requests from only the Amazon API Gateway on the application port. Create a database cluster security group that allows TCP connections from the application security group on the database port only.
The web server must allow only HTTPS, so allowing all traffic does not satisfy the requirement.

 

Create a web server security group that allows HTTPS requests from the internet. Create an application server security group that allows requests from the web security group only. Create a database cluster security group that allows TCP connections from the application security group on the database port only.

 

Answer: options 2 and 5

 

Correct

Correct. Putting the web tier in public subnets allows for greater access to the resource while protecting it from traffic on unrequired ports. Restricting traffic to the application and database tiers helps protect them from accidental and malicious access. It also helps ensure that each tier is accessed only through secure communication with the previous tier. To learn more about securing traffic in a VPC, see: Security groups for your VPC
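
A boto3 sketch of the security-group half of the answer (all group IDs and the application port are hypothetical):

```python
# Sketch only: web tier accepts HTTPS from the internet; app tier accepts
# traffic only from the web tier's security group.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0web0000000000000",  # hypothetical web tier SG
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

ec2.authorize_security_group_ingress(
    GroupId="sg-0app0000000000000",  # hypothetical app tier SG
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,  # assumed app port
        "UserIdGroupPairs": [{"GroupId": "sg-0web0000000000000"}],
    }],
)
```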

 

 

5.

Question 5

A solutions architect has been given a large number of video files to upload to an Amazon S3 bucket. The file sizes are 100–500 MB. The solutions architect also wants to easily resume failed upload attempts. How should the solutions architect perform the uploads in the LEAST amount of time?

2.2 Select high-performing and scalable storage solutions for a workload

1 / 1 point

 

Split each file into 5-MB parts. Upload the individual parts normally and use S3 multipart upload to merge the parts into a complete object.

 

Using the AWS CLI, copy individual objects into the S3 bucket with the aws s3 cp command.
The AWS CLI automatically performs multipart uploads.

 

From the Amazon S3 console, select the S3 bucket. Choose Upload, and drag and drop items into the bucket.

 

Upload the files with SFTP and the AWS Transfer Family.

 

Answer: option 2

Correct

Correct. It is a best practice to use aws s3 commands (such as aws s3 cp) for multipart uploads and downloads. These aws s3 commands automatically perform multipart uploading and downloading based on the file size. To learn more about using the AWS CLI to perform multipart uploads, see: How do I use the AWS CLI to perform a multipart upload of a file to Amazon S3?
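
The boto3 equivalent behaves the same way as `aws s3 cp`: `upload_file` switches to multipart automatically once a file crosses the threshold. A sketch with hypothetical file and bucket names:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files larger than the threshold are uploaded in parallel 8 MB parts,
# and individual parts are retried on failure.
config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,
    multipart_chunksize=8 * 1024 * 1024,
)

s3.upload_file(
    "video-001.mp4",          # hypothetical local file
    "example-video-bucket",   # hypothetical bucket
    "videos/video-001.mp4",
    Config=config,
)
```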

6.

Question 6

A gaming company is experiencing exponential growth. On multiple occasions, customers have been unable to access resources. To keep up with the increased demand, Management is considering deploying a cloud-based solution. The company is looking for a solution that can match the on-premises resilience of multiple data centers, and is robust enough to withstand the increased growth activity. Which configuration should a Solutions Architect implement to deliver the desired results?

1.2 Design highly available and/or fault-tolerant architectures

1 / 1 point

 

A VPC configured with an ELB Application Load Balancer targeting an EC2 Auto Scaling group consisting of Amazon EC2 instances in one Availability Zone → A single AZ cannot guarantee fault tolerance.

 

Multiple Amazon EC2 instances configured within peered VPCs across two Availability Zones

 

A VPC configured with an ELB Network Load Balancer targeting an EC2 Auto Scaling group consisting of Amazon EC2 instances spanning two Availability Zones
A Network Load Balancer can handle a very large volume of requests; it load balances at the transport layer (TCP/UDP, Layer 4).
A Network LB can handle traffic bursts, retain the source IP of the client, and use a fixed IP for the life of the load balancer.

 

A VPC configured with an ELB Application Load Balancer targeting an EC2 Auto Scaling group consisting of Amazon EC2 instances spanning two AWS Regions
→ An Application Load Balancer balances at the application layer (HTTP/HTTPS) and offers path-based routing.

 

Answer: option 3

 

Correct

Correct. The Network Load Balancer can handle millions of requests per second, while maintaining ultra-low latency. Combined with an Auto Scaling group, the Network Load Balancer can handle volatile traffic patterns. Setting the Auto Scaling group targets across multiple Availability Zones will make this highly available. To learn more about automatic scaling, see: Configure an Application Load Balancer or Network Load Balancer using the Amazon EC2 Auto Scaling console

 

 

7.

Question 7

A Solutions Architect must secure the network traffic for two applications running on separate Amazon EC2 instances in the same subnet. The applications are called Application A and Application B. Application A requires that inbound HTTP requests be allowed and all other inbound traffic be blocked. Application B requires that inbound HTTPS traffic be allowed and all other inbound traffic be blocked, including HTTP traffic. What should the Solutions Architect use to meet these requirements?

3.2 Design secure application tiers

0 / 1 point

 

Configure the access with network access control lists (network ACLs).
Network ACLs operate at the subnet level. Because both EC2 instances are in the same subnet, they cannot be used here.

 

Configure the access with security groups. → Is this the answer?
Security groups are applied per EC2 instance. They support only allow rules; deny rules cannot be configured.
A security group acts as a virtual firewall, controlling the traffic that is allowed to reach and leave the resources that it is associated with. For example, after you associate a security group with an EC2 instance, it controls the inbound and outbound traffic for the instance.

 

Configure the network connectivity with VPC peering.
VPC peering routes traffic between VPCs using private addresses (IPv4 and IPv6).

 

Configure the network connectivity with route tables. → My second-attempt answer - wrong
The route table contains existing routes with targets other than a network interface, Gateway Load Balancer endpoint, or the default local route. The route table contains existing routes to CIDR blocks outside of the ranges in your VPC. Route propagation is enabled for the route table.

 

 

Incorrect

Incorrect. Though network ACLs can allow and block traffic, they operate at the subnet boundary. They use one set of rules for all traffic that enters or leaves a particular subnet. Because the EC2 instances for both applications are in the same subnet, they would use the same network ACL. However, the question requires different security requirements for each application. To learn more about securing traffic as it enters or leaves a subnet, see: Network ACLs

Incorrect

Incorrect. A route table contains a set of rules, called routes, that are used to determine where network traffic from your subnet or gateway is directed. It does not provide any ability to block traffic as requested for applications that are in the same subnet. To learn more about routing in Amazon VPC, see: Route tables for your VPC

On the second attempt, option 4 was also wrong.

Answer: option 2
Correct

Correct. A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. They support allow rules only, and they block all other traffic if a matching rule is not found. Security groups are applied specifically at the instance level, so different instances in the same subnet can have different rules applied to them. To learn more about securing traffic at the EC2 instance boundary, see: Security groups for your VPC

 

 

8.

Question 8

A data processing facility wants to move a group of Microsoft Windows servers to the AWS Cloud. These servers require access to a shared file system that can integrate with the facility's existing Active Directory infrastructure for file and folder permissions. The solution needs to provide seamless support for shared files with AWS and on-premises servers and allow the environment to be highly available. The chosen solution should provide added security by supporting encryption at rest and in transit. Which storage solution would meet these requirements?

4.1 Identify cost-effective storage solutions

0 / 1 point

 

An Amazon S3 File Gateway joined to the existing Active Directory domain

 

An Amazon FSx for the Windows File Server file system joined to the existing Active Directory domain
FSx for Windows File Server has very strong Windows compatibility.

 

An Amazon Elastic File System (Amazon EFS) file system joined to an AWS Managed Microsoft AD domain
EFS is Linux-based.

 

An Amazon S3 bucket mounted on Amazon EC2 instances in multiple Availability Zones running Windows Server

 

Incorrect

Incorrect. Amazon EFS is a scalable, elastic file system for Linux based workloads. It is not supported for the Windows based instances. To learn more about Amazon EFS, see: What is Amazon Elastic File System?

 

Answer: option 2 (got it right on the second attempt)
Correct

Correct. Amazon FSx provides a fully managed native Microsoft Windows file system so you can easily move your Windows-based applications that require file storage to AWS. With Amazon FSx, there are no upfront hardware or software costs. You pay for only the resources used, with no minimum commitments, setup costs, or additional fees. To learn more about Amazon FSx, see: What is FSx for Windows File Server? To learn more about Using Microsoft Windows file shares, see: Using Microsoft Windows file shares

 

9.

Question 9

A Solutions Architect notices an abnormal amount of network traffic coming from an Amazon EC2 instance. The traffic is determined to be malicious and the destination needs to be determined. What tool can the Solutions Architect use to identify the destination of the malicious network traffic?

3.2 Design secure application tiers

1 / 1 point

 

Enable AWS CloudTrail and filter the logs.

 

Enable VPC Flow Logs and filter the logs.

 

Consult the AWS Personal Health Dashboard.

 

Filter the logs from Amazon CloudWatch.

 

Answer: option 2

Correct

Correct. VPC Flow Logs is a feature that you can use to capture information about the IP traffic going to and from network interfaces in a VPC. To learn more about flow log basics, see: VPC Flow Logs

 

 

10.

Question 10

A company is deploying an environment for a new data processing application. This application will be frequently accessed by 20 different departments across the globe seeking to run analytics. The company plans to charge each department for the cost of that department's access. Which solution will meet these requirements with the LEAST effort?

2.2 Select high-performing and scalable storage solutions for a workload

1 / 1 point

 

Amazon Aurora with global databases. Each department will query a database in a different Region, and the Region is tagged in the billing console.

 

PostgreSQL on Amazon RDS, with read replicas for each department. Each department will query the read replica tagged for their team in the billing console.

 

Amazon Redshift, with clusters set up for each department. Each department will query the cluster tagged for their team in the billing console.

 

Amazon Athena with workgroups set up for each department. Each department will query via the workgroup tagged for their team in the billing console.

 

Answer: option 4

 

Correct

Correct. Amazon Athena can query data in Amazon S3, and workgroups are purpose-built for cost allocation. For more information about Amazon Athena workgroups, see: Using Workgroups to Control Query Access and Costs

 

 

11.

Question 11

A company is migrating its on-premises application to Amazon Web Services and refactoring its design. The design will consist of frontend Amazon EC2 instances that receive requests, backend EC2 instances that process the requests, and a message queuing service to address decoupling the application. The Solutions Architect has been informed that a key aspect of the application is that requests are processed in the order in which they are received. Which AWS service should the Solutions Architect use to decouple the application?

1.3 Design decoupling mechanisms using AWS services

1 / 1 point

 

Amazon Simple Queue Service (Amazon SQS) standard queue

 

Amazon Simple Notification Service (Amazon SNS)

 

Amazon Simple Queue Service (Amazon SQS) FIFO queue

 

Amazon Kinesis

 

Answer: option 3

 

Correct

Correct. Amazon SQS FIFO (First In First Out) queues process messages in the order they are received. To learn more about Amazon SQS queue types, see: Amazon SQS features

 

 

12.

Question 12

An API receives a high volume of sensor data. The data is written to a queue before being processed to produce trend analysis and forecasting reports. With the current architecture, some data records are being received and processed more than once. How can a solutions architect modify the architecture to ensure that duplicate records are not processed?

1.3 Design decoupling mechanisms using AWS services

1 / 1 point

 

Configure the API to send the records to Amazon Kinesis Data Streams.

 

Configure the API to send the records to Amazon Kinesis Data Firehose.

 

Configure the API to send the records to Amazon Simple Notification Service (Amazon SNS).

 

Configure the API to send the records to an Amazon Simple Queue Service (Amazon SQS) FIFO queue.

 

Answer: option 4

 

Correct

Correct: The FIFO queue improves on and complements the standard queue. The most important features of this queue type are FIFO (First-In-First-Out) delivery and exactly-once processing. The order that messages are sent and received in is strictly preserved. A message is delivered once, and remains available until a consumer processes and deletes it. Duplicates are not introduced into the FIFO queue. To learn more about Amazon SQS and FIFO queues, see: Message ordering
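
A boto3 sketch of this pattern (queue and message-group names are made up):

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in .fifo; content-based deduplication drops
# duplicate message bodies sent within the 5-minute deduplication window.
queue = sqs.create_queue(
    QueueName="sensor-data.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
    },
)

sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"sensor_id": 42, "reading": 17.3}',
    MessageGroupId="sensor-42",  # ordering is preserved within a message group
)
```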

 

 

13.

Question 13

After reviewing the cost optimization checks in AWS Trusted Advisor, a team finds that it has 10,000 Amazon Elastic Block Store (Amazon EBS) snapshots in its account that are more than 30 days old. The team has determined that it needs to implement better governance for the lifecycle of its resources. Which actions should the team take to automate the lifecycle management of the EBS snapshots with the LEAST effort? (Select TWO.)

4.1 Identify cost-effective storage solutions

0 / 1 point

 

Create and schedule a backup plan with AWS Backup. → This looks like one of the answers.
With AWS Backup you can centralize and automate data protection across AWS services and hybrid workloads. AWS Backup is a fully managed service that makes large-scale, policy-based data protection simple and cost-effective.
Correct

Correct. The team wants to automate the lifecycle management of EBS snapshots. AWS Backup is a centralized backup service that automates backup processes for application data across AWS services in the AWS Cloud. It is designed to help you meet business and regulatory backup compliance requirements. AWS Backup provides a central place where you can configure and audit the AWS resources that you want to back up. You can also automate backup scheduling, set retention policies, and monitor all recent backup and restore activity. To learn more, see: What is AWS Backup?

 

 

Copy the EBS snapshots to Amazon S3, and then create lifecycle configurations in the S3 bucket.
There is a simpler way.

This should not be selected

Incorrect. Though this solution meets the technical requirement, it does not meet the requirement for the least effort. To copy EBS snapshots and set up lifecycle policies on the S3 bucket, the team would need to provide manual effort or create scripts that would need to be hosted somewhere. To learn more, see: Copy an Amazon EBS snapshot

 

Use Amazon Data Lifecycle Manager (Amazon DLM).

Correct

Correct. With Amazon DLM, you can manage the lifecycle of your AWS resources through lifecycle policies. Lifecycle policies automate operations on specified resources. The team requires lifecycle management for EBS snapshots, and Amazon DLM supports EBS volumes and snapshots. To learn more about Amazon DLM, see: Amazon Data Lifecycle Manager

 

Use a scheduled event in Amazon EventBridge (Amazon CloudWatch Events) and invoke AWS Step Functions to manage the snapshots.
Amazon EventBridge is a serverless event bus that makes it easy to build event-driven applications at scale, using events generated by your own applications, integrated Software-as-a-Service (SaaS) applications, and AWS services.

 

Schedule and run backups in AWS Systems Manager.

Answer: options 1 and 3 (got it right on the second attempt)
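
A boto3 sketch of an Amazon DLM policy for the snapshot half of the answer (the role ARN and target tag are hypothetical):

```python
# Sketch only: snapshot tagged EBS volumes daily and keep the 7 most recent.
import boto3

dlm = boto3.client("dlm")

dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily EBS snapshots, keep 7",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],  # hypothetical tag
        "Schedules": [{
            "Name": "daily",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7},
        }],
    },
)
```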

 

 

14.

Question 14

A company is deploying a production portal application on AWS. The database tier runs on a MySQL database. The company requires a highly available database solution that maximizes ease of management. How can the company meet these requirements?

1.2 Design highly available and/or fault-tolerant architectures

1 / 1 point

 

Deploy the database on multiple Amazon EC2 instances that are backed by Amazon Elastic Block Store (Amazon EBS) across multiple Availability Zones. Schedule periodic EBS snapshots.

 

Use Amazon RDS with a Multi-AZ deployment. Schedule periodic database snapshots.

 

Use Amazon RDS with a Single-AZ deployment. Schedule periodic database snapshots.

 

Use Amazon DynamoDB with an Amazon DynamoDB Accelerator (DAX) cluster. Create periodic on-demand backups.

 

Answer: option 2

 

Correct

Correct. Amazon RDS with a Multi-AZ deployment provides automatic failover with minimum manual intervention and it is highly available. To learn more, see: High availability (Multi-AZ) for Amazon RDS

 

 

15.

Question 15

A company requires operating system permissions on a relational database server. What should a solutions architect suggest as a configuration for a highly available database architecture?

1.2 Design highly available and/or fault-tolerant architectures

0 / 1 point

 

Multiple Amazon EC2 instances in a database replication configuration that uses two Availability Zones → Is this the answer?

 

A database installed on a single Amazon EC2 instance in an Availability Zone

 

Amazon RDS in a Multi-AZ configuration with Provisioned IOPS

 

Multiple Amazon EC2 instances in a replication configuration that uses a placement group

Incorrect

Incorrect. This solution meets the requirement for high availability, but it does not provide access to the operating system. To learn more about when to use EC2 instances, see: Amazon EC2 for Oracle - When to choose Amazon EC2

Answer: option 1 (got it right on the second attempt)
Correct

Correct. EC2 instances allow access to the operating system. In addition, spanning two Availability Zones helps ensure high availability. To learn more about best practices for databases, see: Web Application Hosting in the AWS Cloud

 

 

16.

Question 16

A company has developed an application that processes photos and videos. When users upload photos and videos, a job processes the files. The job can take up to 1 hour to process long videos. The company is using Amazon EC2 On-Demand Instances to run web servers and processing jobs. The web layer and the processing layer have instances that run in an Auto Scaling group behind an Application Load Balancer. During peak hours, users report that the application is slow and that the application does not process some requests at all. During evening hours, the systems are idle. What should a solutions architect do so that the application will process all jobs in the MOST cost-effective manner?

2.1 Identify elastic and scalable compute solutions for a workload

1 / 1 point

 

Use a larger instance size in the Auto Scaling groups of the web layer and the processing layer.

 

Use Spot Instances for the Auto Scaling groups of the web layer and the processing layer.

 

Use an Amazon Simple Queue Service (Amazon SQS) standard queue between the web layer and the processing layer. Use a custom queue metric to scale the Auto Scaling group in the processing layer.

 

Use AWS Lambda functions instead of EC2 instances and Auto Scaling groups. Increase the service quota so that sufficient concurrent functions can run at the same time.

 

Answer: option 3

 

Correct

Correct. The Auto Scaling group can scale in response to changes in system load in an SQS queue. Even if the Auto Scaling group is at its maximum capacity, jobs will be saved in the queue and they will be processed when compute resources become available. To learn more, see: Scaling based on Amazon SQS
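
One way to implement the "custom queue metric" part is to publish a backlog-per-instance number to CloudWatch that a target-tracking scaling policy can act on. A sketch with hypothetical queue, Auto Scaling group, and metric names:

```python
# Sketch only: compute SQS backlog per instance and publish it to CloudWatch.
import boto3

sqs = boto3.client("sqs")
cloudwatch = boto3.client("cloudwatch")
autoscaling = boto3.client("autoscaling")

queue_url = sqs.get_queue_url(QueueName="processing-jobs")["QueueUrl"]
backlog = int(
    sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["ApproximateNumberOfMessages"]
    )["Attributes"]["ApproximateNumberOfMessages"]
)

group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=["processing-layer-asg"]
)["AutoScalingGroups"][0]
instances = max(len(group["Instances"]), 1)

cloudwatch.put_metric_data(
    Namespace="Custom/Processing",
    MetricData=[{
        "MetricName": "BacklogPerInstance",
        "Value": backlog / instances,
    }],
)
```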

 

 

17.

Question 17

A company is developing an application that runs on Amazon EC2 instances in a private subnet. The EC2 instances use a NAT gateway to access the internet. A solutions architect must provide a secure option so that developers can log in to the instances. Which solution meets these requirements MOST cost-effectively?

4.3 Design cost-optimized network architectures

0 / 1 point

 

Configure AWS Systems Manager Session Manager for the EC2 instances to enable login. → Is this the answer?

 

Configure a bastion host in a public subnet to log in to the EC2 instances in a private subnet.

 

Use the existing NAT gateway to log in to the EC2 instances in a private subnet. → My second-attempt answer
A NAT gateway is a Network Address Translation (NAT) service. You can use a NAT gateway so that instances in a private subnet can connect to services outside your VPC but external services cannot initiate a connection with those instances.

 

Configure AWS Site-to-Site VPN to log in directly to the EC2 instances.

Incorrect

Incorrect. Bastion hosts solve the functional requirement, but they increase costs because one or more instances would be required. To learn more, see: AWS Quick Starts - Linux Bastion Hosts on AWS

Incorrect

Incorrect. You cannot use NAT gateways to log in to EC2 instances because NAT gateways are gateways that handle only outbound traffic. To learn more, see: NAT gateways

On the second attempt, option 3 was wrong.

Answer: option 1
Correct

Correct. Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys. There is no additional charge for accessing EC2 instances by using Session Manager. To learn more about Session Manager, see: AWS Systems Manager Session Manager To learn more about Session Manager pricing, see: AWS Systems Manager pricing

 

 

18.

Question 18

A company is using an Amazon S3 bucket to store archived data for audits. The company needs long-term storage for the data. The data is rarely accessed and must be available for retrieval the next business day. After a quarterly review, the company wants to reduce the storage cost for the S3 bucket. A solutions architect must recommend the most cost-effective solution to store the archived data. Which solution will meet these requirements?

4.1 Identify cost-effective storage solutions

1 / 1 point

 

Store the data on an Amazon EC2 instance that uses Amazon Elastic Block Store (Amazon EBS).

 

Use an S3 Lifecycle configuration rule to move the data to S3 Standard-Infrequent Access (S3 Standard-IA).

 

Store the data in S3 Glacier.

 

Store the data in another S3 bucket in a different AWS Region.

 

Answer: option 3

Correct

Correct. Out of these options, S3 Glacier is the most cost-effective solution. S3 Glacier is a good fit for archival data that does not need to be frequently accessed or modified. For more information about S3 Glacier, see: What Is S3 Glacier? To learn more about retrieval options for S3 Glacier, see: Retrieving S3 Glacier Archives

 

 

19.

Question 19

A solutions architect must create a disaster recovery (DR) solution for a company's business-critical applications. The DR site must reside in a different AWS Region than the primary site. The solution requires a recovery point objective (RPO) in seconds and a recovery time objective (RTO) in minutes. The solution also requires the deployment of a completely functional, but scaled-down version of the applications. Which DR strategy will meet these requirements?

1.2 Design highly available and/or fault-tolerant architectures

0 / 1 point

 

Multi-site active-active

 

Backup and restore

 

Pilot light

 

Warm standby → This looks like the answer.

 

Incorrect

Incorrect. Multi-site active-active has an RPO and an RTO in real time and is considered a hot standby. Though this strategy will meet the RPO and RTO requirements, it is not a scaled down version of the applications (a stated requirement), and it will be more expensive than other options. To learn more about various DR strategies, see: Plan for Disaster Recovery (DR) - Use defined recovery strategies to meet the recovery objectives

 

Answer: option 4 (got it right on the second attempt)
Correct

Correct. With warm standby (fully working at low capacity), all components run at a low capacity. The RPO is in seconds, and the RTO is in minutes. To learn more about various DR strategies, see: Plan for Disaster Recovery (DR) - Use defined recovery strategies to meet the recovery objectives

 

 

20.

Question 20

A financial services company is migrating its multi-tier web application to AWS. The application architecture consists of a fleet of web servers, application servers, and an Oracle database. The company must have full control over the database's underlying operating system, and the database must be highly available. Which approach should a solutions architect use for the database tier to meet these requirements?

1.2 Design highly available and/or fault-tolerant architectures

1 / 1 point

 

Migrate the database to an Amazon RDS for Oracle DB Single-AZ DB instance.

 

Migrate the database to an Amazon RDS for Oracle Multi-AZ DB instance.

 

Migrate to Amazon EC2 instances in two Availability Zones. Install Oracle Database and configure the instances to operate as a cluster.

 

Migrate to Amazon EC2 instances in a single Availability Zone. Install Oracle Database and configure the instances to operate as a cluster.

 

Answer: option 3

 

Correct

Correct. This solution provides the company with full control of the database operating system. The solution also provides high availability. To learn more about when Amazon EC2 is a good option, see: Amazon EC2 for Oracle

 

 

21.

Question 21

A hospital client is migrating from another cloud provider to AWS and is looking for advice on modernizing as they migrate. They have containerized applications that run on tablets. During spikes caused by increases in patient visits, the communications from the applications to the central database occasionally fail. As a result, the client currently has the applications try to write to the central database once, and if that write fails, it writes to a dedicated application PostgreSQL database run by the hospital IT team on premises. Each of those PostgreSQL databases then sends batch information on to the central database. The client is asking for recommendations for migrating or refactoring the database write process to decrease operational overhead. What should the solutions architect recommend? (Select TWO.)

4.2 Identify cost-effective compute and database services

1 / 1 point

 

Migrate the containerized applications to AWS Fargate.

 

Migrate the local databases to Aurora Serverless for PostgreSQL.

Correct

Correct. PostgreSQL has been turned into a kind of messaging service (holding all of the data until the batch job runs), and that is better handled by a queuing service. However, moving to Aurora Serverless will still decrease overhead for running the database, and it is a valid answer. To learn more, see: Amazon Aurora Serverless

 

Migrate the PostgreSQL databases to an RDS instance with a read replica that replaces each of the local databases.

 

Refactor the applications to use Amazon Simple Queue Service and eliminate the local PostgreSQL databases.

Correct

Correct. The client can decouple the messaging aspect of the application and remove the databases (which are effectively a workaround messaging service). To learn more about, see: How Amazon SQS works

 

Refactor the central database to add an Amazon ElastiCache lazy loading cache in front of the database.

 

Answer: options 2 and 4

 

 

22.

Question 22

A large international company has a management account in AWS Organizations, and over 50 individual accounts for each country they operate in. Each of the country accounts has at least four VPCs set up for functional divisions. There is a high amount of trust across the accounts, and communication among all of the VPCs should be allowed. Each of the individual VPCs throughout the entire global organization will need to access an account and VPC that provide shared services to all the other accounts. How can the member accounts access the shared services VPC with the LEAST operational overhead?

2.3 Select high-performing networking solutions for a workload

1 / 1 point

 

Create an Application Load Balancer, with a target of the private IP address of the shared services VPC. Add a Certification Authority Authorization (CAA) record for the Application Load Balancer to Amazon Route 53. Point all requests for shared services in the routing tables of the VPCs to the CAA record.

 

Create a peering connection between each of the VPCs and the shared services VPC.

 

Create a Network Load Balancer across the Availability Zones in the shared services VPC. Create service consumer roles in IAM, and set endpoint connection acceptance to automatically accept. Create consumer endpoints in each division VPC and point to the Network Load Balancer.

 

Create a VPN connection between each of the VPCs and the shared service VPC.

 

Answer: option 3

 

Correct

Correct. This solution provides the general flow of how an AWS PrivateLink connection is established. To learn more, see: Interface VPC endpoints (AWS PrivateLink)

 

 

23.

Question 23

A SysOps administrator is looking into a way to automate the deployment of new SSL/TLS certificates to their web servers, and a centralized way to track and manage the deployed certificates. Which AWS service can the administrator use to fulfill the above-mentioned needs?

3.2 Design secure application tiers

1 / 1 point

 

AWS Key Management Service

 

AWS Certificate Manager

 

Configure AWS Systems Manager Run Command

 

AWS Systems Manager Parameter Store

 

Answer: option 2

 

Correct

Correct. AWS Certificate Manager (ACM) is a service that you can use to provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal, connected resources. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the internet, in addition to resources on private networks. ACM reduces the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates. To learn more, see: AWS Certificate Manager
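
A boto3 sketch of requesting and centrally tracking certificates with ACM (the domain names are placeholders):

```python
# Sketch only: request a public certificate with DNS validation, then list
# certificates for a centralized view of what is issued or pending.
import boto3

acm = boto3.client("acm")

cert = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
    SubjectAlternativeNames=["example.com"],
)

for summary in acm.list_certificates()["CertificateSummaryList"]:
    print(summary["DomainName"], summary["CertificateArn"])
```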

 

 

24.

Question 24

A client has created a website (www.example.com), with an Application Load Balancer in a public subnet. The load balancer targets an application hosted on EC2 instances in private subnets, which rely on an Amazon Aurora PostgreSQL-Compatible Edition DB instance in separate private subnets. When testing the website, static content from the EC2 instance is displayed, but any content driven by database queries fails to load. What should the administrator check?

1.1 Design a multi-tier architecture solution

0 / 1 point

 

Check the Amazon Route 53 CNAME record to ensure that www.example.com points to the top-level domain (example.com).

 

Check the network access control list (network ACL) of the application subnets for an outbound allow statement.
→ My second-attempt answer - wrong

 

Check that the route table for the database subnets includes a default route to the internet gateway for the VPC.
→ My first-attempt answer - wrong

 

Check if the security group of the database subnet allows inbound traffic from the EC2 subnets. → Is this the answer?

Incorrect

Incorrect. The database should be interacting with the EC2 subnet, which should return information to the Application Load Balancer. Providing access to the internet gateway could make the database subnet public instead of private. To learn more, see: Internet gateways

Incorrect

Incorrect. The EC2 instances are able to return information to the Application Load Balancer and out to the browser, so the network ACL is not blocking anything at the VPC level. To learn more, see: Security Groups and Network Access Control Lists (Network ACLs) (BP5)

On the second attempt, option 2 was also wrong.

Answer: option 4
Correct. The database security group is likely not configured for inbound traffic from the EC2 layer. To learn more, see: Security Groups and Network Access Control Lists (Network ACLs) (BP5)

 

 

25.

Question 25

A solutions architect has been tasked with designing a three-tier application for deployment in AWS. There will be a web tier as the frontend, a backend application tier for data processing, and a database that will be hosted on Amazon RDS. The application frontend will be distributed to end users by CloudFront. Following best practices, it is decided that there should not be any point-to-point dependencies between the different layers of the infrastructure. How many Elastic Load Balancing load balancers should the architect deploy in the architecture so that this application's design follows best practices?

1.1 Design a multi-tier architecture solution

0 / 1 point

 

Zero. Use the load balancer that is automatically enabled when CloudFront is deployed.

 

One load balancer. This load balancer would be between the web tier and the application tier.

 

Two load balancers. One public load balancer would direct traffic to the web tier, and one private load balancer would direct traffic to the application tier. → Is this the answer?

 

Three load balancers. One public load balancer would direct traffic to the web tier. One private load balancer would direct traffic to the application tier. Another private load balancer would direct traffic to the Amazon RDS database.

Incorrect

Incorrect. Though deploying one load balancer is better than deploying none, the application might experience reliability issues between the tiers that do not have a load balancer in place. To learn more about best practices for deploying a web hosting environment, see: An AWS Cloud architecture for web hosting

 

Answer: option 3 (got it right on the second attempt)
Correct

Correct. One load balancer will be deployed between CloudFront and the web tier. Another load balancer would be deployed between the web tier and the application tier. To learn more about best practices for deploying a web hosting environment, see: An AWS Cloud architecture for web hosting

 

 

26.

Question 26

The CIO of a company is concerned about the security of the account root user of their AWS account. How can the CIO ensure that the AWS account follows the best practices for logging in securely? (Select TWO.)

3.1 Design secure access to AWS resources

1 / 1 point

 

Enforce the use of an access key ID and secret access key for the account root user logins.

 

Enforce the use of MFA for the account root user logins.

Correct

Correct. For increased security, we recommend that you configure multi-factor authentication (MFA) to help protect your AWS resources. You can enable MFA for IAM users or the AWS account root user. When you enable MFA for the root user, it affects only the root user credentials. IAM users in the account are distinct identities with their own credentials, and each identity has its own MFA configuration. To learn more about using MFA for accounts in AWS Organizations, see: Best practices for member accounts To learn more about enabling MFA for the account root user, see: Using multi-factor authentication (MFA) in AWS

 

Enforce the account root user to assume a role to access the root user's own resources.

 

Enforce the use of complex passwords for member account root user logins.

Correct

Correct. The security of your account root user depends on the strength of its password. We recommend that you use a password that is long, complex, and not used anywhere else. To learn more about using complex passwords for accounts in AWS Organizations, see: Best practices for member accounts

 

Enforce the deletion of the AWS account so that it cannot be used.

 

Answer: options 2 and 4

 

 

27.

Question 27

A Solutions Architect has been tasked with creating a data store location that will be able to handle different file formats of unknown sizes. It is required that this data be highly available and protected from being accidentally deleted. What solution meets the requirements and is the MOST cost-effective?

3.3 Select appropriate data security options

1 / 1 point

 

Deploy an Amazon S3 bucket and enable Cross-Region Replication.

 

Deploy an Amazon DynamoDB table and enable Global Tables.

 

Deploy an Amazon S3 bucket and enable Object Versioning.

 

Deploy a database using Amazon RDS and configure a Multi-AZ deployment for that database.

 

Answer: option 3

 

Correct

Correct. Versioning-enabled buckets can help you recover objects from accidental deletion or overwrite. For example, if you delete an object, Amazon S3 inserts a delete marker instead of removing the object permanently. The delete marker becomes the current object version. If you overwrite an object, it results in a new object version in the bucket. A user can always restore the previous version. To learn more about object versioning, see: Using versioning in S3 buckets

 

 

28.

Question 28

An organization is planning to migrate from an on-premises data center to an AWS environment that spans multiple Availability Zones. A migration engineer has been tasked to plan how to transfer the home directories and other shared network attached storage from the data center to AWS. The migration design should support connections from multiple Amazon EC2 instances running the Linux operating system to this common shared storage platform. What storage option best fits their design?

1.4 Choose appropriate resilient storage

1 / 1 point

 

Transfer the files to Amazon S3 and access that data from the EC2 instances.

 

Transfer the files to the EC2 Instance Store attached to the EC2 instances.

 

Transfer the files to Amazon EFS and mount that file system to the EC2 instances.

 

Transfer the files to one EBS volume and mount that volume to the EC2 instances.

 

Answer: option 3

 

Correct

Correct. Amazon EFS is well suited to support a broad spectrum of use cases from home directories to business-critical applications. Amazon EFS is designed to provide massively parallel shared access to thousands of EC2 instances. To learn more, see: Amazon Elastic File System

 

 

29.

Question 29

A company is designing a human genome application using multiple Amazon EC2 Linux instances. The high performance computing (HPC) application requires low latency and high performance network communication between the instances. Which solution provides the LOWEST latency between the instances?

1.1 Design a multi-tier architecture solution

0 / 1 point

 

Launch the EC2 instances in a cluster placement group. → Is this the answer?

 

Launch the EC2 instances in a spread placement group.

 

Launch the EC2 instances in an Auto Scaling group spanning multiple Regions.

 

Launch the EC2 instances in an Auto Scaling group spanning multiple Availability Zones within a Region.

 

Incorrect

Incorrect. Because a HPC platform would require packing instances close together, instances that span Availability Zones would not provide the lowest network latency. To learn more, see: What is Amazon EC2 Auto Scaling?

 

Answer: option 1 (got it right on the second attempt)
Correct

Correct. In an EC2 cluster placement group, instances are physically close together inside an Availability Zone. With this strategy, workloads can achieve the low-latency network performance that is needed for tightly coupled, node-to-node communication that is typical of HPC applications. To learn more, see: Placement groups

 

 

30.

Question 30

A company has a web application in which customers can log in and read near-real-time status updates about their orders. The company hosts the application on Amazon EC2 instances and is expanding the application from the eu-west-1 Region into the us-east-1 Region. The application relies on an Amazon RDS for MySQL database. The company already has provisioned the necessary EC2 instances in the new Region. The company needs to deploy the application in us-east-1 with the least possible change to the application. The company also needs fast, local database queries in both Regions. Which modification of the database will meet these requirements?

2.4 Choose high-performing database solutions for a workload.

1 / 1 point

 

Migrate the RDS database to an Amazon Aurora global database. Add a secondary cluster in us-east-1.

 

Migrate the RDS database to an Amazon Aurora Serverless database. Configure automatic scaling in us-east-1.

 

Migrate the RDS database to an Amazon DynamoDB table. Create global tables for us-east-1.

 

Place an accelerator from AWS Global Accelerator in front of the RDS database to reduce the network latency from us-east-1.

 

Answer: option 1

 

Correct

Correct. This solution meets the requirements, and is designed for a replica latency of approximately 1 second. By using the global database, users receive a low-read latency, with writes occurring on the primary database cluster in eu-west-1. The current application can continue to use existing code that points to the local Aurora instance. To learn more, see: Using Amazon Aurora global databases

 

 

31.

Question 31

A company is building a distributed application, which will send sensor IoT data (including weather conditions and wind speed from wind turbines) to the AWS Cloud for further processing. Because the nature of the data is spiky, the application needs to be able to scale. It is important to store the streaming data in a key-value database and then send it over to a centralized data lake, where it can be transformed, analyzed, and combined with diverse organizational datasets to derive meaningful insights and make predictions. Which combination of solutions would accomplish the business need with minimal operational overhead? (Select TWO.)

2.4 Choose high-performing database solutions for a workload.

0 / 1 point

 

Configure Amazon Kinesis to deliver streaming data to an Amazon S3 data lake. → Is this the answer?
Correct

Correct. Kinesis can send streaming data to an Amazon S3 data lake. To learn more, see: Build a data lake using Amazon Kinesis Data Streams for Amazon DynamoDB and Apache Hudi

 

Use Amazon DocumentDB to store IoT sensor data.

 

Write AWS Lambda functions to deliver streaming data to Amazon S3.

 

Use Amazon DynamoDB to store the IoT sensor data, and enable DynamoDB Streams.

Correct

Correct. DynamoDB Streams can be used to start Lambda functions. Lambda could then be used to send an Amazon SNS notification, or take corrective measures if the threshold is breached. To learn more about DynamoDB Streams, see: Change Data Capture for DynamoDB Streams To learn more about use cases for DynamoDB Streams, see: DynamoDB Streams Use Cases and Design Patterns

 

Use Amazon Kinesis to deliver streaming data to Amazon Redshift, and enable Amazon Redshift Spectrum.

This should not be selected

Incorrect. Amazon Kinesis Data Firehose can deliver streaming data to Amazon Redshift. However, S3 is better choice for a data lake where data can be transformed, analyzed, and combined with diverse organizational datasets to derive meaningful insights and make predictions.

 

Answer: options 1 and 4 (got it right on the second attempt)
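
A boto3 sketch of the DynamoDB side of the answer: a key-value table for the spiky sensor writes, with a stream enabled so the change records can flow on toward the data lake (table and attribute names are made up):

```python
# Sketch only: on-demand DynamoDB table with a stream of new item images.
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="wind-turbine-telemetry",
    AttributeDefinitions=[
        {"AttributeName": "turbine_id", "AttributeType": "S"},
        {"AttributeName": "ts", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "turbine_id", "KeyType": "HASH"},
        {"AttributeName": "ts", "KeyType": "RANGE"},
    ],
    BillingMode="PAY_PER_REQUEST",  # scales with spiky ingest, no capacity planning
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_IMAGE",
    },
)
```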

 


Amazon Web Services (AWS) Certified - 4 Certifications!

Videos, labs & practice exams - AWS Certified (Solutions Architect, Developer, SysOps Administrator, Cloud Practitioner)

Created by BackSpace Academy

Section 2: AWS Certified Developer Associate Quiz

 

The ec2-net-utils package is installed on Amazon Linux instances only. See: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/MultipleIP.html

 

Question 1:

After you assign a secondary private IPv4 address to your instance, you need to configure the operating system on your instance to recognize the secondary private IP address. If you are using an Ubuntu Linux instance, the ec2-net-utils package can take care of this step for you.

  • ​True
  • ​False v

 

A queue name can have up to 80 characters. The following characters are accepted: alphanumeric characters, hyphens (-), and underscores (_). Queue names are case-sensitive.

 

Question 2:

Test-queue and test-queue are different queue names.

  • ​True v
  • ​False

 

See: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/MultipleIP.html

 

Question 3:

You want to host multiple secure websites on a single EC2 server using multiple SSL certificates. How can you achieve this?

  • ​Assign a secondary private IPv4 address to a second attached network interface. Associate an elastic IP address with the private IPv4 address.  v
  • ​Assign a secondary public IPv4 address to a second attached network interface. Associate an elastic IP address with the public IPv4 address.
  • ​Assign a secondary private IPv6 address to a second attached network interface. Associate an elastic IP address with the private IPv6 address.
  • ​Assign a secondary public IPv6 address to a second attached network interface. Associate an elastic IP address with the public IPv6 address.
  • ​None of the above

 

The application must be packaged using the CLI package command and deployed using the CLI deploy command. See: https://docs.aws.amazon.com/cli/latest/reference/cloudformation/package.html https://docs.aws.amazon.com/cli/latest/reference/cloudformation/deploy/index.html https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-quick-start.html

 

Question 4:

You have created and tested an example Lambda Node.js application from the AWS Serverless Application Repository. What are the next steps? 

  • ​Cloudformation CLI package and deploy commands  v
  • ​Cloudformation CLI create-stack and update-stack commands 
  • ​Cloudformation CLI package-stack and deploy-stack commands 
  • ​Cloudformation CLI create-change-set and deploy-change-set 

 

See: https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html

 

Question 5:

You have an Amazon Kinesis stream that is consuming records from an application. The Kinesis stream consists of multiple shards. A Lambda function will process the records from the Kinesis stream. In what order will the records be processed?

  • ​In the exact order it is received by the kinesis Stream on a FIFO basis 
  • ​In the exact order it is received by each Kinesis shard on a FIFO basis. Order across shards is not guaranteed.  v
  • ​A standard kinesis stream does not have a guaranteed order. A FIFO kinesis stream will have the exact order it is received on a FIFO basis. 
  • ​A standard kinesis stream does not have a guaranteed order. A LIFO kinesis stream will have the exact order it is received on a LIFO basis. 
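
A sketch of a Lambda consumer, showing where per-shard ordering shows up (the standard Kinesis event format is assumed):

```python
import base64

# Sketch only: Lambda receives batches of records per shard; within a shard,
# sequence numbers increase in arrival order.
def lambda_handler(event, context):
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        seq = record["kinesis"]["sequenceNumber"]
        print(seq, payload)
```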

 

AWS Step Functions lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly. Using Step Functions, you can design and run workflows that stitch together services such as AWS Lambda and Amazon ECS into feature-rich applications. Workflows are made up of a series of steps, with the output of one step acting as input into the next. Application development is simpler and more intuitive using Step Functions, because it translates your workflow into a state machine diagram that is easy to understand, easy to explain to others, and easy to change.

 

Question 6:

You have an application that requires coordination between serverless and server-based distributed applications. You would like to implement this as a state machine. What AWS service would you use?

  • ​SQS and SNS 
  • ​AWS Step Functions  v
  • ​EC2 and SNS 
  • ​AWS Amplify 

 

Server-side encryption is about data encryption at rest—that is, Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects. See: http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html

 

Question 7:

You have enabled server side encryption on an S3 bucket. How do you decrypt objects?

  • ​The key will be located in the KMS
  • ​The key can be accessed from the IAM console.
  • ​S3 automatically decrypts objects when you download them.  v
  • ​None of the above
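
A boto3 sketch of the behaviour described above (bucket and key are placeholders):

```python
# Sketch only: with SSE-S3 enabled, downloads are transparently decrypted;
# no separate decrypt step is needed on the client.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-bucket",
    Key="report.txt",
    Body=b"secret data",
    ServerSideEncryption="AES256",  # SSE-S3
)

obj = s3.get_object(Bucket="example-bucket", Key="report.txt")
print(obj["Body"].read())  # already-decrypted bytes
```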

 

RDS does not support auto scaling or load balancers. Multi-AZ deployment only affects availability. Increasing the RDS instance size increases write and read capacity. Read replicas will increase the read capacity. Each read replica will have a different connection string. Route 53 can be used to route requests to different instances each time.

 

Question 8:

You would like to increase the capacity of an RDS application for read-heavy workloads. How would you do this?

  • ​Create an RDS Auto Scaling group and load balancer
  • ​Use Multi-AZ deployment
  • ​Increase the size of the RDS instance
  • ​Add read replicas with multiple connection strings and use Route 53 Multivalue Answer Routing.  v

 

Bucket policies can only be applied at the bucket level, not to individual objects. You can, however, change object permissions using access control lists (ACLs). See: http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html

 

Question 9:

How can you apply an S3 bucket policy to an object?

  • ​Use the CLI --grants option
  • ​Use the CLI --policy option
  • ​Use the CLI --permissions option
  • ​None of the above v

 

User data is specified by you at instance launch. Instance metadata is data about your instance. Because your instance metadata is available from your running instance, you do not need to use the Amazon EC2 console, SDK or the AWS CLI. See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html

 

Question 10:

You have a web application running on an ec2 instance that needs to know the IP address that it is running on. How can the application get this information?

  • ​Use Curl or Get command to http://169.254.169.254/latest/meta-data/   v
  • ​Use Curl or Get command to http://169.254.169.254/latest/user-data 
  • ​Use API/SDK command get-host-address 
  • ​Use API/SDK command get-host-ip 
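
A standard-library sketch of reading the metadata service from on-instance code; this uses the token-based IMDSv2 flow, which postdates the quiz's plain curl example:

```python
# Sketch only: fetch this instance's private IPv4 address via IMDSv2.
import urllib.request

token_req = urllib.request.Request(
    "http://169.254.169.254/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

ip_req = urllib.request.Request(
    "http://169.254.169.254/latest/meta-data/local-ipv4",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(ip_req).read().decode())
```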

 

New volumes are raw block devices, and you need to create a file system on them before you can mount and use them. See: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html

 

Question 11:

New EBS volumes are pre-formatted with a file system on them so you can easily mount and use them.

  • ​True
  • ​False  v

 

See: http://docs.aws.amazon.com/IAM/latest/UserGuide/console_account-alias.html

 

Question 12:

You have created an alias in IAM for your company called super-duper-co. What will be the login address for your IAM users?

 

You can configure health checks, which are used to monitor the health of the registered instances so that the load balancer can send requests only to the healthy instances. See: http://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html

 

Question 13:

You have an ELB with multiple EC2 instances registered. One of the instances is unhealthy and not receiving traffic. After the instance becomes healthy again you will need to:

  • ​Change the private IP address of the instance and register with ELB
  • ​Change the public IP address of the instance and register with ELB
  • ​Do nothing, the ELB will automatically direct traffic to the instance when it becomes healthy.  v
  • ​None of the above

 

Never store credentials in application code. Roles used to be the preferred option before the introduction of VPC S3 endpoints. See: https://aws.amazon.com/blogs/aws/new-vpc-endpoint-for-amazon-s3/

 

Question 14:

You have an application running on an EC2 instance inside a VPC that requires access to Amazon S3. What is the best solution?

  • ​Use AWS configure SDK command in your application to pass credentials via application code.
  • ​Create an IAM role for the EC2 instance
  • ​Create a VPC S3 endpoint   v
  • ​None of the above

 

A subnet can only be associated with one route table at a time, but you can associate multiple subnets with the same route table. See: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html

 

Question 15:

A VPC subnet can only be associated with one route table at a time, and you cannot associate multiple subnets with the same route table.

  • ​True
  • ​False  v

 

If you are writing code that uses other resources, such as a graphics library for image processing, or you want to use the AWS CLI instead of the console, you need to first create the Lambda function deployment package, and then use the console or the CLI to upload the package. See: https://docs.aws.amazon.com/lambda/latest/dg/nodejs-create-deployment-pkg.html

 

Question 16:

You have a Node.js Lambda function that relies upon an external graphics library. What is the best way to include the external graphics library without consuming excessive Lambda compute resources?

  • ​Install the libraries with NPM before creating the deployment package   v
  • ​Run an Arbitrary Executable script in AWS Lambda to install the libraries 
  • ​Create a second lambda function to install the libraries 
  • ​Upload library to S3 and import when lambda function executed. 

 

There is no such thing as an API deployment package or an API snapshot. Stages are used to roll out updated APIs; each stage has its own URL as follows: https://api-id.execute-api.region.amazonaws.com/stage

 

Question 17:

You have created a JavaScript browser application that calls an API running on Amazon API Gateway. You have made a breaking change to your API and you want to minimise the impact on existing users of your application. You would like all users to be migrated over to the new API within one month. What can you do?

  • ​Create a new API and use the new URL in your updated JavaScript application. Delete  the old API after 1 month. 
  • ​Create a new stage and use the new URL in your updated JavaScript application. Delete  the old stage after 1 month.   v
  • ​Create a new API deployment package and use the new URL in your updated JavaScript application. Delete the old deployment package after 1 month. 
  • ​Create a new stage and use the new URL in your updated JavaScript application. Create an API snapshot then delete the stage after 1 month. 

 

See: http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidated-billing.html

 

Question 18:

Your organisation would like to have clear separation of costs between departments. What is the best way to achieve this?

  • ​Tag resources by department
  • ​Tag resources by IAM group
  • ​Tag resources by IAM role
  • ​Create separate AWS accounts for departments and use consolidated billing.  v
  • ​None of the above

 

We recommend that you save access logs in a different bucket so that you can easily manage the logs. If you choose to save access logs in the source bucket, we recommend that you specify a prefix for all log object keys so that the object names begin with a common string and the log objects are easier to identify. When your source bucket and target bucket are the same bucket, additional logs are created for the logs that are written to the bucket. See: https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html#server-access-logging-overview

 

Question 19:

You have implemented server access logging on an S3 bucket. Your source and target buckets are the same. You are finding that your logs are significantly larger than the actual objects being uploaded. What is happening?

  • ​You have enabled S3 replication on the log entries. 
  • ​You did not select compression on the S3 logs. 
  • ​S3 is creating growing logs of logs.    v
  • ​You did not select compression on the S3 lifecycle policy 
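The usual fix is to deliver the access logs to a separate target bucket. A minimal boto3 sketch (both bucket names are hypothetical):

import boto3

s3 = boto3.client("s3")

# Writing logs to a *different* bucket stops the logs-of-logs feedback loop.
# (The target bucket must also grant S3's log delivery permission to write.)
s3.put_bucket_logging(
    Bucket="source-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-log-bucket",
            "TargetPrefix": "access-logs/source-bucket/",
        }
    },
)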

 

You can add an approval action to a stage in an AWS CodePipeline pipeline at the point where you want the pipeline to stop so someone can manually approve or reject the action. See: https://docs.aws.amazon.com/codepipeline/latest/userguide/approvals-action-add.html

 

Question 20:

You would like to implement an approval process before a stage is deployed on AWS CodePipeline. How would you do this?

  • ​Implement CloudTrail monitoring for the PipeLine 
  • ​Implement CloudWatch monitoring for the PipeLine 
  • ​Apply an IAM Role to the PipeLine 
  • ​Add an approval action to the stage    v

 

You can detach an Amazon EBS volume from an instance explicitly or by terminating the instance. However, if the instance is running, you must first unmount the volume from the instance. If an EBS volume is the root device of an instance, you must stop the instance before you can detach the volume. When a volume with an AWS Marketplace product code is detached from an instance, the product code is no longer associated with the instance. See: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-detaching-volume.html

 

Question 21:

You have an EBS volume which is also the root device of a running EC2 instance. What do you need to do to detach it?

  • ​Unmount the volume then detach.
  • ​Stop the instance then detach.   v
  • ​Unmount volume, then stop the instance and then detach
  • ​None of the above

 

You can make API requests directly or by using an integrated AWS service that makes API requests to AWS KMS on your behalf. The limit applies to both kinds of requests. You might store data in Amazon S3 using server-side encryption with AWS KMS (SSE-KMS). Each time you upload or download an S3 object that's encrypted, Amazon S3 makes a GenerateDataKey (for uploads) or Decrypt (for downloads) request to AWS KMS on your behalf. These requests count toward your limit, so AWS KMS throttles the requests. See: https://docs.aws.amazon.com/kms/latest/developerguide/limits.html#requests-per-second

 

Question 22:

You have a JavaScript application that is used to upload objects to Amazon S3 by hundreds of thousands of clients. You are using server-side encryption with the AWS Key Management Service. You are finding that many requests are not working. What is going on?

  • ​You have KMS key rotation implemented 
  • ​You have exceeded the KMS API call limit    v
  • ​The user STS token has expired 
  • ​There is a problem with the bucket permissions 
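For context, every SSE-KMS upload like the sketch below makes a GenerateDataKey call to KMS behind the scenes (and each download a Decrypt call), which is what gets throttled at very high request rates. The bucket name and key alias are hypothetical:

import boto3

s3 = boto3.client("s3")

# Each such PUT triggers one GenerateDataKey request to AWS KMS, so
# hundreds of thousands of clients can exceed the KMS request limit.
s3.put_object(
    Bucket="my-upload-bucket",
    Key="images/photo.jpg",
    Body=b"...image bytes...",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/my-app-key",
)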

 

If the front-end connection uses TCP or SSL, then your back-end connections can use either TCP or SSL. If the front-end connection uses HTTP or HTTPS, then your back-end connections can use either HTTP or HTTPS. See: http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html

 

Question 23:

If the front-end connection of your Classic ELB uses HTTP or HTTPS, then your back-end connections can use ___________.

  • ​TCP or SSL
  • ​TCP, SSL, HTTP or HTTPS
  • ​HTTP or HTTPS       v
  • ​None of the above

 

A bucket owner cannot grant permissions on objects it does not own. For example, a bucket policy granting object permissions applies only to objects owned by the bucket owner. However, the bucket owner, who pays the bills, can write a bucket policy to deny access to any objects in the bucket, regardless of who owns it. The bucket owner can also delete any objects in the bucket. See: http://docs.aws.amazon.com/AmazonS3/latest/dev/access-policy-alternatives-guidelines.html

 

Question 24:

You have given S3 bucket access to another AWS account. You are trying to change an object's permissions but can't. What do you need to do?

  • ​Change the bucket ACL to public
  • ​Change the bucket policy to public
  • ​Ask the object owner to change permissions    v
  • ​None of the above

 

Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support in Amazon S3, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources. See: http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html

 

Question 25:

You have an HTML5 website with a custom domain name on S3. You have a public software library in another S3 bucket, but your browser prevents it from loading. What do you need to do?

  • ​create a public bucket policy
  • ​enable CORS on the website bucket    v
  • ​create a public bucket ACL
  • ​create a public object ACL
  • ​None of the above
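A minimal boto3 sketch of a CORS rule that lets a page from one origin fetch objects from a bucket (the bucket name and origin are hypothetical):

import boto3

s3 = boto3.client("s3")

# The rule is attached to the bucket being requested cross-origin and
# whitelists the website's origin for simple GET requests.
s3.put_bucket_cors(
    Bucket="my-library-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["https://www.example.com"],
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)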

 

Long polling helps reduce your cost of using Amazon SQS by reducing the number of empty responses. You can enable long polling using the AWS Management Console by setting a Receive Message Wait Time to a value greater than 0. See: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-long-polling.html

 

Question 26:

You have an application that is polling an SQS queue continuously and wasting resources when the queue is empty. What can you do to reduce the resource overhead?

  • ​Implement a load balancer
  • ​Implement a load balancer and autoscaling group of EC2 instances
  • ​Implement a load balancer, autoscaling group of EC2 instances linked to a queue length CloudWatch alarm
  • ​Increase ReceiveMessageWaitTimeSeconds     v
  • ​Increase queue visibility Timeout
  • ​None of the above
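Enabling long polling is a single queue-attribute change; a boto3 sketch (the queue URL is hypothetical):

import boto3

sqs = boto3.client("sqs")

# Any ReceiveMessageWaitTimeSeconds greater than 0 (up to 20) enables long
# polling: ReceiveMessage waits for a message instead of returning empty.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
    Attributes={"ReceiveMessageWaitTimeSeconds": "20"},
)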

 

See: https://docs.aws.amazon.com/lambda/latest/dg/lambda-app.html#lambda-app-deploy

 

Question 27:

You have created a Node.js Lambda function that requires access to multiple third-party packages and libraries. The function integrates with other AWS serverless services. You would like to deploy this application and be able to roll back any deployments that are not successful. How should you deploy it?

  • ​Create a zip file containing your code and libraries. Upload the deployment package using the AWS CLI/SDKs CreateFunction. 
  • ​Create a zip file containing your code and libraries. Upload the deployment package using the Lambda console. 
  • ​Create a zip file containing your code and libraries. Upload the deployment package using the Lambda console or AWS CLI/SDKs CreateFunction.     v
  • ​Create a zip file containing your code and libraries. Upload the deployment package using the Serverless application model (SAM) console. 
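A boto3 sketch of uploading a zipped deployment package via the CreateFunction API (the function name, role ARN, and runtime are hypothetical):

import boto3

client = boto3.client("lambda")

# The zip contains index.js plus the node_modules directory installed by npm.
with open("function.zip", "rb") as f:
    zipped_code = f.read()

client.create_function(
    FunctionName="my-function",
    Runtime="nodejs18.x",
    Role="arn:aws:iam::123456789012:role/lambda-exec-role",
    Handler="index.handler",
    Code={"ZipFile": zipped_code},
)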

 

Amazon RDS uses the MariaDB, MySQL, and PostgreSQL (version 9.3.5 and later) DB engines' built-in replication functionality to create a special type of DB instance called a Read Replica from a source DB instance. Updates made to the source DB instance are asynchronously copied to the Read Replica. When a replica is promoted to master, it no longer synchronizes with the source DB, but the other instances still synchronize with the source DB. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.Promote

 

Question 28:

If you have multiple Read Replicas for a master DB Instance and you promote one of them, the remaining Read Replicas will still replicate from the older master DB Instance.

  • ​True    v
  • ​False

 

When you update a stack, you submit changes, such as new input parameter values or an updated template. AWS CloudFormation compares the changes you submit with the current state of your stack and updates only the changed resources. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks.html

 

Question 29:

When you update a stack, you modify the original stack template, then AWS CloudFormation:

  • ​updates only the resources that you modified     v
  • ​updates all the resources defined in the template
  • ​None of the above

 

When you rename a DB instance, the endpoint for the DB instance changes, because the URL includes the name you assigned to the DB instance. You should always redirect traffic from the old URL to the new one. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RenameInstance.html

 

Question 30:

When you rename a DB instance, the endpoint for the DB instance does not change.

  • ​True
  • ​False   v

 

Once you version-enable a bucket, it can never return to an unversioned state. You can, however, suspend versioning on that bucket. https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html

 

Question 31:

Once you version-enable a bucket, it can never return to an unversioned state.

  • ​True    v
  • ​False

 

All read replicas associated with a DB instance remain associated with that instance after it is renamed. For example, suppose you have a DB instance that serves your production database and the instance has several associated read replicas. If you rename the DB instance and then replace it in the production environment with a DB snapshot, the DB instance that you renamed will still have the read replicas associated with it. See: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RenameInstance.html

 

Question 32:

All read replicas associated with a DB instance remain associated with that instance after it is renamed.

  • ​True    v
  • ​False

 

An application on the instance retrieves the security credentials provided by the role from the instance metadata item iam/security-credentials/role-name. The application is granted the permissions for the actions and resources that you've defined for the role through the security credentials associated with the role. See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#instance-metadata-security-credentials

 

Question 33:

How can you check that an IAM role with permissions to a Kinesis stream is associated with an EC2 instance?

  • ​CLI command STSAssumeRole followed by describeStreams 
  • ​Check the EC2 instance metadata at iam/security-credentials/role-name     v
  • ​Check the Kinesis stream logs using the console 
  • ​SDK command STSAssumeRole followed by describeStreams 

 

/tmp (local storage) is guaranteed to be available during the execution of your Lambda function. Lambda will reuse your function when possible, and when it does, the content of /tmp will be preserved along with any processes you had running when you previously exited. However, Lambda doesn't guarantee that a function invocation will be reused, so the contents of /tmp (along with the memory of any running processes) could disappear at any time. You should think of /tmp as a way to cache information that can be regenerated or for operations that require a local filesystem, but not as a permanent storage location.

 

Question 34:

You have a browser application hosted on Amazon S3. It is making requests to an AWS Lambda function. Every time the Lambda function is called, you lose the session data on the Lambda function. What is the best way to store data that is used across multiple Lambda invocations?

  • ​Store in lambda function localstorage 
  • ​Use AWS SQS 
  • ​Use Amazon Dynamodb      v
  • ​Use an Amazon Kinesis data stream 

 

See: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html

 

Question 35:

You have created an e-commerce site using DynamoDB. When creating a primary key on a table which of the following would be the best attribute for the primary key?

  • ​division_id where there are few divisions to many products
  • ​user_id where there are many users to few products      v
  • ​product_id where there are many products to many users
  • ​None of the above

 

Changes the visibility timeout of a specified message in a queue to a new value. The maximum allowed timeout value is 12 hours. See: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ChangeMessageVisibility.html

 

Question 36:

Using ChangeMessageVisibility from the AWS SQS API will do what?

  • ​Changes the visibility timeout of a specified message in a queue to a new value.   v
  • ​Changes the message visibility from true to false.
  • ​Deletes the message after a period of time.
  • ​None of the above
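For reference, a boto3 sketch (the queue URL is hypothetical; the receipt handle comes from a prior ReceiveMessage call):

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

msg = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
receipt_handle = msg["Messages"][0]["ReceiptHandle"]

# Extend this message's visibility timeout to 10 minutes (the cap is 12 hours).
sqs.change_message_visibility(
    QueueUrl=queue_url,
    ReceiptHandle=receipt_handle,
    VisibilityTimeout=600,
)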

 

To host your static website, you configure an Amazon S3 bucket for website hosting and then upload your website content to the bucket. The website is then available at the region-specific website endpoint of the bucket: <bucket-name>.s3-website-<region>.amazonaws.com See: http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html

 

Question 37:

You've enabled website hosting on a bucket named 'backspace.academy' in the us-east-1 (us standard region). Select the URL you'll receive from AWS as the URL for the bucket.

  • ​backspace.academy.s3-website-us-east-1.amazonaws.com     v
  • ​backspace.academy.s3-website.amazonaws.com
  • ​backspace.academy.us-east-1-s3-website.amazonaws.com
  • ​backspace.academy.s3-website-us-east.amazonaws.com

 

Lambda by default can handle up to 1000 concurrent executions. ElastiCache will not speed up writes; it will only speed up read access. Increasing the size of the RDS instance will increase its capacity to handle concurrent connections.

 

Question 38:

You have created a Lambda function that inserts information into an RDS database over 20 times per minute. You are finding that the execution time is excessive. How can you improve the performance?

  • ​increase the compute capacity of the lambda function to enable more concurrent  connections 
  • ​increase the memory of the lambda function to enable more concurrent  connections 
  • ​increase the size of the rds instance      v
  • ​implement elasticache in front of the database. 

 

The application must:
- have the X-Ray daemon running on it, and
- assume a role that has xray:PutTraceSegments and xray:PutTelemetryRecords permissions.

 

Question 39:

You are using AWS X-ray to record trace data for requests to your application running on EC2. Unfortunately the trace data is not appearing in the X-ray console. You are in the Sao Paulo region. What is the most probable cause? 

  • ​You do not have permission for x-ray console access 
  • ​the ec2 instance does not have a role with permissions to send trace segments or telemetry records      v
  • ​AWS X-ray does not support ec2 instances 
  • ​Sao Paulo region does not support AWS X-ray 

 

See: http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadComplete.html

 

Question 40:

Parts of a multipart upload will not be completed until the 'complete' request has been called which puts all the parts of the file together.

  • ​True     v
  • ​False

 

Deployment package size limits cannot be changed. Create multiple Lambda functions and coordinate using AWS Step Functions to reduce the package sizes. See: https://docs.aws.amazon.com/lambda/latest/dg/limits.html

 

Question 41:

You have created a lambda function that is failing when deployed due to the size of the deployment package zip file. What can you do? 

  • ​Request a limit increase from AWS 
  • ​Create multiple Lambda functions and coordinate using AWS Step Functions    v
  • ​Upload as a tar file with higher compression 
  • ​Increase Lambda function memory allocation 

 

See: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LSI.html

 

Question 42:

The hash key of the DynamoDB __________ is the same attribute as the hash key of the table. The range key can be any scalar table attribute.

  • ​Local Secondary Index      v
  • ​Local Primary Index
  • ​Global Secondary Index
  • ​Global Primary Index

 

The DisableApiTermination attribute controls whether the instance can be terminated using the console, CLI, or API. By default, termination protection is disabled for your instance. See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html#Using_ChangingDisableAPITermination

 

Question 43:

The DisableConsoleTermination attribute controls whether the instance can be terminated using the console, CLI, or API. By default, termination protection is disabled.

  • ​True
  • ​False     v

 

There is no such thing as requireMFA. Multi-factor authentication (MFA) increases security for your app by adding another authentication method, and not relying solely on user name and password. You can choose to use SMS text messages, or time-based one-time (TOTP) passwords as second factors in signing in your users. See: https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-mfa.html

 

Question 44:

You have developed a browser JavaScript application that uses the AWS software development kit.  The application accesses sensitive data and you would like to implement Multi Factor authentication. How would you achieve this? 

  • ​Use IAM Multi Factor authentication (MFA) 
  • ​Use Cognito Multi Factor authentication (MFA)       v
  • ​Use requireMFA in the AWS SDK 
  • ​Use IAM.requireMFA in the AWS SDK 

 

Packages the local artifacts (local paths) that your AWS CloudFormation template references. The command uploads local artifacts, such as source code for an AWS Lambda function or a Swagger file for an AWS API Gateway REST API, to an S3 bucket. The command returns a copy of your template, replacing references to local artifacts with the S3 location where the command uploaded the artifacts. After you package your template's artifacts, run the deploy command to deploy the returned template. See: https://docs.aws.amazon.com/cli/latest/reference/cloudformation/package.html

 

Question 45:

You would like to deploy an AWS Lambda function using the AWS CLI. Before deploying, what needs to be done?

  • ​Create a role for the AWS CLI with lambda permissions 
  • ​Package the local artefacts to S3 using cloudformation package CLI command    v
  • ​Package the local artefacts to Lambda using cloudformation package CLI command 
  • ​Package the local artefacts to SAM using sam package CLI command 

 

In API Gateway, an API's method request can take a payload in a different format from the corresponding integration request payload, as required in the backend. Similarly, the backend may return an integration response payload different from the method response payload, as expected by the frontend. API Gateway lets you use mapping templates to map the payload from a method request to the corresponding integration request and from an integration response to the corresponding method response. See: https://docs.aws.amazon.com/apigateway/latest/developerguide/models-mappings.html

 

Question 46:

You would like to use Amazon API gateway to interface with an existing SOAP/XML backend.  API Gateway will receive requests and forward them to the SOAP backend. How can you achieve this? 

  • ​Use API Gateway mapping templates to transform the data for the SOAP backend    v
  • ​Use API Gateway data translation to transform the data for the SOAP backend 
  • ​Use a Lambda function to transform the data for the SOAP backend 
  • ​Use an EC2 instance with a load balancer to transform the data for the SOAP backend. 

 

A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. See: https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html

 

Question 47:

A single DynamoDB BatchGetItem request can retrieve up to 16 MB of data, which can contain as many as 25 items.

  • ​True
  • ​False    v
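A boto3 sketch of BatchGetItem, which can fetch up to 100 items (not 25) or 16 MB per call (the table and key names are hypothetical):

import boto3

dynamodb = boto3.client("dynamodb")

response = dynamodb.batch_get_item(
    RequestItems={
        "Products": {  # hypothetical table with partition key 'product_id'
            "Keys": [
                {"product_id": {"S": "p-001"}},
                {"product_id": {"S": "p-002"}},
            ]
        }
    }
)
print(response["Responses"]["Products"])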

 

Of these, only BucketAlreadyExists and BucketNotEmpty return HTTP status code 409 (AccessDenied is 403 and IncompleteBody is 400). See: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#RESTErrorResponses

 

Question 48:

The following error codes would all have an HTTP status code of 409:

  AccessDenied
  BucketAlreadyExists
  BucketNotEmpty
  IncompleteBody

  • ​True
  • ​False    v

 

A tag is a label that you or AWS assigns to an AWS resource. Each tag consists of a key and a value, and the same key can carry different values on different resources. You can use tags to organize your resources, and cost allocation tags to track your AWS costs on a detailed level. See: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html

 

Question 49:

You can use tags to organize your AWS bill to reflect your own cost structure.

  • ​True      v
  • ​False

 

Scan operations proceed sequentially; however, for faster performance on a large table or secondary index, applications can request a parallel Scan operation by providing the Segment and TotalSegments parameters. A parallel scan with a large number of workers can easily consume all of the provisioned throughput for the table or index being scanned. It is best to avoid such scans if the table or index is also incurring heavy read or write activity from other applications. To control the amount of data returned per request, use the Limit parameter. This can help prevent situations where one worker consumes all of the provisioned throughput at the expense of all other workers.

 

Question 50:

You would like to increase the throughput of a table scan but still leave capacity for the day-to-day workload. How would you do this?

  • ​use a sequential scan with rate-limit parameter. 
  • ​use a parallel scan with rate-limit parameter       v
  • ​use a query scan with rate-limit parameter 
  • ​Increase read capacity on a schedule. 
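A sketch of one worker's share of a parallel scan, with Limit keeping each request small so day-to-day traffic still has capacity (the table name is hypothetical):

import boto3

dynamodb = boto3.client("dynamodb")

# Worker 0 of 4: each worker scans its own disjoint segment of the table.
# Limit caps how many items each request reads, throttling the scan's
# consumption of provisioned throughput.
paginator = dynamodb.get_paginator("scan")
for page in paginator.paginate(
    TableName="Orders",
    Segment=0,
    TotalSegments=4,
    Limit=100,
):
    for item in page["Items"]:
        pass  # process each item here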

 

Each strongly consistent read consumes 1 read capacity unit per 4 KB, rounded up. A 6 KB item therefore needs 2 units per read, and 100 reads x 2 units = 200 read capacity units. http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ProvisionedThroughput.html

 

Question 51:

Your items are 6KB in size and you want to have 100 strongly consistent reads per second. How many DynamoDB read capacity units do you need to provision?

  • ​100
  • ​200   v
  • ​300
  • ​600
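The arithmetic, as a quick sketch:

import math

item_size_kb = 6
reads_per_second = 100

# A strongly consistent read consumes 1 RCU per 4 KB, rounded up.
rcu_per_read = math.ceil(item_size_kb / 4)   # ceil(6/4) = 2
print(reads_per_second * rcu_per_read)       # 100 * 2 = 200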

 

See: http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html

 

Question 52:

If you anticipate that your S3 workload will consistently exceed 100 PUT/LIST/DELETE requests per second or more than 300 GET requests per second, you should avoid sequential key names

  • ​True      v
  • ​False

 

You can use the CreateQueue action to create a delay queue by setting the DelaySeconds attribute to any value between 0 and 900 (15 minutes). You can also change an existing queue into a delay queue using the SetQueueAttributes action to set the queue's DelaySeconds attribute. See: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-delay-queues.html

 

Question 53:

You can use CreateQueue to create an SQS delay queue by setting the DelaySeconds attribute to any value between 0 and 900 (15 minutes).

  • ​True      v
  • ​False

 

IAM users must explicitly be given permissions to administer users or credentials for themselves or for other IAM users. See: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_delegate-permissions.html

 

Question 54:

IAM users do not need to be explicitly given permissions to administer credentials for themselves.

  • ​True
  • ​False   v

 

See: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-visibility-timeout.html

 

Question 55:

Each queue starts with a default setting of 30 seconds for the visibility timeout.

  • ​True     v
  • ​False

 

In addition to simple retries, each AWS SDK implements an exponential backoff algorithm for better flow control. If you're not using an AWS SDK, you should retry original requests that receive server (5xx) or throttling errors. However, client errors (4xx) indicate that you need to revise the request to correct the problem before trying again. If the rate is still being exceeded, then contact AWS to increase the limit. See: https://docs.aws.amazon.com/general/latest/gr/api-retries.html

 

Question 56:

You have developed an application that calls the Amazon CloudWatch API. Every now and again your application receives ThrottlingException HTTP Status Code: 400 errors when making GetMetricData calls. How can you fix this problem? 

  • ​Implement exponential backoff algorithm for retries      v
  • ​Use the GetBatchData API call 
  • ​Request a limit increase from AWS 
  • ​Increase CloudWatch IOPS 
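A minimal sketch of the idea (the AWS SDKs already do this automatically; the helper name here is illustrative):

import time
import boto3
from botocore.exceptions import ClientError

cloudwatch = boto3.client("cloudwatch")

def call_with_backoff(fn, max_attempts=5, **kwargs):
    for attempt in range(max_attempts):
        try:
            return fn(**kwargs)
        except ClientError as err:
            code = err.response["Error"]["Code"]
            if code not in ("Throttling", "ThrottlingException"):
                raise  # other 4xx errors mean the request itself must be fixed
            time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, 8s, ... between retries
    raise RuntimeError("still throttled after retries")

# Usage (parameters elided): call_with_backoff(cloudwatch.get_metric_data, ...)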

 

See: https://docs.aws.amazon.com/codebuild/latest/userguide/getting-started.html#getting-started-build-log

 

Question 57:

Your AWS CodeBuild project keeps failing to compile your code.  How can you identify what is happening? 

  • ​Define a Cloudwatch event in your buildspec.yml file 
  • ​Enable Cloudtrail logging 
  • ​Enable Cloudwatch logs 
  • ​Check the build logs in the CodeBuild console       v

 

You can work with tags using the AWS Management Console, the AWS CLI, and the Amazon EC2 API. See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html

 

Question 58:

You can work with tags using the AWS Management Console, the Amazon EC2 command line interface (CLI), and the Amazon EC2 API.

  • ​True     v
  • ​False

 

The demand is not continuous, so it is best to back off and try again. If the demand were continuous, then you would look at increasing capacity. See: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.MessagesAndCodes See: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff

 

Question 59:

You have a DynamoDB table that keeps reporting many failed requests with a ProvisionedThroughputExceededException in CloudWatch. The requests are not continuous, occurring only a number of times during the day for a few seconds. What is the best solution for reducing the errors?

  • ​create a cloudwatch alarm to retry the failed request 
  • ​Implement exponential backoff and retry         v
  • ​Increase the provision capacity of the dynamodb table 
  • ​implement a secondary index 

 

See: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_GetQueueAttributes.html

 

Question 60:

__________________ returns the approximate number of SQS messages that are not timed-out and not deleted. 

  • ​NumberOfMessagesNotVisible
  • ​ApproximateNumberOfMessagesNotVisible        v
  • ​ApproximateNumberOfMessages
  • ​ApproximateNumberOfMessagesVisible
  • ​None of the above
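A boto3 sketch of reading that attribute (the queue URL is hypothetical):

import boto3

sqs = boto3.client("sqs")

attrs = sqs.get_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
    AttributeNames=["ApproximateNumberOfMessagesNotVisible"],
)
# In-flight messages: received by a consumer, not yet deleted, and not yet
# past their visibility timeout.
print(attrs["Attributes"]["ApproximateNumberOfMessagesNotVisible"])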


https://free-braindumps.com/amazon/free-aws-certified-cloud-practitioner-braindumps.html?p=1

 


QUESTION: 1
What is the term used to describe giving an AWS user only access to the exact services he/she
needs to do the required job and nothing more?

A. The Least Privilege User Principal
B. The Principal of Least Privilege
C. The Only Access Principal.
D. None of the above

Answer(s): B

QUESTION: 2
When you create an S3 bucket, what rules must be followed regarding the bucket name?
(Choose two)

A. Bucket names must be unique across all of AWS.
B. Bucket names must be between 3-63 characters in length.
C. Bucket names must contain at least one uppercase letter
D. Bucket names can be formatted as IP addresses

Answer(s): A, B
Explanation:
Although certain regions do allow for uppercase letters in the bucket name, uppercase letters
are NOT required. Also, a bucket name cannot be formatted as an IP address.

QUESTION: 3
What are the main benefits of On-Demand EC2 instances? (Choose two)

A. They are the cheapest buying option.
B. They are the most flexible buying option.
C. They require 1-2 days for setup and configuration.
D. Create, start, stop, and terminate at any time.

Answer(s): B, D
Explanation:
On-demand EC2 instances are widely used due to their flexibility. You can create, start, stop, and terminate them at any time (with no startup or termination fees). However, due to this flexibility, they are also the most expensive buying option.

QUESTION: 4
What AWS service must you use if you want to configure an AWS billing alarm?

A. CloudWatch
B. CloudMonitor
C. Consolidated billing
D. CloudTrail

Answer(s): A

Explanation:
CloudWatch is the AWS service that allows you to collect metrics and create alarms based on those metrics. Billing metrics can be tracked in CloudWatch, therefore billing alarms can be created.

QUESTION: 5
What are some common uses of AWS? (Choose four)

A. Networking
B. Analytics
C. Storage
D. Virtualization

Answer(s): A, B, C, D
Explanation:
All of the answers are common uses of AWS. AWS has thousands of different uses. In this course we discussed some of the major categories, including: Storage, Compute Power, Databases, Networking, Analytics, Developer Tools, Virtualization, and Security.

QUESTION: 6
How much data can you store in S3?

A. Storage capacity is virtually unlimited.
B. You can store up to 1 petabyte of data.
C. Each account is given 50 gigabytes of storage capacity and no more can be used.
D. You can store up to 1 petabyte of data, then you are required to pay an additional fee.

Answer(s): A
Explanation:
Although there is theoretically a capacity limit, as an S3 user, there is no limit on the amount of data you can store in S3.

QUESTION: 7
You have just set up a brand new AWS account. You want to keep monthly billing under $100, but you are worried about going over that limit. What can you set up in order to be notified when the monthly bill approaches $100?

A. A CloudTrail billing alarm that triggers an SNS notification to your email address.
B. An SNS billing alarm that triggers a CloudWatch notification to your email address.
C. A CloudWatch billing alarm that triggers an SNS notification to your email address.
D. A CloudWatch billing alarm that triggers a CloudTrail notification to your email address.

Answer(s): C
Explanation:
In CloudWatch, you can set up a billing alarm that will trigger when your monthly bill hits the set threshold. That alarm can then be set up to trigger an SNS topic that will send you a notification that the alarm threshold has been met.

 

QUESTION: 8
What best describes the purpose of having many Availability Zones in each AWS region?

A. Multiple Availability Zones allow for fault tolerance but not high availability.
B. Multiple Availability Zones allow for cheaper prices due to competition between them.
C. Multiple Availability Zones allow for duplicate and redundant compute, and data backups.
D. None of the above.

Answer(s): C
Explanation:
Availability Zones work together within a region to provide users with the ability to easily set up and configure redundant architecture and backup solutions.

QUESTION: 9
What TWO services/features are required to have highly available and fault tolerant architecture
in AWS? (Choose two)

A. Elastic Load Balancer
B. CloudFront
C. ElastiCache
D. Auto Scaling

Answer(s): A, D

QUESTION: 10
Which S3 storage class has lowest object availability rating?

A. Standard
B. Reduced Redundancy
C. Infrequent Access
D. All of them have the same availability rating

Answer(s): C
Explanation:
Infrequent Access has the lowest availability rating (99.90%). Standard and Reduced Redundancy have an availability rating of 99.99%.

 

 

 



QUESTION: 11
Your company's upper management is getting very nervous about managing governance,
compliance, and risk auditing in AWS. What service should you enable and inform upper
management about?

A. CloudAudit
B. CloudTrail
C. CloudCompliance
D. CloudWatch

 

Answer(s): B
Explanation:
AWS CloudTrail is designed to log all actions taken in your AWS account. This provides a great
resource for governance, compliance, and risk auditing.

QUESTION: 12
The concept of elasticity is most closely associated with which of the following?

A. Auto Scaling
B. Network Security
C. Serverless Computing
D. Elastic Load Balancing

Answer(s): A
Explanation:
Elasticity is the concept that a system can easily (and cost-effectively) both increase and shrink in capacity based on demand. Auto Scaling on AWS is specifically designed to automatically increase and decrease server capacity based on demand.

QUESTION: 13
Which of the following will affect how much you are charged for storing objects in S3? (Choose two)

A. The storage class used for the objects stored.
B. Encrypting data (objects) stored in S3.
C. Creating and deleting S3 buckets
D. The total size in gigabytes of all objects stored.

Answer(s): A, D

QUESTION: 14
What endpoints are possible to send messages to with Simple Notification Service? (Choose
three)

A. SMS
B. FTP
C. SQS
D. Lambda

Answer(s): A, C, D

QUESTION: 15
What does S3 stand for?

A. Simple Storage Service
B. Simplified Storage Service
C. Simple Store Service
D. Service for Simple Storage

Answer(s): A

QUESTION: 16
Big Cloud Jumbo Corp is beginning to explore migrating their entire on-premises data center to
AWS. They are very concerned about how much it will cost once their entire I.T. infrastructure is
running on AWS. What tool can you recommend so that they can estimate what the cost of
using AWS may be?

A. AWS Estimate Calculator
B. AWS TCO Calculator
C. AWS Cost Explorer
D. AWS Migration Cost Calculator

Answer(s): B
Explanation:
The AWS TCO (Total Cost of Ownership) Calculator is a free tool provided by AWS. It allows
you to compare your current on-premises cost vs. estimated AWS cost.

QUESTION: 17
Kunal is managing an application running on an on-premises data center. What best describes
the challenges he faces that someone using the AWS cloud does not?

A. Kunal must research what size (compute capacity) servers he needs to run his application.
B. Kunal must know how to properly configure network level security.
C. Kunal must predict future growth, and scaling can be costly and time consuming.
D. None of the above.

Answer(s): C
Explanation:
Scaling is much faster and more cost-effective on the AWS cloud. With on-demand instances and autoscaling, future growth does not have to be predicted. More compute capacity can be added gradually as demand increases.

QUESTION: 18
What AWS storage class should be used for long-term, archival storage?

A. Glacier
B. Long-Term
C. Standard
D. Infrequent Access

Answer(s): A
Explanation:
Glacier should be used for (and is specifically designed for) long-term, archival storage.

QUESTION: 19
Kim is managing a web application running on the AWS cloud. The application is currently
utilizing eight EC2 servers for its compute platform. Earlier today, two of those web servers
crashed; however, none of her customers were affected. What has Kim done correctly in this
scenario?

A. Properly built an elastic system.
B. Properly built a scalable system
C. Properly build a fault tolerant system.
D. None of the above.

Answer(s): C
Explanation:
A fault tolerant system is one that can sustain a certain amount of failure while still remaining operational.

QUESTION: 20
What are the benefits of DynamoDB? (Choose three)

A. Supports multiple known NoSQL database engines like MariaDB and Oracle NoSQL.
B. Automatic scaling of throughput capacity.
C. Single-digit millisecond latency.
D. Supports both document and key-value store data models.

Answer(s): B, C, D
Explanation:
DynamoDB does not use/support other NoSQL database engines. You only have access to use
DynamoDB's built-in engine.

QUESTION: 21
What best describes penetration testing?

A. Testing your applications ability to penetrate other applications.
B. Testing your IAM users access to AWS services.
C. Testing your own network/application for vulnerabilities.
D. None of the above.

Answer(s): C

QUESTION: 22
Why would a company decide to use AWS over an on-premises data center? (Choose four)

A. Highly available infrastructure
B. Elastic resources based on demand
C. No upfront cost
D. Cost-effective

Answer(s): A, B, C, D
Explanation:
All four answers listed are reasons why a company may decide to use AWS over an on-premises data center.

QUESTION: 23
You are trying to organize and import (to AWS) gigabytes of data that are currently structured in
JSON-like, name-value documents. What AWS service would best fit your needs?

A. Lambda
B. Aurora
C. RDS
D. DynamoDB

Answer(s): D
Explanation:
DynamoDB is AWS's NoSQL database offering. NoSQL databases are for non-structured data
that are typically stored in JSON-like, name-value documents.

QUESTION: 24
What best describes what AWS is?

A. AWS is an online retailer
B. AWS is the cloud.
C. AWS is a cloud services provider.
D. None of the above.

Answer(s): C

QUESTION: 25
What is one benefit AND one drawback of buying a reserved EC2 instance? (Select two)

A. You can terminate the instance at any time without any further pricing commitment.
B. Reserved instances can be purchased as a significant discount over on-demand instances.
C. You can potentially save a lot of money by placing a lower "bid" price.
D. You are locked in to either a one- or three-year pricing commitment.

Answer(s): B, D
Explanation:
Reserved instances require a one- or three-year purchase term, so you are committing to
paying for that much compute capacity for that full time period. However, in exchange for the
long-term commitment, you will receive a discount (of up to 75%) over using an on-demand
instance (for that same time period).

QUESTION: 26
Before moving and/or storing objects in AWS Glacier, what considerations should you make regarding the data you want to store?


A. Make sure the data is properly formatted for storage Glacier.
B. Make sure the total amount of data you want to store in under 1 terabyte in size.
C. Make sure you are ok with it taking at minimum a few minutes to retrieve the data once
stored in Glacier.
D. None of the above.

Answer(s): C
Explanation:
Objects stored in Glacier take time to retrieve. You can pay for expedited retrieval, which will
take several minutes - OR wait several hours (for normal retrieval).

QUESTION: 27
John is working with a large data set, and he needs to import it into a relational database
service. What AWS service will meet his needs?

A. RDS
B. Redshift
C. NoSQL
D. DynamoDB

Answer(s): A
Explanation:
RDS is AWS's relational database service.

QUESTION: 28
Jeff is building a web application on AWS. He wants to make sure his application is highly
available to his customers. What infrastructure components of the AWS cloud allow Jeff to
accomplish this goal? (Choose two)

A. Availability Zones
B. Regional Zones
C. Regions
D. Data Locations

Answer(s): A, C
Explanation:
As part of AWS' global infrastructure, Regions and Availability Zones allow for backups and
duplicate components to be placed in separate (isolated) areas of the globe. If one
region/Availability Zone were to fail, duplicates in other regions/Availability Zones can be used.

QUESTION: 29
What is AWS's serverless compute service?

A. S3
B. Lambda
C. EC2
D. None of the above

Answer(s): B
Explanation:
AWS has two main compute services, EC2 (server-based) and Lambda (serverless).

QUESTION: 30
Stephen is having issues tracking how much compute capacity his application is using. Ideally,
he wants to track and have alarms for when CPU utilization goes over 70%. What should
Stephen do to accomplish this?

A. Configure an SNS topic with an alarm threshold set to trigger when CPU utilization is greater
than 70%.
B. Configure a CloudWatch alarm with an alarm threshold set to trigger when CPU utilization is
greater than 70%.
C. Configure a CloudWatch alarm with an alarm threshold set to trigger when CPU utilization is
greater than or equal to 70%.
D. None of the above.

Answer(s): B
Explanation:
The answer is to configure a CloudWatch alarm with an alarm threshold set to trigger when CPU utilization is greater than 70%. This will display the alarm in "alarm" state when CPU utilization is greater than 70%. This question has been worded very specifically with the words "goes above 70%". This disqualifies the answer that stated "greater than or equal to 70%". The AWS exam will have very tricky questions like this.
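A boto3 sketch of such an alarm (the instance ID and SNS topic ARN are hypothetical):

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="cpu-above-70",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",  # strictly "goes above 70%"
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:alerts"],
)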

 

 



QUESTION: 31
What is the availability and durability rating of S3 Standard Storage Class?

A. 99.999999999% Durability and 99.99% Availability
B. 99.999999999% Availability and 99.90% Durability
C. 99.999999999% Availability and 99.99% Durability
D. 99.999999999% Durability and 99.00% Availability

Answer(s): A
Explanation:
S3 Standard Storage class has a rating of 99.999999999% durability (referred to as 11 nines)
and 99.99% availability.

QUESTION: 32
If you want to easily share a file with a friend, family or coworker, what AWS solution should you
use?

A. Mail them a flash drive with the file on it.
B. Create an EC2 instance and provide login credentials so others can access the file.
C. Upload the object to S3 and share it via the object's public S3 URL.
D. None of the above.

Answer(s): C
Explanation:
You can easily share objects uploaded into S3 by provided others with the object's URL.

QUESTION: 33
S3 storage classes are rated by what two metric categories? (Select two)

A. Objectivity
B. Durability
C. Availability
D. Fault tolerance

Answer(s): B, C
Explanation:
Each S3 storage class is rated on its availability and durability.

QUESTION: 34
If an object is stored in the Standard S3 storage class and you want to move it to Glacier, what
must you do in order to properly migrate it?

A. Delete the object and reupload it, selecting Glacier as the storage class.
B. Create a lifecycle policy that will migrate it after a minimum of 30 days.
C. Change the storage class directly on the object.
D. None of the above.

Answer(s): B
Explanation:
Any object uploaded to S3 must first be placed into either the Standard, Reduced Redundancy, or Infrequent Access storage class. Once in S3, the only way to move the object to Glacier is through a lifecycle policy.

QUESTION: 35
What is the most common type of storage used for EC2 instances?

A. Elastic File System (EFS)
B. EC2 Hard Drives
C. Elastic Block Store (EBS)
D. Magnetic Drive (MD)

Answer(s): C
Explanation:
EC2 instances have several different hard drive options. However, Elastic Block Store (EBS), which is a type of Network Attached Storage, is the most popular and widely used.

QUESTION: 36
What AWS service has built-in DDoS mitigation?

A. CloudFront
B. CloudTrail
C. CloudWatch
D. EC2

Answer(s): A
Explanation:
With CloudFront, you cache content at Edge Locations, which shield your underlying application infrastructure from DDoS attacks.

QUESTION: 37
You have been tasked by your department head to upload a batch of files to an S3 bucket;
however, when you select S3 on the AWS console, you see a notification stating that you do not
have permission to access S3. What is the most probable cause of this error?

A. It takes 24 hours to get access to S3.
B. The S3 service is currently down for maintenance.
C. You do not have an S3 access policy attached to your IAM user.
D. Your boss has not enabled proper bucket permissions.

Answer(s): C
Explanation:
If you get an error stating that you do not have proper permissions to access/use and AWS
service, then most likely your IAM user does not have the proper permission policy attached.

QUESTION: 38
What are the benefits of AWS's Relational Database Service (RDS)? (Choose three)

A. Resizable capacity
B. Automated patches and backups
C. Cost-efficient
D. None of the above

Answer(s): A, B, C

QUESTION: 39
Thomas is managing the access rights and credentials for all the employees that have access to
his company's AWS account. This morning, he was notified that some of these accounts may
have been compromised, and he now needs to change the password policy and re-generate a
new password for all users. What AWS service does Thomas need to use in order to
accomplish this?

A. Policy and Access Management
B. Elastic Cloud Compute
C. Access Management
D. None of the above.

Answer(s): D

 

Explanation:
Identity and Access Management (IAM) is the AWS service where password policies and user
credentials are managed. (Policy and Access Management as a service does not exist).

QUESTION: 40
What are the primary benefits of using Lambda? (Choose two)

A. Pay for only the compute time you consume.
B. Wide variety of operating systems to select from.
C. Actively select and manage instance type and capacity.
D. Run code without provisioning servers.

Answer(s): A, D
Explanation:
Lambda, being AWS's serverless compute platform, means there are no servers, instance types, or capacity to select. That is all managed for you. With Lambda, you only pay for when your code is actually being executed.

QUESTION: 41
If you have a set of frequently accessed files that are used on a daily basis, what S3 storage
class should you store them in?

A. Infrequent Access
B. Reduced Redundancy
C. Standard
D. Fast Access

Answer(s): C
Explanation:
The Standard storage class should be used for files that you access on a daily or very frequent
basis.

QUESTION: 42
Which of the following will affect the price you pay for an EC2 instance? (Choose three)

A. Instance Type.
B. Selected Storage Class
C. How long you use the instance for.
D. Amazon Machine Image (AMI).

Answer(s): A, C, D
Explanation:
EC2 instance pricing varies depending on many variables: 1) The type of buying option 2) Selected AMI 3) Selected instance type 4) Region 5) Data in/out 6) Storage capacity

QUESTION: 43

If you want in-depth details on how to create, manage, and attach IAM access policies to IAM
users, in what AWS resource should you look?

A. AWS How-To-Help Section
B. AWS Service Documentation
C. AWS Whitepapers
D. None of the above

Answer(s): B
Explanation:
AWS Service documentation is a collection of documents specific to each AWS service. They
contain detailed how-to's, as well as technical walkthroughs and specifications.

QUESTION: 44
You notice that five of your 10 S3 buckets are no longer available in your account, and you
assume that they have been deleted. You are unsure who may have deleted them, and no one
is taking responsibility. What should you do to investigate and find out who deleted the S3
buckets?

A. Look at the S3 logs.
B. Look at the CloudTrail logs.
C. Look at the CloudWatch Logs.
D. Look at the SNS logs.

Answer(s): B
Explanation:
CloudTrail is a logging service that logs actions taken by AWS users in your AWS account, such as creating/deleting S3 buckets, starting/stopping EC2 instances, etc.

QUESTION: 45
What acts as an address (like a mailing address) for a web server located on a network?

A. DNS Server
B. IP Address
C. Common language domain name
D. None of the above

Answer(s): B
Explanation:
An IP address is a server's address on a network. It is how traffic/requests get routed to it (much like a piece of mail gets routed to your home).

 

 

 

 



QUESTION: 46
Which services have built-in DDoS mitigation and/or protection?

A. EC2
B. RDS
C. SNS
D. None of the above

Answer(s): D
Explanation:
AWS services with built-in DDoS mitigation/protection include: 1) Route 53 2) CloudFront 3) WAF (web application firewall) 4) Elastic Load Balancing 5) VPCs and Security Groups

QUESTION: 47
What should you do if you believe your AWS account has been compromised? (Choose four)

A. Delete any resources in your account that you did not create.
B. Respond to any notifications you received from AWS through the AWS Support Center.
C. Change all IAM user's passwords.
D. Delete or rotate all programmatic (API) access keys.

Answer(s): A, B, C, D
Explanation:
All of these answers are actions you should take if you believe your account has been compromised.

QUESTION: 48
Under what circumstances would someone want to use ElastiCache? (Choose two)

A. They need a NoSQL database option
B. They need to use Edge Locations to cache content
C. They need to improve the performance of their web application.
D. They need in-memory data store service.

Answer(s): C, D
Explanation:
ElastiCache is used as an in-memory data store or cache in the cloud. Benefits include improved performance for web applications (that rely on information stored in a database). Edge Locations are used for caching content with the CloudFront service, so that is not an answer here.

QUESTION: 49
Derek is running a web application and is noticing that he is paying for way more server
capacity than is required. What AWS feature should Derek set up and configure to ensure that
his application is automatically adding/removing server capacity to keep in line with the required
demand?

A. Auto Scaling
B. Elastic Server Scaling
C. Elastic Load Balancing
D. Auto Sizing

Answer(s): A
Explanation:

Auto Scaling is the feature that automates the process of adding/removing server capacity from a system (based on usage demand). Auto Scaling creates a very cost-effective system by never having too much or too little server capacity.

QUESTION: 50
What AWS service uses Edge Locations for content caching?

A. ElastiCache
B. Route 53
C. CloudFront
D. CloudCache

Answer(s): C
Explanation:
CloudFront is a content caching service provided by AWS that utilizes "Edge Locations," which
are AWS data centers located all around the world.

QUESTION: 51
What is the purpose of AWS's Route 53 service? (Choose two)

A. Content Caching
B. Database Management
C. Domain Registration
D. Domain Name System (DNS) service

Answer(s): C, D
Explanation:
Route 53 is AWS's domain and DNS management service. You can use it to register new
domain names, as well as manage DNS record sets.

QUESTION: 52
What are the benefits of AWS Organizations?
(Choose two)

A. Analyze cost across multiple AWS accounts.
B. Automate AWS account creation and management.
C. Centrally manage access policies across multiple AWS accounts.
D. None of the above.

Answer(s): B, C
Explanation:
AWS Organizations has four main benefits: 1) Centrally manage access policies across multiple AWS accounts. 2) Automate AWS account creation and management. 3) Control access to AWS services. 4) Enable consolidated billing across multiple AWS accounts. Analyzing cost is done through the Cost Explorer (or TCO calculator), which is not part of AWS Organizations.

QUESTION: 53

What AWS service allows you to have your own private network in the AWS cloud?

A. Virtual Private Network (VPN)
B. Virtual Private Cloud (VPC)
C. Virtual Cloud Network (VCN)
D. None of the above.

Answer(s): B
Explanation:
A Virtual Private Cloud (VPC) is a private sub-section of AWS that is your own private network.
You control what resources you place inside the VPC and the security features around it.

QUESTION: 54
If you are using an on-demand EC2 instance, how are you being charged for it?

A. You are charged per second, based on an hourly rate, and there are no termination fees.
B. You are charged by the hour and must pay a partial upfront fee.
C. You must commit to a one or three year term and pay upfront.
D. You are charged per second, based on an hourly rate, and there is a termination fee.

Answer(s): A
Explanation:
On-demand EC2 instances are exactly that, on-demand. There are no upfront or termination
fees, and you are charged for each second of usage (based on an hourly rate).

QUESTION: 55
Matt is working on a project that involves converting images from .png to .jpg format.
Thousands of images have to be converted; however, time is not really an issue and continual
processing is not required. What type of EC2 buying option would be most cost-effective for
Matt to use?

A. Spot
B. On-demand
C. Reserved
D. None of the above

Answer(s): A
Explanation:
Spot instances offer the cheapest option of all EC2's buying options. However, spot instances should only be used when there can be interruptions in the processing jobs being conducted. This is due to the fluctuation in spot pricing. If the spot price goes above your bid price, then you will lose access to the spot instance (thus causing a stoppage in processing).

 


QUESTION: 56
David is managing a web application running on dozens of EC2 servers. He is worried that if
something goes wrong with one of the servers, he will not know about it in a timely manner.
What solution could you offer to help him keep updated on the status of his servers?

 

A. Configure each EC2 instance with a custom script to email David when any issues occur.
B. Configure RDS notifications based on CloudWatch EC2 metric alarms.
C. Enable CloudTrail to log and report any issues that occur with the EC2 instances.
D. Configure SNS notifications based on CloudWatch EC2 metric alarms.

Answer(s): D
Explanation:
CloudWatch is used to track metrics on all EC2 instances. Metric alarms can be configured to
trigger SNS messages if something goes wrong.

QUESTION: 57
What AWS database is primarily used to analyze data using standard SQL formatting with
compatibility for your existing business intelligence tools?

A. ElastiCache
B. DynamoDB
C. Redshift
D. RDS

Answer(s): C
Explanation:
Redshift is a fully managed database offering used for data warehousing and
analytics, including compatibility with existing business intelligence tools.

QUESTION: 58
Tracy has created a web application, placing its underlying infrastructure in the N. Virginia (US-
East-1) region. After several months, Tracy notices that much of the traffic coming to her
website is coming from Japan. What can Tracy do to (best) help reduce latency for her users in
Japan?

A. Copy the current VPC located in US-East-1 and ask AWS to move it to a region closest
to Japan.
B. Create and manage a complete duplicate copy of the web application and its infrastructure
in a region closest to Japan.
C. Create a CDN using CloudFront, making sure the proper content is cached at Edge
Locations closest to Japan.
D. Create a CDN using CloudCache, making sure the proper content is cached at Edge
Locations closest to Japan.

Answer(s): C
Explanation:
CloudFront is AWS's content delivery network (CDN) service. You can use it to cache web
content at edge locations that are closest to your customers. This will decrease latency for the
customer and improve overall performance.

QUESTION: 59
What AWS service helps you estimate the cost of using AWS vs. an on-premises data center?
A. Cost Explorer
B. Consolidated Billing
C. TCO Calculator
D. None of the above

Answer(s): C
Explanation:
The TCO (total cost of ownership) calculator helps you estimate the cost of using AWS vs. an
on-premises data center.

QUESTION: 60
What AWS feature acts as a traffic distribution regulator, making sure each EC2 instance in a
system gets the same amount of traffic?

A. Availability Zone
B. ELB
C. NACL
D. Auto Scaling

Answer(s): B
Explanation:
An Elastic Load Balancer is responsible for evenly distributing incoming web traffic between all
the EC2 instances associated with it. This helps prevent one server from becoming overloaded
with traffic while another server remains underutilized.

QUESTION: 61
What best describes the concept of fault tolerance?

A. The ability for a system to withstand a certain amount of failure and still remain functional.
B. The ability for a system to grow and shrink based on demand.
C. The ability for a system to grow in size, capacity, and/or scope.
D. The ability for a system to be accessible when you attempt to access it.

Answer(s): A
Explanation:
Fault tolerance describes the ability of a system (in our case, a web application) to experience
failure in some of its components and still remain accessible (highly available). Fault-tolerant
web applications will have at least two web servers (in case one fails).

QUESTION: 62
What best describes Amazon Web Services (AWS)?

A. AWS only provides compute and storage services.
B. AWS is the cloud.
C. AWS is a cloud services provider.
D. None of the above.

Answer(s): C

Explanation:
AWS is defined as a cloud services provider. They provide hundreds of services, which include
compute and storage (but are not limited to them).

QUESTION: 63
What are the four primary benefits of using the cloud/AWS?

A. Elasticity, scalability, easy access, limited storage.
B. Fault tolerance, scalability, elasticity, and high availability.
C. Unlimited storage, limited compute capacity, fault tolerance, and high availability.
D. Fault tolerance, scalability, sometimes available, unlimited storage

Answer(s): B
Explanation:
Fault tolerance, scalability, elasticity, and high availability are the four primary benefits of
AWS/the cloud.

QUESTION: 64
What best describes an AWS region?

A. A specific location where an AWS data center is located.
B. An isolated collection of AWS Availability Zones, of which there are many placed all around
the world.
C. The physical networking connections between Availability Zones.
D. A collection of DNS servers.

Answer(s): B
Explanation:
An AWS region is an isolated geographical area that is composed of three or more AWS
Availability Zones.

QUESTION: 65
What best describes a simplified definition of the "cloud"?

A. All the computers in your local home network.
B. A computer located somewhere else that you are utilizing in some capacity.
C. An on-premises data center that your company owns.
D. Your internet service provider

Answer(s): B
Explanation:
The simplest definition of the cloud is a computer located somewhere else that you are
utilizing in some capacity. AWS is a cloud services provider, as they provide access to computers
they own (located at AWS data centers) that you use for various purposes.

QUESTION: 66
What is the purpose of a DNS server?

A. To serve web application content.
B. To convert common language domain names to IP addresses.
C. To convert IP addresses to common language domain names.
D. To act as an internet search engine.

Answer(s): B
Explanation:
Domain name system servers act as a "third party" that provides the service of converting
common language domain names to IP addresses (which are required for a web browser to
properly make a request for web content).

QUESTION: 67
What best describes the concept of high availability?

A. The ability for a system to grow and shrink based on demand.
B. The ability for a system to withstand a certain amount of failure and still remain functional.
C. The ability for a system to grow in size, capacity, and/or scope.
D. The ability for a system to be accessible when you attempt to access it.

Answer(s): D
Explanation:
High availability refers to the concept that something will be accessible when you try to access
it. An object or web application is "highly available" when it is accessible a vast majority of the
time.

QUESTION: 68
What best describes the concept of scalability?

A. The ability for a system to withstand a certain amount of failure and still remain functional.
B. The ability for a system to grow in size, capacity, and/or scope.
C. The ability for a system to grow and shrink based on demand.
D. The ability for a system to be accessible when you attempt to access it.

Answer(s): B
Explanation:
Scalability refers to the concept of a system being able to easily (and cost-effectively) scale UP.
For web applications, this means the ability to easily add server capacity when demand
requires.

QUESTION: 69
What best describes the concept of elasticity?

A. The ability for a system to grow in size, capacity, and/or scope.
B. The ability for a system to withstand a certain amount of failure and still remain functional.
C. The ability for a system to grow and shrink based on demand.
D. The ability for a system to be accessible when you attempt to access it.

Answer(s): C
Explanation:
Elasticity refers to the concept of a system being able to grow and shrink based on demand.

 


Great Udemy course for AWS Certifications

 

Amazon Web Services (AWS) Certified - 4 Certifications!

 

 

 

Quiz 1: Practice Exam - AWS Certified Cloud Practitioner

 

See: awstcocalculator.com

 

RDS can only scale manually.

 

FIFO queues are designed to ensure that the order in which messages are sent and received is strictly preserved and that each message is processed exactly once. See https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html
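
For illustration, a minimal boto3 sketch of a FIFO queue (queue name and message group are made up):

import boto3

sqs = boto3.client('sqs')

# FIFO queue names must end in ".fifo".
queue_url = sqs.create_queue(
    QueueName='orders.fifo',
    Attributes={
        'FifoQueue': 'true',
        'ContentBasedDeduplication': 'true',  # dedupe by message body hash
    },
)['QueueUrl']

# Messages in the same group are delivered strictly in order.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='order-1 created',
    MessageGroupId='customer-42',
)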

 


https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html

 


A stack is a collection of AWS resources that you can manage as a single unit. In other words, you can create, update, or delete a collection of resources by creating, updating, or deleting stacks. All the resources in a stack are defined by the stack's AWS CloudFormation template.
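
A minimal boto3 sketch of creating such a stack (the stack name and the one-bucket template are illustrative):

import json
import boto3

cfn = boto3.client('cloudformation')

# A stack is created from a template; deleting the stack later deletes
# every resource the template defined, as a single unit.
template = {
    'AWSTemplateFormatVersion': '2010-09-09',
    'Resources': {
        'MyBucket': {'Type': 'AWS::S3::Bucket'},
    },
}

cfn.create_stack(StackName='demo-stack', TemplateBody=json.dumps(template))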

 

The root account should not be used for everyday tasks, and it should have MFA enabled.

 

Amazon Simple Queue Service (Amazon SQS) offers a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. It moves data between distributed application components and helps you decouple these components.

 

SNS can send push notifications. See: https://docs.aws.amazon.com/sns/latest/dg/SNSMobilePush.html

 


Rule of thumb: Be a pessimist when designing architectures in the cloud; assume things will fail. In other words, always design, implement and deploy for automated recovery from failure. See: https://media.amazonwebservices.com/AWS_Cloud_Best_Practices.pdf


Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. See: https://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html

 


See: awstcocalculator.com


 

See: https://calculator.s3.amazonaws.com/index.html

 


See: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-explorer-what-is.html

 


See: https://aws.amazon.com/ec2/spot/

 


See: https://aws.amazon.com/premiumsupport/trustedadvisor/

 


 

 

 

 

See: https://aws.amazon.com/inspector/

 


 

If you want the URL for your sign-in page to contain your company name (or other friendly identifier) instead of your AWS account ID, you can create an alias for your AWS account ID. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/console_account-alias.html
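
For example, a one-call boto3 sketch (the alias is a placeholder):

import boto3

iam = boto3.client('iam')

# Placeholder alias: the sign-in URL becomes
# https://examplecorp.signin.aws.amazon.com/console
iam.create_account_alias(AccountAlias='examplecorp')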

 


When you create IAM policies, follow the standard security advice of granting least privilege—that is, granting only the permissions required to perform a task. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege
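
A hedged boto3 sketch of a least-privilege policy (bucket and policy names are placeholders): read-only access to a single bucket instead of s3:* on everything.

import json
import boto3

iam = boto3.client('iam')

# Grant only the permissions needed for the task: read one bucket.
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': ['s3:GetObject', 's3:ListBucket'],
        'Resource': [
            'arn:aws:s3:::example-reports',
            'arn:aws:s3:::example-reports/*',
        ],
    }],
}

iam.create_policy(PolicyName='reports-read-only',
                  PolicyDocument=json.dumps(policy))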

 


A scaling plan tells Auto Scaling when and how to scale. For example, you can base a scaling plan on the occurrence of specified conditions (dynamic scaling) or on a schedule. A scaling policy defines how to scale and by how much; a scaling plan references a scaling policy.
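
A minimal boto3 sketch of both kinds of policy (group and policy names are placeholders):

import boto3

autoscaling = boto3.client('autoscaling')

# Dynamic scaling: keep average CPU of the group near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName='web-asg',
    PolicyName='target-50-cpu',
    PolicyType='TargetTrackingScaling',
    TargetTrackingConfiguration={
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'ASGAverageCPUUtilization',
        },
        'TargetValue': 50.0,
    },
)

# Scheduled scaling: grow to 4 instances every weekday morning.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName='web-asg',
    ScheduledActionName='weekday-morning',
    Recurrence='0 9 * * MON-FRI',  # cron syntax, UTC
    DesiredCapacity=4,
)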

 

By default, the “automatic rollback on error” feature is enabled. This will cause all AWS resources that AWS CloudFormation created successfully for a stack up to the point where an error occurred to be deleted. This is useful when, for example, you accidentally exceed your default limit of Elastic IP addresses, or you don’t have access to an EC2 AMI you’re trying to run. This feature enables you to rely on the fact that stacks are either fully created, or not at all, which simplifies system administration and layered solutions built on top of AWS CloudFormation.

 

See: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/HowCloudFrontWorks.html

 


AWS CloudTrail is an AWS service that helps you enable governance, compliance, and operational and risk auditing of your AWS account. Actions taken by a user, role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the AWS Management Console, AWS Command Line Interface, and AWS SDKs and APIs. See: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html

 


By default, Amazon EC2 sends metric data to CloudWatch in 5-minute periods. See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch.html
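
For example (placeholder instance ID), detailed 1-minute monitoring can be switched on per instance:

import boto3

ec2 = boto3.client('ec2')

# Switch an instance from basic 5-minute metrics to detailed
# 1-minute metrics (extra charges apply).
ec2.monitor_instances(InstanceIds=['i-0123456789abcdef0'])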

 


 

See: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html

 


In DynamoDB, a database is a collection of tables, a table is a collection of items, and each item is a collection of attributes.
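
A minimal boto3 sketch of that hierarchy (table and attribute names are made up):

import boto3

dynamodb = boto3.client('dynamodb')

# Only the key schema is declared up front; every other attribute
# lives per-item.
dynamodb.create_table(
    TableName='users',
    KeySchema=[{'AttributeName': 'user_id', 'KeyType': 'HASH'}],
    AttributeDefinitions=[{'AttributeName': 'user_id', 'AttributeType': 'S'}],
    BillingMode='PAY_PER_REQUEST',
)
dynamodb.get_waiter('table_exists').wait(TableName='users')

# An item is just a collection of attributes.
dynamodb.put_item(
    TableName='users',
    Item={'user_id': {'S': 'u-1'}, 'name': {'S': 'Matt'}},
)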

 

You connect to your EC2 Windows instance using RDP. You connect to your EC2 Linux instance using SSH.

 

An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html

 


https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html

 


See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html

 


Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. By using these frameworks and related open-source projects, such as Apache Hive and Apache Pig, you can process data for analytics purposes and business intelligence workloads. See: https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-what-is-emr.html

 


 

 

 

 

See: https://aws.amazon.com/blogs/aws/archive-s3-to-glacier/

 


IAM is a feature of your AWS account offered at no additional charge. You will be charged only for use of other AWS services by your users.

 

An IAM group is a collection of IAM users. Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html

 


You can and should use an IAM role to manage temporary credentials for applications that run on an EC2 instance. When you use a role, you don't have to distribute long-term credentials (such as a user name and password or access keys) to an EC2 instance. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
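
For illustration, code running on such an instance needs no stored keys at all:

import boto3

# On an EC2 instance with an IAM role attached, boto3 finds temporary
# credentials automatically (via the instance metadata service); no
# access keys are stored or distributed to the instance.
s3 = boto3.client('s3')
print(s3.list_buckets()['Buckets'])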

 


Amazon Kinesis Data Streams enables you to build custom applications that process or analyze streaming data for specialized needs. Kinesis Data Streams can continuously capture and store terabytes of data per hour from hundreds of thousands of sources such as website clickstreams, financial transactions, social media feeds, IT logs, and location-tracking events.

 

 

See: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html

 


When Multi-AZ is enabled on RDS, the standby replica instance will be located in a different availability zone.

 

A bucket name must be unique across all existing bucket names in Amazon S3. See: https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html

 


See: https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html

 


See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html

 


 

Network ACLs operate at the subnet level and evaluate traffic entering and exiting a subnet. See: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html

 


 

See: https://aws.amazon.com/premiumsupport/trustedadvisor/

 


 

See: https://aws.amazon.com/ses/

 


 

 

 

 

Long polling helps reduce your cost of using Amazon SQS by reducing the number of empty responses. You can enable long polling using the AWS Management Console by setting a Receive Message Wait Time to a value greater than 0. See: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-long-polling.html
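
A minimal boto3 sketch (the queue URL is a placeholder):

import boto3

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/example'  # placeholder

# Enable long polling on the queue itself...
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={'ReceiveMessageWaitTimeSeconds': '20'},
)

# ...or per request: wait up to 20 seconds instead of returning empty
# responses immediately, which reduces the number of billed API calls.
messages = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)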

 


When you stop an instance, AWS shuts it down. AWS doesn't charge usage for a stopped instance, or data transfer fees, but does charge for the storage of any Amazon EBS volumes. See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html

 


You can only stop and restart your instance if it has an Amazon EBS volume as its root device. See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html

 


AWS Organizations provides consolidated billing so that you can track the combined costs of all the member accounts in your organization. See: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/useconsolidatedbilling-procedure.html

 


See: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html

 


See: https://aws.amazon.com/compliance/shared-responsibility-model/

 


Moving to the cloud shifts capital expenditure on physical hardware to pay-as-you-go operating expenditure.

 

See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html

 


 

See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-regions-availability-zones

 


 

 

 

 

An EBS volume can be attached to only one instance at a time within the same Availability Zone. However, multiple volumes can be attached to a single instance. See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html
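
For example (placeholder IDs):

import boto3

ec2 = boto3.client('ec2')

# The volume and the instance must be in the same Availability Zone.
ec2.attach_volume(
    VolumeId='vol-0123456789abcdef0',
    InstanceId='i-0123456789abcdef0',
    Device='/dev/sdf',
)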

 


See: https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html

 


By default, Amazon RDS enables automated backups of your DB instance with a 7-day retention period. See: https://aws.amazon.com/rds/faqs/

 


See: https://aws.amazon.com/rds/oracle/faqs/

 


You cannot access the underlying operating system of an RDS instance.

 

An IAM group is a collection of IAM users. Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups.html

 

=================================================

https://free-braindumps.com/amazon/free-aws-certified-cloud-practitioner-braindumps.html

 


 

https://d1.awsstatic.com/training-and-certification/docs-cloud-practitioner/AWS-Certified-Cloud-Practioner_Sample_Questions_v1.1_FINAL.PDF


https://www.passapply.com/online-pdf/aws-certified-cloud-practitioner.pdf


 


 

 

AWS Cloud Practitioner Essentials (Second Edition) (Korean)

 

AWS training and certification

 

www.aws.training

Unlike the previous course, the Second Edition has its slides (PPT) in Korean, and Korean subtitles appear at the bottom of the screen.

The content is almost the same, though some topics are covered in a bit more detail (e.g., Amazon Inspector, AWS Shield).

There are 30 knowledge-check questions at the very end.

 

Info

Description

This foundational-level course is intended for individuals who want an overall understanding of the AWS Cloud, regardless of specific technical roles. It provides a detailed overview of cloud concepts, AWS services, security, architecture, pricing, and support. The course also helps you prepare for the AWS Certified Cloud Practitioner exam.

Intended audience

This course is intended for:

  • Sales
  • Legal
  • Marketing
  • Business analysts
  • Project managers
  • AWS Academy students
  • Other IT-related professionals

Course objectives

In this course, you will learn to:

  • Define what the cloud is and how it works
  • Differentiate between cloud computing and deployment models
  • Describe the AWS Cloud value proposition
  • Describe the basic global infrastructure of the cloud
  • Compare the different methods of interacting with AWS
  • Describe and differentiate between AWS service domains
  • Describe the Well-Architected Framework
  • Explain basic AWS Cloud architectural principles
  • Explain the shared responsibility model
  • Describe security services of the AWS Cloud
  • Define the billing, account management, and pricing models of the AWS platform

Recommended prerequisites

We recommend that attendees of this course have the following prerequisites:

  • General IT technical knowledge
  • General IT business knowledge

Delivery method

This course is delivered through:

  • Digital training

Duration

6 hours

Course outline

This course covers the following concepts:

  • Introduction to cloud concepts
  • AWS core services
  • AWS advanced services
  • Designing AWS architecture
  • Security
  • Pricing and support
 

 

 

 

 

 

 

 

 

 

Useful sites

 

http://blog.naver.com/PostView.nhn?blogId=ambidext&logNo=221682330430

 

AWS certification dumps and study sites (Korean blog post, blog.naver.com)

https://gist.github.com/serithemage/9993400aa483c95ade954a1e36b1004b#comments

 

AWS study resources collection (GitHub Gist, gist.github.com)

 


AWS Cloud Practitioner Essentials (Digital) (Korean) - 02

 

AWS training and certification

 

www.aws.training

 

 

 

https://github.com/yoonhok524/aws-certifications/tree/master/0.%20Cloud%20Practitioner

 

yoonhok524/aws-certifications: notes for AWS certification prep (github.com)

 

- AWS 보안 소개
안녕하세요. 
 저는 Amazon Web Services(AWS) 교육 및 자격증 팀의 Jody Soeiro de Faria입니다. 
 오늘은 AWS 보안에 대해 소개하려고 합니다. 
 AWS는 높은 가용성과 신뢰성을 위해 설계된 확장 가능한 클라우드 컴퓨팅 플랫폼을 제공하며, 이를 통해 고객이 다양한 애플리케이션을 실행할 수 있는 유연성을 제공합니다. 
 고객 시스템과 데이터의 기밀성, 무결성 및 가용성을 보호하도록 지원하는 것은 고객 신뢰와 자신감을 유지하는 것만큼이나 AWS에 최고로 중요한 사안입니다. 
 이 모듈은 AWS 환경의 제어 및 AWS가 보안 목표를 달성하기 위해 고객에게 제공하는 일부 제품 및 기능을 비롯한 AWS의 보안 접근 방식을 소개하기 위한 목적으로 마련되었습니다. 
 모든 AWS 고객은 보안에 가장 민감한 기업의 요구 사항도 충족시키도록 구축된 데이터 센터 및 네트워크 아키텍처를 활용할 수 있습니다. 
 즉, 기존 데이터 센터의 자본 지출 및 운영 비용 없이도 높은 보안성을 위해 설계된 탄력적인 인프라를 얻을 수 있습니다. 
 AWS 인프라는 강력한 보안 조치를 적용하여 고객의 개인 정보를 보호합니다. 
 AWS는 고객의 의견을 AWS 서비스에 지속적으로 반영하면서 대규모로 빠르게 혁신합니다. 
 Identity and Access Management(IAM), 로깅 및 모니터링, 암호화 및 키 관리, 네트워크 세분화, 비용이 거의 또는 전혀 들지 않는 표준 DDoS 차단과 같은 핵심 보안 서비스를 지속적으로 발전시켜 나가는 등 AWS 솔루션은 시간이 지날수록 더 좋아지기 때문에 이러한 지속적인 개선은 고객에게 이익이 됩니다. 
 이와 더불어 전 세계 보안 동향을 심층적으로 이해하고 있는 엔지니어들이 고안한 고급 보안 서비스도 제공하므로 새로 등장하는 위험을 선제적으로 실시간 처리할 수 있습니다. 
 게다가 이용한 서비스만큼의 가격만, 그것도 저렴하게 지불하면 됩니다. 
 다시 말해 선행 투자 없이 사업 성장에 발맞춰 필요한 보안 서비스를 선택할 수 있고, 자체 인프라를 관리하는 것보다 운영 비용이 훨씬 더 저렴합니다. 
 제대로 보안 조치를 취한 환경은 규정에 맞는 환경이 됩니다. 
 규제 대상 워크로드를 AWS 클라우드로 마이그레이션하면 AWS의 다양한 거버넌스 지원 기능을 이용하여 보안 수준을 한층 강화할 수 있습니다. 
 클라우드 기반의 거버넌스는 향상된 관리 기능, 보안 제어 및 중앙 자동화를 통해 낮은 초기 비용, 좀 더 용이한 운영, 향상된 민첩성 등의 이점을 제공합니다. 
 AWS를 선택하면 AWS가 이미 운영 중인 수많은 보안 제어 기능을 물려받을 수 있기 때문에 유지해야 할 보안 제어 기능의 수가 줄어듭니다. 
 자체적인 규정 준수 및 인증 프로그램은 강화되고, 그와 동시에 기업 고유의 보안 보증 요구 사항을 유지 관리하고 운영하는 데 드는 비용은 절감됩니다. 
 AWS와 파트너는 보안 목표를 달성하는 데 도움이 되는 광범위한 도구와 기능을 제공합니다. 
 이러한 도구는 자체 온프레미스 환경에 배포되는 제어 기능을 결합합니다. 
 AWS는 네트워크 보안, 구성 관리, 액세스 제어 및 데이터 보안 전반에 걸쳐 보안 전용 도구 및 기능을 제공합니다. 
 또한 AWS는 모니터링 및 로깅 도구를 통해 고객 환경에서 발생하는 상황에 대한 완전한 가시성을 제공합니다. 
 AWS는 개인 정보 보호 및 네트워크 액세스 제어를 개선하기 위한 여러 보안 기능과 서비스를 제공합니다. 
 여기에는 AWS 내에서 프라이빗 네트워크를 생성할 수 있는 내장 방화벽, 인스턴스 및 서브넷에 대한 네트워크 액세스 제어 기능, 모든 서비스에서 TLS(전송 계층 보안)로 전송되는 암호화, 사무실 또는 온프레미스 환경에서 프라이빗 또는 전용 연결을 활성화하는 연결 옵션, Auto Scaling 또는 콘텐츠 전송 전략의 일부인 DDoS 완화 기술이 포함됩니다. 
 AWS는 클라우드 리소스가 조직의 표준 및 모범 사례를 계속 준수하도록 보장하는 한편 빠르게 이동할 수 있게 지원하는 광범위한 도구를 제공합니다. 
 여기에는 조직 표준에 따라 AWS 리소스의 생성 및 폐기를 유지 관리하기 위한 배포 도구, AWS 리소스를 식별하고 시간 경과에 따라 해당 리소스의 변경 사항을 추적하고 관리하기 위한 인벤토리 및 구성 관리 도구, Amazon EC2 인스턴스에 대해 미리 구성된 강력한 표준 가상 머신(VM)을 생성하기 위한 템플릿 정의 및 관리 도구가 포함됩니다. 
 AWS는 클라우드에 보관된 데이터에 대한 보안 계층을 추가할 수 있는 기능을 통해 확장 가능하고 효율적인 암호화 기능을 제공합니다. 
 여기에는 Amazon EBS, Amazon S3, Amazon Glacier, Oracle RDS, SQL Server RDS 및 Amazon Redshift와 같은 AWS 스토리지 및 데이터베이스 서비스에서 사용 가능한 데이터 암호화 기능, AWS에서 암호화 키를 관리하도록 할지 아니면 직접 키를 완벽하게 제어할지를 선택할 수 있는 유연한 키 관리 옵션, 고객이 규정 준수 요구 사항을 충족하도록 지원하는 전용 하드웨어 기반 암호화 키 스토리지 옵션이 포함됩니다. 
 또한 AWS는 암호화 및 데이터 보호 기능을 AWS 환경에서 개발하거나 배포하는 서비스와 통합하기 위한 API를 제공합니다. 
 AWS는 AWS 서비스 전반에서 사용자 액세스 정책을 정의하고 적용하며 관리할 수 있는 기능을 제공합니다. 
 여기에는 AWS 리소스에 대해 권한을 가진 개별 사용자 계정을 정의할 수 있는 Identity and Access Management(IAM) 기능, 하드웨어 기반 인증자용 옵션을 포함하여 권한 있는 계정에 대한 멀티 팩터 인증(MFA), 관리 비용을 줄이고 사용자 경험을 개선하기 위한 기업 디렉터리와의 통합 및 연동이 포함됩니다. 
 AWS는 많은 서비스 전반에 걸쳐 기본 Identity and Access Management(IAM) 통합 및 API와 애플리케이션 또는 서비스의 통합을 제공합니다. 
 AWS는 AWS 환경에서 발생하는 상황을 확인할 수 있는 도구와 기능을 제공합니다. 
 여기에는 호출을 수행한 대상, 호출 항목, 호출 위치를 비롯하여 API 호출에 대한 깊은 가시성, 로그 집계 및 옵션, 조사 및 규정 준수 보고 간소화, 특정 이벤트가 발생하거나 임계값을 초과할 경우의 알림 전송이 포함됩니다. 
 이러한 도구 및 기능을 사용하면 비즈니스에 영향을 미치기 전에 문제를 파악하기 위한 가시성을 확보할 수 있으며, 보안 상태를 개선하고 환경의 위험 프로파일을 줄일 수 있습니다. 
 AWS Marketplace는 맬웨어 방지, 웹 애플리케이션 방화벽, 침입 방지 등 기존 제어 및 온프레미스 환경과 동등하거나 동일하거나 이와 통합되는 업계 최고 수준의 파트너 제품 수백 개를 제공합니다. 
 이 제품은 기존 AWS 클라우드 서비스를 보완하여 고객이 종합적인 보안 아키텍처를 배포하고 클라우드 및 온프레미스 환경에서 보다 매끄러운 환경을 구축할 수 있도록 합니다. 
 고객 시스템과 데이터를 안전하게 지키는 것은 고객 신뢰와 자신감을 유지하는 것만큼이나 AWS에 최고로 중요한 사안입니다. 
 이 동영상에서는 AWS 환경의 제어 및 AWS가 보안 목표를 달성하기 위해 고객에게 제공하는 일부 제품 및 기능을 비롯한 AWS의 보안 접근 방식을 소개했습니다. 
 저는 Amazon Web Services(AWS) 교육 및 자격증 팀의 Jody Soeiro de Faria였습니다. 

- AWS 공동 책임 모델
안녕하세요. 
 저는 Amazon Web Services 교육 및 자격증 팀의 Jody Soeiro de Faria입니다. 
오늘은 공동 책임 모델의 개요에 대해 살펴보겠습니다. 
고객이 AWS를 사용하기 시작하면 Amazon은 AWS의 데이터 보안에 대한 책임을 고객과 공유합니다. 
 이 개념을 클라우드 보안의 공동 책임 모델이라고 합니다. 
AWS와 고객이 이 모델에서 어떤 보안에 대한 책임을 지는지 자세히 알아보겠습니다. 
AWS에게는 클라우드 자체의 보안이라는 책임이 있습니다. 
 이 말이 무슨 뜻입니까?공동 책임 모델에서 AWS는 호스트 운영 체제 및 가상화 계층부터 서비스 운영 시설의 물리적인 보안에 이르기까지 구성 요소를 운영, 관리 및 제어합니다. 
 즉, AWS는 리전, 가용 영역 및 엣지 로케이션을 비롯하여 AWS 클라우드에 제공된 모든 서비스를 실행하는 글로벌 인프라를 보호할 책임이 있습니다. 
이 인프라를 보호하는 것은 AWS의 최우선 과제이며 고객이 AWS 데이터 센터나 사옥에 방문하여 직접 확인할 수는 없지만, Amazon은 각종 컴퓨터 보안 표준과 규정에 대한 준수 사실을 확인한 타사 감사자의 보고서를 제공합니다 AWS는 이 글로벌 인프라를 보호할 뿐만 아니라, 컴퓨팅, 스토리지, 데이터베이스 및 네트워킹을 포함하여 기본적인 서비스로 간주되는 해당 제품의 보안 구성도 책임집니다. 
 이러한 유형의 서비스에는 Amazon DynamoDB, Amazon RDS, Amazon Redshift, Amazon Elastic MapReduce, Amazon WorkSpaces 및 여러 다른 서비스가 포함됩니다. 
 이러한 서비스의 경우, 게스트 운영 체제(OS), 데이터베이스 패치, 방화벽 구성, 재해 복구 등의 기본 보안 작업을 AWS가 처리합니다. 
 고객은 안티바이러스 패치 작업, 유지 관리 또는 설치 등에 대해 걱정할 필요가 없습니다. 
 Amazon이 대신 처리하므로 고객은 실제로 플랫폼에 포함되는 대상에 집중할 수 있습니다. 
대부분의 이러한 관리형 서비스의 경우 고객은 리소스에 대한 논리적 액세스 제어를 구성하고 계정 자격 증명을 보호해야 합니다. 
 일부 관리형 서비스의 경우 데이터베이스 사용자 계정 설정과 같은 추가 작업이 필요할 수 있지만 전반적인 보안 구성 작업은 서비스에 의해 수행됩니다. 
 클라우드 인프라는 AWS에 의해 보안이 유지되고 유지 관리되지만 고객은 클라우드 내부에 배치하는 모든 요소의 보안에 대한 책임을 갖습니다. 
AWS 서비스를 이용하는 고객은 콘텐츠를 완벽하게 제어할 수 있으며, 다음과 같은 중요 콘텐츠 보안 요구 사항을 관리해야 할 책임이 있습니다. 
 · AWS에 저장하기로 결정한 콘텐츠 · 콘텐츠와 함께 사용되는 AWS 서비스 · 콘텐츠가 저장되는 국가 · 콘텐츠의 형식과 구조 및 마스크, 익명화 또는 암호화 여부 · 콘텐츠에 액세스할 수 있는 사용자 · 그러한 액세스 권한을 부여, 관리 및 취소하는 방법 고객은 데이터, 플랫폼, 애플리케이션, 자격 증명 및 액세스 관리, 운영 체제를 보호하기 위해 구현하기로 선택한 보안 기능을 제어할 수 있습니다. 
 즉, 공동 책임 모델은 고객이 사용하는 AWS 서비스에 따라 본질적으로 달라집니다. 
또한 AWS Service Catalog를 사용하여 가상 머신 이미지, 서버, 소프트웨어, 데이터베이스 등 AWS에서 사용하도록 승인한 IT 서비스의 카탈로그를 만들고 관리하여 다층적 애플리케이션 아키텍처를 완성할 수 있습니다. 
 AWS Service Catalog는 고객이 공통적으로 배포되는 IT 서비스를 중앙에서 관리하도록 하며, 사용자가 자신에게 필요한 승인된 IT 서비스만 빠르게 배포하도록 하는 한편, 고객이 일관된 거버넌스를 달성하고 규정 준수 요구 사항을 충족하도록 도와줍니다. 
AWS 공동 책임 모델을 시각화하기 위해 예제를 살펴보겠습니다. 
고객이 스토리지에 Amazon S3를 사용하고 데스크톱 및 애플리케이션 스트리밍에 Amazon Workspaces를 사용한다고 가정해 보겠습니다. 
 또한 EC2 인스턴스와 Oracle DB 인스턴스로 구성된 Virtual Private Cloud(VPC)도 있습니다. 
AWS는 AWS 클라우드의 모든 서비스를 실행하는 글로벌 인프라를 보호할 책임이 있습니다. 
 AWS 글로벌 인프라는 보안 모범 사례를 비롯한 다양한 보안 규정 준수 표준에 따라 설계 및 관리됩니다. 
 Amazon EC2 및 Amazon VPC처럼 IaaS(서비스로서의 인프라) 범주에 해당하는 AWS 제품은 고객이 전적으로 제어할 수 있으며 필요한 모든 보안 구성과 관리 작업을 직접 수행해야 합니다. 
 예를 들어 EC2 인스턴스의 경우 고객은 게스트 OS(업데이트 및 보안 패치 포함)를 비롯하여 인스턴스에 설치한 모든 애플리케이션 소프트웨어나 유틸리티의 관리, 그리고 각 인스턴스에 대해 AWS에서 제공한 방화벽(보안 그룹이라고 부름)의 구성을 책임져야 합니다. 
 이는 서버 위치와 상관없이 기존에 수행했던 보안 작업과 기본적으로 동일합니다. 
 오늘 우리가 학습한 내용을 빠르게 복습해 보겠습니다. 
· 공동 책임 모델은 AWS 및 클라우드상의 데이터 보안을 위해 협력하는 고객으로 구성됩니다. 
 · AWS는 클라우드 자체의 보안에 대해 책임을 지는 반면 고객은 클라우드 내부의 보안에 대해 책임을 집니다. 
  · 고객은 사용하고 있는 AWS 서비스에 따라 구현하기로 선택한 보안에 대한 제어권을 갖습니다. 
· 고객은 AWS Service Catalog를 사용하여 AWS에서 사용하도록 승인한 IT 서비스의 카탈로그를 만들고 관리할 수 있습니다. 
· Amazon EC2 및 Amazon VPC처럼 IaaS 범주에 해당하는 AWS 제품은 고객이 전적으로 제어할 수 있으며 필요한 모든 보안 구성과 관리 작업을 직접 수행해야 합니다. 
저는 Amazon Web Services(AWS) 교육 및 자격증 팀의 Jody Soeiro de Faria였습니다. 

- AWS 엑세스 제어 및 관리
안녕하세요. 
 Amazon Web Services 교육 및 자격증 팀의 Jody Soeiro de Faria입니다. 
오늘은 액세스 제어 및 관리의 개요에 대해 살펴보겠습니다. 
AWS Identity and Access Management(IAM)는 AWS 리소스에 대한 고객 및 사용자의 액세스를 안전하게 제어하는 웹 서비스입니다. 
 고객은 IAM을 사용하여 AWS 리소스를 사용할 수 있는 사용자(인증이라고 함)와 리소스를 사용하는 방법(권한 부여라고 함)을 제어합니다. 
AWS Identity and Access Management(IAM)를 사용하면 AWS 클라우드에서 컴퓨팅, 스토리지, 데이터베이스 및 애플리케이션 서비스에 대한 액세스를 관리할 수 있습니다. 
사용자(최종 사용자), 그룹(작업 기능별 사용자 모음), 권한(사용자 또는 그룹에 적용 가능), 역할(신뢰할 수 있는 엔터티) 등 이미 익숙한 액세스 제어 개념에 대해 생각해 보십시오. 
 IAM은 바로 이러한 기능을 사용하여 아주 강력한 성능을 갖게 됩니다!또한, AWS 사용자 및 그룹을 만들고 관리하며 AWS 리소스에 대한 액세스를 허용 및 거부할 수 있습니다. 
 IAM의 기능을 자세히 살펴보겠습니다. 
AWS IAM으로 다음을 수행할 수 있습니다. 
· IAM 사용자 및 액세스 관리 - IAM에서 사용자를 생성하거나, 사용자에게 개별 보안 자격 증명(즉, 액세스 키, 암호, 멀티 팩터 인증 디바이스)을 할당하거나, AWS 서비스 및 리소스에 대한 액세스를 제공할 수 있도록 임시 보안 자격 증명을 요청할 수 있습니다. 
 사용자가 수행할 수 있는 작업을 제어하기 위해 권한을 관리할 수 있습니다. 
· IAM 역할 및 해당 권한 관리 - IAM에서 역할을 생성하고, 역할을 가정하는 엔터티 또는 AWS 서비스로 수행할 수 있는 작업을 제어하는 권한을 관리할 수 있습니다. 
 · 연합된 사용자 및 해당 권한 관리 - 자격 증명 연동을 사용하면 자격 증명별로 IAM 사용자를 생성하지 않고도, 기업 디렉터리에서 기존 자격 증명(사용자, 그룹 및 역할)으로 AWS Management Console에 액세스하고 AWS API를 호출하며 리소스에 액세스할 수 있습니다. 
 Amazon Web Services(AWS) 계정을 처음 생성하면 계정의 모든 AWS 서비스 및 리소스에 대한 완전한 액세스 권한을 가진 단일 로그인 자격 증명으로 시작하게 됩니다. 
 이 자격 증명은 AWS 계정 루트 사용자라고 하며, 계정을 생성할 때 사용한 이메일 주소와 암호로 로그인하여 액세스할 수 있습니다. 
루트 사용자 자격 증명에 대한 권한을 제한할 수 없으므로 루트 사용자 액세스 키를 삭제하는 것이 좋습니다. 
 관리자 수준의 권한이 필요한 경우, IAM 사용자를 생성하고, 이 사용자에게 전체 관리자 액세스 권한을 부여한 다음, 해당 자격 증명을 사용하여 AWS와 상호 작용할 수 있습니다. 
 권한을 취소하거나 수정해야 하는 경우, 해당 IAM 사용자에게 연결된 정책을 삭제하거나 수정할 수 있습니다. 
사용자를 추가할 때, 고객은 사용자가 AWS에 액세스하는 방법을 선택해야 합니다. 
 사용자에게 할당할 수 있는 액세스 유형에는 프로그래밍 방식 액세스 및 AWS Management Console 액세스라는 두 가지 액세스 방식이 있습니다. 
프로그래밍 방식 액세스는 AWS API, CLI, SDK 및 기타 개발 도구에 대한 액세스 키 ID 및 보안 액세스 키를 활성화합니다. 
 다른 옵션은 사용자가 AWS Management Console에 로그인할 수 있도록 허용하는 AWS Management Console 액세스 권한을 사용자에게 부여하는 것입니다. 
 AWS Management Console에서는 간편한 웹 인터페이스를 통해 Amazon Web Services를 사용할 수 있습니다. 
 AWS 계정 이름과 암호를 사용해 로그인할 수 있습니다. 
 AWS Multi-Factor Authentication이 활성화된 경우 디바이스의 인증 코드를 입력하라는 메시지가 표시됩니다. 
사용자는 인증을 받은 후 AWS 서비스에 액세스할 수 있는 권한을 부여받아야 합니다. 
사용자, 그룹 또는 역할에 권한을 할당하려면 권한을 명시적으로 나열하는 문서인 정책을 생성해야 합니다. 
 하나의 정책을 IAM 사용자, IAM 그룹 및 IAM 역할에 할당할 수 있습니다. 
IAM의 기본 개념에 대해 살펴보았으므로 이제 AWS Management Console에 로그인하여 사용자를 생성하고, 사용자를 그룹에 할당하고, 권한을 적용해 보겠습니다. 
로그인을 하면 홈페이지가 표시됩니다. 
 서비스를 위한 콘솔을 열려면 검색 상자에 서비스 이름을 입력한 다음 검색 결과 목록에서 원하는 서비스를 선택합니다. 
  예를 들어, EC2를 입력하면 EC2, EC2 Container Service 및 Elastic File System을 확인할 수 있습니다. 
 검색을 사용하지 않으려는 경우 서비스를 클릭하여 알파벳순으로 정렬된 전체 서비스 목록을 열 수 있습니다. 
 IAM을 찾아 클릭해 봅니다. 
IAM에 들어가면 IAM 사용자 로그인 링크가 표시됩니다. 
  웹 주소에 임의의 숫자가 있다는 점을 알 수 있습니다. 
 이 링크를 쉽게 사용자 지정할 수 있습니다. 
페이지 아래로 이동하면 Security Status 섹션이 표시됩니다. 
계속 진행하여 2명의 사용자를 생성해 봅니다. 
 첫 번째 사용자는 John Doe이고, 두 번째 사용자는 Bob Fields입니다. 
그 다음, 프로그래밍 방식과 AWS Management Console 액세스를 모두 제공할 수 있습니다. 
 또한 두 사용자의 콘솔 암호를 자동으로 생성하거나 할당하려는 경우 선택할 수도 있습니다. 
 password1이라는 암호를 할당해 보겠습니다. 
 또한 사용자가 로그인할 때 암호 재설정을 요구할 수도 있습니다. 
[Next: Permissions]를 클릭합니다. 
 이제 사용자를 그룹에 배치하거나 기존 사용자의 권한을 복사하거나 기존 정책을 직접 연결할 수 있는 옵션을 갖습니다. 
 앞서 언급했듯이 그룹을 생성하고 그룹에 연결할 정책을 선택할 수 있습니다. 
 그룹을 사용하는 것은 작업 기능별로 사용자의 권한을 관리할 수 있는 모범 사례입니다. 
 예를 들어 시스템 관리자, 재무 부서, 개발자 등을 위한 그룹을 생성할 수 있습니다. 
계속 진행하여 시스템 관리 그룹을 생성해 봅니다. 
 [Create Group]을 클릭합니다. 
[Group Name]에 system-admins를 입력합니다. 
그 다음, 이 그룹에 정책을 적용할 수 있습니다. 
 정책은 단순히 사용자 또는 그룹에 연결할 수 있는 권한의 문서라는 점을 기억해야 합니다. 
 그러면 시스템 관리자에게 부여하려는 권한은 무엇입니까? 시스템 관리자가 애플리케이션을 관리할 수 있게 해 보겠습니다. 
 Policy Type: admin을 검색해 봅니다. 
다양한 관리자 정책에 대한 수많은 결과를 확인할 수 있습니다. 
AdministratorAccess라는 상자를 확인해 봅니다. 
 그런 다음 [Create Group]을 클릭합니다. 
 [Next: Review]를 클릭합니다. 
 이제 방금 생성한 사용자 세부 정보 및 권한 요약을 검토할 수 있습니다. 
 좋습니다. 
 이제 [Create Users]를 클릭합니다. 
앞서 생성한 John과 Bob이 표시됩니다. 
 또한 이 사용자와 관련하여 생성된 다른 항목도 있습니다. 
  액세스 키 ID와 보안 액세스 키입니다. 
사용자가 명령줄 인터페이스 또는 SDK 및 API를 사용하여 프로그래밍 방식으로 AWS와 상호 작용하게 하려면 액세스 키 ID와 보안 액세스 키가 모두 필요합니다. 
 하지만 AWS Management Console에 로그인하려면 여기에 나열된 사용자 이름과 암호를 사용하게 됩니다. 
마지막 단계는 . 
CSV 파일을 다운로드하여 생성한 사용자의 기록과 로그인 정보를 얻는 것입니다. 
 이제 [Close]를 클릭합니다. 
이제 사용자인 John과 Bob이 한 그룹에 속해 있다는 점을 확인할 수 있습니다. 
 마지막으로 로그인한 시간도 확인할 수 있습니다. 
 또한 왼쪽의 탐색 항목을 사용하여 사용자를 분석하고, 정책을 추가하고, 정책을 분리하고, 그룹을 변경하고, 기타 사용자 관리 기능을 사용할 수 있습니다. 
계속 진행하여 대시보드로 돌아갑니다. 
 Security Status 확인 목록에 다음 몇 가지 항목이 확인된 상태에 있는 것을 볼 수 있습니다. 
· 개별 IAM 사용자를 생성했음. 
· 그룹을 사용하여 권한을 할당했음· 이제 마지막으로 해야 할 일은 IAM 암호 정책을 적용하는 것입니다. 
 계속 진행하여 [Apply an IAM password policy]를 클릭한 다음 [Manage Password Policy]를 클릭합니다. 
 이 화면에서 IAM 사용자가 설정할 수 있는 암호 유형을 정의하는 기본 규칙을 설정합니다. 
 비즈니스에 가장 적합한 특정 암호 정책을 설정했으면 계속 진행하여 [Apply password policy]를 클릭합니다. 
마지막으로, AWS Management Console에서 역할에 대한 작업을 수행해야 합니다. 
 계속 진행하여 왼쪽 탐색 창에서 [Roles]를 클릭합니다. 
 역할은 신뢰하는 엔터티에 권한을 부여하는 안전한 방식이라는 점을 기억하세요. 
 계속 진행하여 [Create Role]을 클릭합니다. 
AWS 서비스, 다른 AWS 계정, 웹 자격 증명 또는 SAML 2.0 연동과 같은 역할 유형을 클릭하여 선택할 수 있습니다. 
계속 진행하여 [AWS Service]를 클릭합니다. 
 S3에 파일을 저장하기 위해 EC2에 액세스해야 하는 역할을 생성하고 있다고 가정해 보겠습니다. 
  이제 사용 사례를 선택합니다. 
 첫 번째 옵션 “Allows EC2 instances to call AWS services on your behalf”를 선택합니다. 
 [Next: Permissions]를 클릭합니다. 
 이전에 그룹에서 확인했던 것처럼 권한을 할당하는 정책을 적용할 수 있습니다. 
EC2 인스턴스에 액세스하여 S3에 파일을 저장할 수 있으므로 이제 계속 진행하여 S3 정책을 검색해 보겠습니다. 
 [AmazonS3Full Access]를 클릭합니다. 
 [Next]를 클릭합니다. 
[Role Name]에 S3-Access를 추가합니다. 
  설명을 추가할 수도 있지만 필수는 아닙니다. 
 [Create Role]을 클릭합니다. 
  좋습니다. 
 새로운 역할이 생성되었습니다!  이 역할을 EC2 인스턴스에 적용할 수 있으며, 연결된 정책에 따라 S3 권한이 부여됩니다. 
축하합니다! 사용자를 생성했고, 그룹에 할당했으며, 작업 기능을 기반으로 권한을 적용했습니다. 
  또한 IAM 암호 정책을 적용하여 Security Status 확인 목록 항목을 완료했습니다. 
 그런 다음 역할을 생성했습니다. 
 이것이 신뢰할 수 있는 엔터티에 권한을 부여하는 멋진 방법이라는 점도 확인했습니다. 
 훌륭하네요!IAM의 작동 방식을 살펴보았으므로 이제 IAM의 모범 사례에 대해 알아보겠습니다. 
· AWS 루트 계정 액세스 키는 AWS 리소스에 대한 무제한 액세스를 제공하므로 삭제. 
 대신, IAM 사용자 액세스 키 또는 임시 보안 자격 증명 사용. 
· AWS 루트 계정에 대해 멀티 팩터 인증(MFA)을 활성화하여 계정을 안전하게 유지할 수 있는 또 다른 보호 계층 추가. 
 · IAM 사용자를 생성하고 필요한 권한만 부여. 
 AWS 루트 계정은 AWS 리소스에 대한 무제한 액세스를 제공하므로 AWS와의 일상적인 상호 작용에 해당 루트 계정을 사용하지 않음. 
 · IAM 그룹을 사용하여 IAM 사용자에게 권한을 할당해서 계정의 권한을 간편하게 관리하고 감사할 수 있음. 
· IAM 암호 정책을 적용하여 IAM 사용자가 강력한 암호를 만들고 암호를 정기적으로 교체하도록 요구. 
· Amazon EC2 인스턴스에서 실행되는 애플리케이션에 역할 사용· 자격 증명을 공유하기보다는 역할을 사용하여 위임· 자격 증명을 주기적으로 교체· 불필요한 사용자와 자격 증명을 제거· 보안 강화를 위해 정책 조건 사용· AWS 계정 내 활동을 모니터링요약하자면, AWS Identity and Access Management는 고객과 사용자가 AWS 리소스에 대한 액세스를 안전하게 제어할 수 있도록 도와줍니다. 
  이를 통해 액세스, 역할 및 권한과 더불어 사용자 및 그룹을 관리할 수 있습니다. 
저는 Amazon Web Services(AWS) 교육 및 자격증 팀의 Jody Soeiro de Faria였습니다.
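
The console walkthrough above maps to a handful of IAM API calls. A minimal boto3 sketch, using the names from the transcript purely as illustrations:

import json
import boto3

iam = boto3.client('iam')

# Users and a group, with permissions assigned via the group.
iam.create_group(GroupName='system-admins')
iam.attach_group_policy(
    GroupName='system-admins',
    PolicyArn='arn:aws:iam::aws:policy/AdministratorAccess',
)
for name in ('john.doe', 'bob.fields'):
    iam.create_user(UserName=name)
    iam.add_user_to_group(GroupName='system-admins', UserName=name)

# A role that EC2 instances can assume, granting S3 access.
trust = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Principal': {'Service': 'ec2.amazonaws.com'},
        'Action': 'sts:AssumeRole',
    }],
}
iam.create_role(RoleName='S3-Access', AssumeRolePolicyDocument=json.dumps(trust))
iam.attach_role_policy(
    RoleName='S3-Access',
    PolicyArn='arn:aws:iam::aws:policy/AmazonS3FullAccess',
)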

- AWS 보안 규정 준수 프로그램
안녕하세요. 
 저는 Amazon Web Services 교육 및 자격증 팀의 Jody Soeiro de Faria입니다. 
 오늘은 보안 규정 준수 프로그램의 개요에 대해 살펴보겠습니다. 
Amazon의 모든 것과 마찬가지로, AWS 보안 및 규정 준수 프로그램의 성공 여부를 가늠하는 단 한 가지 기본 조건은 바로 고객의 성공입니다. 
 AWS 고객들은 안전하고 규정에 맞는 클라우드 환경을 운영할 수 있도록 마련된 AWS의 규정 준수 보고서, 증명, 인증 포트폴리오를 믿고 따릅니다. 
 이러한 노력을 발판 삼아 AWS가 제공하는 확장성과 비용 절감 효과를 달성하는 동시에 강력한 보안 및 규정 준수의 혜택을 얻을 수 있습니다. 
 이 모듈에서는 다음 주제에 대해 살펴보겠습니다. 
· 보증 프로그램을 비롯한 AWS의 규정 준수 접근 방식. 
· 위험 관리, 제어 환경 및 정보 보안과 같은 AWS 위험 및 규정 준수 프로그램. 
· 마지막으로, AWS 고객 규정 준수 책임. 
공동 보안 책임 모델에서 AWS와 고객은 IT 환경의 보안을 함께 제어합니다. 
 이 공동 책임 모델에서 AWS가 책임져야 할 부분에는 매우 안전하게 관리되는 플랫폼에서 서비스를 제공하고 고객이 사용할 수 있는 다양한 보안 기능을 지원하는 일이 포함됩니다. 
 고객사의 책임에는 목적에 맞춰 안전하고 통제된 방식으로 IT 환경을 조성하는 것이 포함됩니다. 
 고객이 IT 환경의 용도와 구성을 AWS에 알리지 않더라도 AWS는 고객과 관련된 보안 및 제어 환경을 알려야 합니다. 
 AWS는 다음과 같은 방법으로 이를 수행합니다. 
 · 산업 인증 및 독립적인 타사 인증 획득· 백서와 웹 사이트 콘텐츠를 통한 AWS 보안 및 규제 관행에 대한 정보 공개
 · NDA 체결 후, AWS 고객사에 인증서, 보고서 및 기타 문서 직접 제공(필요한 경우). 
AWS는 외부 인증 기관 및 독립 감사자와 협력하여 고객에게 AWS에서 확립 및 운영하는 정책, 프로세스 및 컨트롤에 대한 다양한 정보를 제공합니다. 
 규정 준수 인증 및 검증은 독립적인 외부 감사 기관이 평가하여 인증, 감사 보고 또는 규정 준수 검증의 형태로 결과가 나타납니다. 
 AWS 고객은 관련 규정 준수 법률 및 규정을 준수할 책임이 있습니다. 
 경우에 따라 AWS는 고객 규정 준수를 지원하기 위한 기능(보안 기능 등), 인에이블러 및 법률 계약(AWS 데이터 처리 계약 및 비즈니스 제유 계약 등)을 제공합니다. 
 규정 준수 편성 및 프레임워크에는 특정 업계나 기능과 같은 특정 목적으로 게시된 보안 또는 규정 준수 요건이 포함됩니다. 
 AWS는 이러한 유형의 프로그램에 맞는 기능(보안 기능 등)과 인에이블러(규정 준수 플레이북, 매핑 문서 및 백서 포함)를 제공합니다. 
AWS는 고객이 거버넌스 프레임워크에 AWS 컨트롤을 통합할 수 있는 위험 및 규정 준수 프로그램에 대한 정보를 제공합니다. 
 이 정보는 고객이 프레임워크의 중요한 부분으로 포함된 AWS를 통해 전체 제어 및 거버넌스 프레임워크를 문서화할 수 있도록 지원합니다. 
 AWS 위험 및 규정 준수 프로그램은 다음 세 가지 요소로 구성됩니다. 
· 위험 관리
· 제어 환경
· 정보 보안

각 AWS 위험 및 규정 준수 프로그램을 자세히 살펴보겠습니다. 
 AWS 관리 팀은 위험 식별과 위험을 완화 또는 관리할 수 있는 컨트롤 구현을 포함하는 전략적 비즈니스 계획을 개발했습니다. 
 AWS 관리 팀은 최소한 1년에 두 번 이상 전략적 비즈니스 계획을 재평가합니다. 
 이 프로세스에서는 관리 팀이 책임 영역 내의 위험을 식별하고 그러한 위험을 해결할 수 있도록 고안된 적절한 대책을 구현해야 합니다. 
또한 AWS 제어 환경은 다양한 내부 및 외부 위험 평가를 거칩니다. 
 AWS 규정 준수 및 보안 팀은 다음 관리 기관을 기반으로 정보 보안 프레임워크 및 정책을 구축했습니다. 
· COBIT(Control Objectives for Information and related Technology)
· AICPA(미국 공인회계사 협회)
· NIST(National Institute of Standards and Technology)

AWS는 보안 정책을 유지하고, 직원에게 보안 교육을 제공하며, 애플리케이션 보안 검토를 수행합니다. 
 이러한 검토는 정보 보안(IS) 정책에 대한 일치성뿐 아니라 데이터의 기밀성, 무결성 및 가용성도 평가합니다. 
AWS 보안 팀은 정기적으로 모든 인터넷 연결 서비스 엔드포인트 IP 주소를 검사하여 취약성이 있는지 확인합니다(검사는 고객 Amazon EC2 인스턴스 인터페이스에서 수행되지 않음). 
 AWS 보안 팀은 확인된 취약성을 해결하기 위해 해당 당사자에게 취약성을 알립니다. 
 또한 독립적인 보안 회사에서 정기적으로 외부 취약성 위협 평가를 수행합니다. 
 이러한 평가 결과 확인된 내용과 권장사항이 범주화되어 AWS 책임자에게 전달됩니다. 
 이러한 검사는 기본 AWS 인프라의 상태와 실현가능성을 확인하는 방식으로 수행되며, 특정 규정 준수 요건을 충족하는 데 필요한 고객의 자체 취약성 검사를 대체하기 위한 의도로 제공되지 않습니다. 
 고객은 검사가 고객의 인스턴스에 국한되고 AWS Acceptable Use Policy를 위반하지 않는 범위에서 클라우드 인프라 검사를 수행할 수 있는 권한을 요청할 수 있습니다. 
AWS는 Amazon의 전체 통제 환경의 다양한 측면을 활용하는 정책, 프로세스 및 통제 활동을 포함하는 포괄적인 통제 환경을 관리합니다. 
 이 통제 환경은 AWS 서비스 제품군을 안전하게 제공하기 위해 마련되었습니다. 
 이 집합적인 제어 환경은 AWS 제어 프레임워크의 운영 효과를 지원하는 환경을 구성 및 관리하는 데 필요한 인력, 프로세스 및 기술을 포괄합니다. 
 AWS는 선도적인 클라우드 컴퓨팅 산업 기관에서 확인한 적용 가능한 클라우드 관련 컨트롤을 AWS 제어 프레임워크에 통합했습니다. 
 AWS는 고객이 제어 환경을 관리할 수 있도록 더 효과적으로 지원하는 주요 사례를 구현하기 위해 이러한 산업 그룹을 지속적으로 모니터링합니다. 
AWS는 고객 시스템 및 데이터의 기밀성, 무결성, 가용성을 보호할 수 있도록 고안된 공식적인 정보 보안 프로그램을 구현했습니다. 
 AWS는 공개 웹 사이트에 AWS에서 고객이 데이터를 안전하게 보호하도록 도울 수 있는 방법을 설명한 보안 백서를 게시합니다. 
온라인에서 사용 가능한 리소스를 살펴보겠습니다. 

 먼저 http://aws.amazon.com/compliance로 이동합니다. 
  지금까지 다루고 있는 주제에 대한 리소스를 찾을 수 있는 곳입니다. 
 예를 들어, 보증 프로그램을 클릭하면 사용 가능한 프로그램 목록이 표시됩니다. 
 미국을 클릭한 다음 아래로 스크롤하여 HIPAA를 클릭해 봅니다. 
  페이지를 스크롤하면서 많은 정보를 발견할 수 있습니다. 
  페이지 상단에는 HIPAA에 초점을 맞춘 백서가 있습니다. 
  해당 백서를 클릭하면 자세한 pdf 파일이 표시됩니다. 
  이 문서의 제목은 “Architecting for HIPAA Security and Compliance on Amazon Web Services”입니다. 
  목차로 스크롤하면 여기에 나열된 모든 서비스에 대한 HIPAA 규정 준수 정보를 확인할 수 있습니다. 
 이 사이트에 나열된 다른 보증 프로그램에 대해서도 유사한 정보를 찾을 수 있습니다. 
늘 그렇듯이 AWS 고객은 IT 배포 방식에 관계 없이 전체 IT 제어 환경에 대한 적절한 거버넌스를 지속적으로 유지해야 합니다. 
 주요 모범 사례: 
 · 필요한 규정 준수 목표 및 요건 이해(관련 소스에서)
 · 그러한 목표와 요건을 충족하는 제어 환경 구성
 · 조직의 위험 허용 범위에 따라 필요한 확인 이해
 · 제어 환경의 운영 효과 검증 
 
 AWS 클라우드 기반 배포를 통해 대기업에게 다양한 유형의 컨트롤과 다양한 검증 방법을 적용할 수 있는 여러 옵션을 제공할 수 있습니다. 
 강력한 고객 규정 준수와 거버넌스에는 다음과 같은 기본 접근법이 포함될 수 있습니다. 
 · AWS에서 제공하는 정보와 기타 정보를 함께 검토하여 전체 IT 환경을 최대한 이해한 다음 모든 규정 준수 요건을 문서화합니다. 
· 대기업의 규정 준수 요건을 충족하는 제어 목표를 수립하고 구현합니다. 
· 외부 당사자가 소유한 제어 기능을 식별하고 문서화합니다. 
· 모든 제어 목표가 달성되었으며 모든 주요 제어 환경이 효과적으로 설계 및 운영되고 있는지 확인합니다. 
고객은 AWS를 통해 규정 준수 및 거버넌스 프로세스에 참여하여 규정 준수 요건을 충족할 수 있습니다. 
Amazon Web Services 클라우드 규정 준수를 통해 고객은 클라우드에서 보안 및 데이터 보호를 유지하기 위한 AWS의 강력한 통제 환경을 이해할 수 있습니다. 
 시스템이 AWS 클라우드 인프라를 기반으로 구축되기 때문에 규정 준수와 관련된 책임이 공유됩니다. 
 AWS 규정 준수 프로그램은 기존 프로그램을 기반으로 하며, 고객이 AWS 보안 제어 환경을 구성하고 운영할 수 있도록 지원합니다. 
 저는 Amazon Web Services(AWS) 교육 및 자격증 팀의 Jody Soeiro de Faria였습니다. 
 
 
 - AWS 보안 리소스
 안녕하세요. 
 저는 Amazon Web Services 교육 및 자격증 팀의 Jody Soeiro de Faria입니다. 
 오늘은 보안 리소스의 개요에 대해 살펴보겠습니다. 
 앞서 언급했듯이 AWS는 다음을 수행하여 고객에게 보안 및 제어 환경을 전달합니다. 
· 산업 인증 및 독립적인 타사 증명
· 백서와 웹 콘텐츠를 통한 AWS 보안 및 규제 관행에 대한 정보
· NDA에 따라 AWS 고객에게 인증서, 보고서 및 기타 문서 직접 제공

AWS가 온라인 도구, 리소스, 지원 및 전문 서비스를 통해 클라우드에서 데이터를 보호할 수 있도록 고객에게 지침 및 전문 지식을 제공하는 방법에 대해 자세히 살펴보겠습니다. 
AWS Trusted Advisor는 맞춤형 클라우드 전문가와 같은 역할을 하는 온라인 도구로서, 모범 사례를 따르도록 리소스를 구성할 수 있도록 도와줍니다. 
 Trusted Advisor는 AWS 환경을 검사하여 보안 격차를 줄이고, 비용 절감과 시스템 성능 개선, 안정성 향상에 대한 기회를 찾을 수 있도록 도와줍니다. 
AWS 계정 팀은 첫 번째 연락 지점을 제공하여 배포 및 구현 과정을 안내하고, 직면할 수 있는 보안 문제를 해결하기 위한 최적의 리소스를 알려 줍니다. 
AWS Enterprise Support는 15분의 응답 시간을 제공하며 전담 기술 계정 관리자와 함께 전화, 채팅 또는 이메일을 통해 24시간 연중 무휴로 이용할 수 있습니다. 
 이 컨시어지 서비스는 고객의 문제가 최대한 신속하게 처리되도록 보장합니다. 
AWS Professional Services 및 AWS 파트너 네트워크는 충분히 검증된 설계를 기반으로 보안 정책과 절차를 개발하고 고객의 보안 설계가 내부 및 외부 규정 준수 요건을 충족하도록 지원합니다. 
 AWS 파트너 네트워크는 고객의 보안 및 규정 준수 요구 사항을 지원하는 전 세계 수백 개의 인증된 AWS 컨설팅 파트너를 보유하고 있습니다. 
AWS 자문 및 게시판을 통해 AWS는 현재의 취약성 및 위협에 대한 자문을 제공하고 고객이 AWS 보안 전문가와 협력하여 침해 사례, 취약성 및 침투 테스트와 같은 문제를 해결할 수 있게 합니다. 
감사, 규정 준수 또는 법적 역할을 수행하는 경우 AWS 감사자 학습 과정을 확인하면 내부 작업이 AWS 플랫폼을 사용하여 규정 준수를 증명할 수 있는 방법을 더 깊이 이해할 수 있습니다. 
 규정 준수 웹 사이트의 추천 교육, 자습형 실습 및 리소스 감사에 액세스할 수 있습니다. 
규정 준수를 어디부터 시작해야 할지 잘 모르거나 자주 사용하는 리소스 및 프로세스에 액세스해야 하는 경우 AWS 규정 준수 솔루션 안내서를 확인합니다. 
 공동 책임 모델 이해, 규정 준수 보고서 요청 또는 보안 질문서 작성과 같은 사용 가능한 규정 준수 솔루션에 대해 알아보십시오. 
기타 유용한 규정 준수 리소스에는 다음 항목이 포함됩니다. 
· 범위 내 서비스. 
 이 페이지는 현재 범위 내에 있는 서비스와 진행 중인 서비스에 대한 세부 정보를 보여 줍니다. 
· AWS 보안 블로그는 AWS 보안 프로그램의 최신 업데이트를 모두 추적할 수 있는 좋은 방법입니다. 
· 사례 연구는 보안과 관련된 AWS의 현재 고객 경험에 대한 통찰력 있는 정보를 제공합니다. 
· PCI, HIPAA, SOC, FedRAMP와 같은 특정 규정 준수 유형에 대한 자주 묻는 질문의 답변을 얻을 수도 있습니다. 
AWS 보안에 대한 자세한 내용을 확인하려면 <http://AWS.amazon.com/security>에 방문하십시오. 
 리소스 탭에서는 개발자 문서, 백서, 도움말 및 자습서로 연결되는 링크 및 AWS 보안 환경을 사용자 지정하는 제품을 검색할 수 있는 AWS Marketplace의 링크를 찾을 수 있습니다. 
 또한 http://AWS.amazon.com/compliance를 방문하고 리소스 탭을 클릭하여 규정 준수 리소스에 대한 자세한 내용을 확인할 수도 있습니다. 
 이 페이지의 리소스는 필수, 워크북, 개인 정보 보호 정책, 정부, 가이드 및 모범 사례 범주로 구성되어 있습니다. 
  추가 정보가 필요한 경우 프로덕션 환경을 구성하기 전에 지식을 적용하는 데 도움이 되는 라이브 또는 가상 강좌의 자습형 실습, 강의식 교육 학습 방식의 AWS 교육을 언제든지 수강할 수 있습니다. 
복습하자면, AWS는 AWS 클라우드에서 데이터의 보안을 보장하기 위해 여러 가지 서비스, 도구, 제품 및 리소스를 제공합니다. 
 http://aws.amazon.com/security 또는 http://aws.amazon.com/compliance를 방문하여 더 많은 리소스를 확인할 수 있습니다. 
 저는 Amazon Web Services(AWS) 교육 및 자격증 팀의 Jody Soeiro de Faria였습니다. 
 

 

 


- Well Architected 프레임워크 소개
안녕하세요. 
 저는 Amazon Web Services 교육 및 자격증 팀의 Anna Fox입니다. 
 오늘 동영상에서는 Well-Architected 프레임워크에 대해 간략히 소개하겠습니다. 
 AWS Well-Architected 프레임워크는 고객을 돕기 위한 목적입니다. 
 고객이 고유의 아키텍처를 평가 및 개선하고 설계에 대한 의사결정이 비즈니스에 미치는 영향을 더 확실히 파악할 수 있도록 돕기 위한 목적입니다. 
 AWS 전문가는 고객이 아키텍처를 분석하고 비평적으로 생각하는 데 도움이 되는 여러 가지 질문을 개발했습니다. 
 인프라가 모범 사례를 따르는지 확인하는 데 도움을 줍니다. 
 이를 통해 AWS는 다섯 가지 관점 또는 핵심 요소로부터 아키텍처를 설계하는 데 도움이 되는 안내서를 개발했습니다. 
 이러한 핵심 요소는 보안, 안정성, 성능 효율성, 비용 최적화 및 운영 우수성입니다. 
 이 동영상 전체에 걸쳐 각 핵심 요소를 자세히 살펴볼 것입니다. 
 또한 각 핵심 요소의 설계 원칙에 대해서도 논의해 보겠습니다. 
 이제 보안 핵심 요소를 알아보겠습니다. 
 보안 핵심 요소에는 위험 평가 및 완화 전략을 통해 정보, 시스템 및 자산을 보호하는 동시에 비즈니스 가치를 제공하는 능력이 포함됩니다. 
 여기에서 좀 더 자세히 살펴보자면, 클라우드 보안은 다섯 가지 영역으로 구성되어 있습니다. 
 각 영역에 대해 간략히 알아보겠습니다. 
 · 먼저 AWS Identity and Access Management(IAM)가 있습니다. 
 이는 고객이 의도한 방식에 따라 권한이 부여되고 인증된 사용자만 리소스에 액세스할 수 있도록 보장하는 데 매우 중요한 역할을 합니다. 
 · 그리고 탐지 제어도 있습니다. 
 탐지 제어는 로그를 캡처하거나 분석하고 감사 제어를 통합하는 것과 같은 접근 방식을 고려하여 잠재적인 보안 사고를 식별하는 데 사용할 수 있습니다. 
 · 그 다음, 인프라 보호도 있습니다. 
 이를 통해 아키텍처 내의 시스템과 서비스가 의도치 않은 무단 액세스로부터 보호됩니다. 
 예를 들어 사용자는 네트워크 경계, 강화 및 패치 작업, 사용자/키/액세스 수준, 애플리케이션 방화벽 또는 게이트웨이를 생성할 수 있습니다. 
 · 그리고 데이터 보호가 있습니다. 
 데이터 보호의 경우 고려해야 할 많은 접근 방식과 방법이 있습니다. 
 여기에는 데이터 분류, 암호화, 보관된 데이터 및 전송 중인 데이터 보호, 데이터 백업, 그리고 필요할 때의 복제 및 복구가 포함됩니다. 
 · 마지막으로, 사고 대응이 있습니다. 
 모든 예방 및 탐지 방법을 사용하더라도 조직은 잠재적 보안 사고에 대응하고 그 영향을 완화하기 위한 사고 대응 프로세스를 마련해야 합니다. 
 사고 대응은 적절한 시기에 복구 작업을 수행할 수 있도록 아키텍처가 업데이트되도록 보장합니다. 
 설계할 때에는 보안을 강화하는 데 도움이 되는 이러한 설계 원칙을 고려하는 것이 중요합니다. 
 AWS의 첫 번째 설계 원칙은 모든 계층에 보안을 적용하는 것입니다. 
 고객은 모든 장소와 모든 계층의 인프라를 보호하고자 합니다. 
 물리적 데이터 센터에서 보안은 일반적으로 주변에 대해서만 고려됩니다. 
 AWS를 사용하면 주변뿐만 아니라 리소스 내와 리소스 간에 보안을 구현할 수 있습니다. 
 이를 통해 개별 환경과 구성 요소가 서로 안전하게 보호됩니다. 
 그런 다음 AWS는 추적 기능을 활성화하고자 합니다. 
 고객은 환경에 대한 모든 작업 또는 변경 사항을 기록하고 감사하여 이를 수행할 수 있습니다. 
 또 다른 유용한 설계 원칙은 최소 권한의 원칙을 구현하는 것입니다. 
 기본적으로 고객은 환경의 권한 부여가 적절한지, AWS 리소스에 대한 강력한 논리적 액세스 제어를 구현하고 있는지 확인하고자 합니다. 
 다음으로, AWS는 고객이 시스템 보안에 집중하도록 보장하고 싶습니다. 
 AWS 공동 책임 모델을 사용하면 AWS에서 보안 인프라 및 서비스를 제공하므로 고객은 애플리케이션, 데이터 및 운영 체제 보안에 집중할 수 있습니다. 
 마지막으로, 기억해야 할 마지막 설계 원칙은 보안 모범 사례를 자동화하는 것입니다. 
 소프트웨어 기반 보안 메커니즘은 더욱 빠르고 비용 효율적으로 안전하게 확장하는 기능을 개선합니다. 
 예를 들어 가상 서버의 패치 적용 및 강화된 이미지를 만들어 저장한 다음, 해당 이미지가 필요할 때 이미 강화되고 패치 적용된 동일한 이미지를 사용하여 자동으로 새 인스턴스를 만드는 것이 권장됩니다. 
 또 다른 모범 사례는 정기적인 보안 이벤트와 비정상적인 보안 이벤트에 대한 응답을 자동화하는 것입니다. 
 다음은 안정성 핵심 요소에 대해 살펴보겠습니다. 
 안정성 핵심 요소에는 인프라 또는 서비스 장애로부터 복구할 수 있는 시스템의 기능이 포함됩니다. 
 또한 수요를 충족하고 중단 사태를 완화하기 위해 컴퓨팅 리소스를 동적으로 확보하는 기능에 중점을 둡니다. 
 그 결과, 장애로부터 복구하고 수요를 충족하는 기능을 지원하는 안정성을 얻을 수 있습니다. 
 클라우드의 안정성은 기반, 변경 관리 및 장애 관리라는 세 가지 영역으로 구성됩니다 안정성을 확보하려면 아키텍처 및 시스템이 수요 또는 요구 변화를 처리하며 장애를 탐지하고 자동으로 해결할 수 있는 잘 계획된 기반을 갖춰야 합니다. 
 모든 유형의 구조를 설계하기 전에는 건설 전에 미리 기반을 잘 살펴보는 것이 매우 중요합니다. 
 클라우드에서도 마찬가지로 모든 시스템을 설계하기 전에 안정성에 영향을 미치는 기본 요구 사항이 준비되어 있어야 합니다. 
 변경 관리의 경우 변경 사항이 시스템에 미치는 영향을 완전히 이해하고 인식하는 것이 중요합니다. 
 사전 계획을 세우고 시스템을 모니터링하면 빠르고 안정적으로 변경 사항을 수용하고 조정할 수 있습니다. 
 아키텍처가 안정적인지 확인하기 위해서는 장애를 예측하고 인식하며 대응하고 장애 발생을 방지하는 것이 중요합니다. 
 클라우드 환경에서는 자동화된 모니터링 기능을 활용하고 환경의 시스템을 교체하며 이후 장애가 발생한 시스템의 문제를 해결할 수 있습니다. 
 이 모든 작업이 안정적인 상태에서 낮은 비용으로 수행됩니다. 
 이제 안정성 설계 원칙을 알아보겠습니다. 
 안정성을 향상시킬 수 있는 설계 원칙에는 복구 절차 테스트가 포함됩니다. 
 클라우드에서 사용자는 시스템에 어떻게 장애가 발생하는지 테스트할 수 있으며 복구 절차를 확인할 수 있습니다. 
 사용자는 실제 장애가 발생하기 전에 다른 장애를 시뮬레이션하고 공개한 다음 대응할 수 있습니다. 
 다음으로, AWS는 장애로부터 자동으로 복구합니다. 
 AWS에서는 임계값이 초과될 때 자동 대응을 트리거할 수 있습니다. 
 이를 통해 장애가 발생하기 전에 미리 예측하여 해결할 수 있습니다. 
 다음 원칙은 수평적으로 확장하여 전체 시스템 가용성을 높이는 것입니다. 
 하나의 큰 리소스가 있을 때 이 리소스를 여러 개의 작은 리소스로 대체하여 단일 장애 지점이 전체 시스템에 미치는 영향을 줄이는 것이 유용합니다. 
 따라서 여기에서의 목표는 수평적으로 확장하고 여러 작은 리소스 간에 요구 사항을 분산하는 것입니다. 
 다음 설계 원칙은 용량에 대한 추측을 중단하는 것입니다. 
 클라우드 환경에서 고객은 수요와 시스템 사용률을 모니터링하고, 리소스 추가 또는 제거를 자동화할 수 있는 기능을 갖게 됩니다. 
 이를 통해 프로비저닝 과다 또는 부족 현상 없이 최적의 수준으로 수요를 충족할 수 있습니다. 
 마지막으로, AWS는 변경 사항과 자동화를 관리합니다. 
 아키텍처 및 인프라에 대한 변경은 자동화된 방식으로 수행되어야 합니다. 
 이렇게 하면 모든 단일 시스템 또는 리소스가 아닌 자동화 변경만 관리하면 됩니다. 
 이제 성능 효율성 핵심 요소를 알아보겠습니다. 
 클라우드에서 성능 효율성 핵심 요소는 선택, 검토, 모니터링 및 트레이드오프라는 네 가지 요소로 구성됩니다. 
 각 영역을 자세히 살펴보겠습니다. 
 선택의 경우 아키텍처를 최적화할 가장 적합한 솔루션을 선택하는 것이 중요합니다. 
 하지만 이러한 솔루션은 보유한 워크로드의 종류에 따라 달라집니다. 
 AWS에서는 리소스를 가상화할 수 있으며 다양한 유형 및 구성으로 솔루션을 사용자 지정할 수 있습니다. 
 검토를 통해 솔루션을 지속적으로 혁신하고 새롭게 사용 가능한 기술 및 접근 방식을 활용할 수 있습니다. 
 이렇게 새롭게 출시된 제품은 아키텍처의 성능 효율성을 향상시킬 수 있습니다. 
 모니터링의 경우, 아키텍처를 구현한 후 성능을 모니터링하여 고객이 문제의 영향을 받고 인식하기 전에 해당 문제를 해결할 수 있어야 합니다. 
 AWS를 사용하면 Amazon CloudWatch, Amazon Kinesis, Amazon Simple Queue Service(Amazon SQS) 및 AWS Lambda와 같은 도구를 사용하여 자동화를 사용하고 아키텍처를 모니터링할 수 있습니다. 
 마지막으로, 트레이드오프가 있습니다. 
 최적의 접근 방식을 보장하는 트레이드오프의 예는 일관성, 내구성 및 공간을 위해 시간 또는 지연 시간을 절충하여 높은 성능을 제공하는 것입니다. 
 이제 성능 효율성을 달성하는 데 도움이 되는 설계 원칙을 살펴보겠습니다. 
 먼저, 고급 기술의 대중화입니다. 
 기술에 대한 지식과 복잡성을 클라우드 업체가 제공하는 서비스로 극복하면서, 구현하기 어려운 기술도 간편하게 사용할 수 있습니다. 
 IT 팀은 새로운 기술을 호스팅하고 실행하는 방법을 배우는 대신, 이를 서비스로 사용하기만 하면 됩니다. 
 그 다음, AWS는 몇 분 만에 전 세계로 확장할 수 있습니다. 
 AWS를 사용하면 전 세계 여러 리전에 시스템을 쉽게 배포할 수 있으면서 최소의 비용으로 고객에게 더 낮은 지연 시간과 더 나은 경험을 제공할 수 있습니다. 
 성능 효율성을 달성하는 데 도움이 되는 다음 설계 원칙은 서버리스 아키텍처를 사용하는 것입니다. 
 클라우드 환경에서는 컴퓨팅 활동을 위해 기존 서버를 실행하고 유지 관리할 필요가 없습니다. 
 또한 운영 부담을 없애고 트랜잭션 비용을 낮출 수도 있습니다. 
 또 다른 설계 원칙은 더 자주 실험하는 것입니다. 
 가상화를 통해 테스트를 빠르게 수행하여 효율성을 높일 수 있습니다. 
 마지막으로, 기계적 동조라는 원칙이 있습니다. 
 이 원칙은 달성하려는 목표에 가장 부합하는 기술 접근 방식을 사용할 것을 제안합니다. 
 다음 핵심 요소는 전체 수명 주기에 걸쳐 지속적으로 시스템을 개선 및 개량하는 프로세스가 포함된 비용 최적화 요소입니다. 
 이 핵심 요소는 비용 효율적인 시스템을 구축 및 운영하고 투자수익률을 극대화할 수 있다는 아이디어를 포함합니다. 
 비용 최적화 핵심 요소는 비용 효율적인 리소스, 수요와 공급의 균형, 지출 인식 및 시간에 따른 최적화라는 네 가지 영역으로 구성됩니다. 
 완전하게 비용 최적화된 시스템은 기능 요구 사항을 충족하는 한편, 가장 낮은 가격대에서 최상의 성과를 달성하기 위해 모든 리소스를 사용합니다. 
 시스템이 적절한 서비스, 리소스 및 구성을 사용하고 있는지 확인하는 것이 비용 절감의 주요 요소 중 하나입니다. 
 사용자는 프로비저닝, 크기 조정, 구매 옵션 및 기타 특성과 같은 세부 정보에 집중하여 필요에 맞는 최적의 아키텍처를 보유하고 있는지 확인하고자 합니다. 
 비용 최적화의 또 다른 요소는 수요와 공급의 균형입니다. 
 AWS 클라우드를 사용하면 클라우드 아키텍처의 탄력성을 활용하여 변화하는 수요를 충족할 수 있습니다. 
 자동 조정을 하고 다른 서비스에서 알림을 받아 수요 변화로 인한 공급을 조정할 수 있습니다. 
 그 다음으로, 지출 인식이 있습니다. 
 비즈니스에서 발생하는 지출 및 비용 요인을 완전히 인식하고 인지하는 것이 매우 중요합니다. 
 따라서 현재 비용을 확인하고, 이해하고, 분석하고, 미래 비용을 예측하고, 그에 따른 계획을 세워야만 클라우드에서 아키텍처의 비용 최적화가 실현됩니다. 
 마지막으로, AWS에서는 시간에 따라 최적화할 수 있습니다. 
 모든 도구와 서로 다른 접근 방식을 사용하면 AWS 플랫폼에서 수집한 데이터를 바탕으로 아키텍처를 측정, 모니터링 및 개선할 수 있습니다. 
 비용 최적화 설계 원칙에 대해 살펴보겠습니다. 
 우리의 첫 번째 원칙은 소비 모델을 도입하는 것입니다. 
 소비 모델을 통해 사용하는 컴퓨팅 리소스에 대해서만 비용을 지불하고 비즈니스 요구 사항에 따라 증감할 수 있습니다. 
 다음 원칙은 전반적인 효율성을 측정하는 것입니다. 
 시스템의 비즈니스 생산량과 이를 제공하는 것과 관련된 비용을 측정한 다음 이 측정 결과를 통해 생산량 증가와 비용 절감으로 발생하는 이익을 이해하는 것이 중요합니다. 
 다음 설계 원칙은 데이터 센터 운영에 필요한 비용 지출을 중단하는 것을 의미합니다. 
 AWS를 사용하면 서버를 랙에 설치하고, 쌓아 올리고, 서버에 전원을 공급하는 등의 과중한 업무를 수행할 필요가 없습니다. 
 AWS가 대신 그러한 작업을 수행하므로, IT 인프라 대신 고객과 비즈니스 프로젝트에 완전히 집중할 수 있습니다. 
 다음 설계 원칙은 지출을 분석하고 부과하는 것입니다. 
 클라우드는 보다 손쉽게 시스템의 사용 및 비용을 정확하게 파악할 수 있게 해 줍니다. 
 고객은 투자수익률을 측정할 수 있으며, 이는 리소스를 최적화하고 비용을 절감할 수 있는 기회를 제공합니다. 
 마지막으로, 고객은 관리형 서비스를 사용하여 소유 비용을 줄이는 것이 좋습니다. 
 클라우드는 많은 관리형 서비스를 제공하여 이메일 전송 또는 데이터베이스 관리와 같은 작업을 위해 서버를 유지 관리하는 데 따른 운영 부담을 없애 줍니다. 
 모두 클라우드 규모로 운영되므로, 트랜잭션별 또는 서비스별 비용을 더 저렴하게 제공할 수 있습니다. 
 다음 핵심 요소는 운영 우수성입니다. 
 운영 우수성은 지속적으로 프로세스와 절차를 개선하여 비즈니스 가치를 제공하기 위해 시스템을 실행하고 모니터링하는 데 중점을 둡니다. 
 운영 우수성의 핵심 아이디어 중 일부는 변경 관리 및 자동화, 이벤트 응답, 일일 작업을 성공적으로 관리하기 위한 표준 정의를 포함합니다. 
 곧 공개될 백서에 운영 우수성 핵심 요소에 관한 추가 정보가 포함될 예정입니다. 
 이제 오늘 논의한 사항에 대해 살펴보겠습니다. 
 AWS Well Architected 프레임워크는 고객이 고유의 아키텍처를 평가 및 개선하고 설계에 대한 의사결정이 비즈니스에 미치는 영향을 더 확실히 파악할 수 있도록 돕기 위해 개발되었습니다. 
 지금까지 보안, 안정성, 성능 효율성, 비용 최적화 및 운영 우수성과 같은 AWS Well Architected 프레임워크를 구성하는 핵심 요소에 대해 알아보았습니다. 
 이 프레임워크에 대한 자세한 정보와 전략은 aws.amazon.com에서 확인할 수 있습니다. 
 저는 Amazon Web Services 교육 및 자격증 팀의 Anna Fox였습니다. 
 
 - 내결함성 및 고가용성
 안녕하세요. 
 저는 Amazon Web Services 교육 및 자격증 팀의 Anna Fox입니다. 
 이 동영상에서는 내결함성과 고가용성 아키텍처에 대해 살펴보겠습니다. 
 먼저, 내결함성과 고가용성의 의미에 대해 이야기해 보겠습니다. 
 내결함성이란 시스템의 일부 구성 요소에 장애가 발생해도 시스템이 계속 작동할 수 있는 기능을 의미합니다. 
 애플리케이션 구성 요소의 내장된 중복성이라고 할 수 있습니다. 
 그리고 전체 시스템에 대한 개념인 고가용성이 있습니다. 
 이 시스템의 목표는 시스템이 항상 작동하고 액세스 가능한 상태를 유지하며, 사용자의 개입 없이 중단 시간을 가능한 최소화하는 것입니다. 
 사이트를 단 1분 동안 사용할 수 없는 경우에도 비즈니스가 중대한 손상을 입을 수 있습니다. 
 하지만 Amazon Web Services는 이렇게 필요한 시간을 지원하는 도구를 제공합니다. 
 AWS 플랫폼은 사용자가 내결함성이 있고 가용성이 뛰어난 시스템 및 아키텍처를 이상적으로 구축할 수 있도록 해 줍니다. 
 AWS는 사용자가 최소한의 개입 및 선행 비용 투자로 이 시스템을 구축할 수 있다는 점에서 특별합니다. 
 또한 필요에 따라 사용자 지정할 수도 있습니다. 
  그러면 온프레미스 환경과 AWS의 가용성을 비교해 보겠습니다. 
 전통적으로 로컬 데이터 센터에서 고가용성을 보장하는 것은 비용이 많이 들며, 일반적으로 미션 크리티컬 애플리케이션에서만 보장됩니다. 
 하지만 AWS에서는 선택하는 서버 간에 가용성과 복구 가능성을 확장할 수 있는 옵션을 갖습니다. 
 여러 서버, 각 리전 내의 여러 가용 영역, 여러 리전에서 고가용성을 보장할 수 있으며, 원하는 경우 내결함성 서비스에 액세스할 수 있습니다. 
 AWS가 제공하는 본질적으로 가용성이 높은 서비스 및 적절한 아키텍처와 함께 사용 가능한 서비스를 확인할 수 있습니다. 
 그렇다면 일부 특정 서비스가 고가용성을 보장하는 데 도움을 줄 수 있는 방법을 자세히 알아보겠습니다. 
 다음 항목을 살펴봅니다. 
· 탄력적 로드 밸런서(ELB)
· 탄력적 IP 주소
· Amazon Route 53
· Auto Scaling
· Amazon CloudWatch

먼저 탄력적 로드 밸런서(ELB)가 있습니다. 
 ELB는 수신하는 트래픽, 즉 로드를 인스턴스에 분산하는 서비스입니다. 
 또한 ELB는 관리형 모니터링 서비스인 Amazon CloudWatch에 측정치를 전송할 수 있습니다. 
 이 내용은 이 모듈의 후반부에서 더 자세히 설명하겠습니다. 
 따라서 ELB는 트리거 역할을 할 수 있으며 높은 지연 시간 또는 서버가 과도하게 사용되는 경우에 사용자에게 알릴 수 있습니다. 
 ELB를 사용자 지정할 수도 있습니다. 
 예를 들어, 인스턴스에서 비정상적인 측정치 또는 특정 측정치를 인식하도록 구성할 수 있습니다. 
 퍼블릭 또는 내부 솔루션이 될 수 있습니다. 
 마지막으로, 여러 개의 서로 다른 프로토콜을 사용할 수 있습니다. 
그 다음, 탄력적 IP 주소가 있습니다. 
 탄력적 IP 주소는 애플리케이션에 대해 높은 내결함성을 제공하는 데 유용합니다. 
 탄력적 IP는 동적 클라우드 컴퓨팅에 적합하게 설계된 고정 IP 주소입니다. 
 이 도구를 통해 사용자가 같은 IP 주소를 가진 대체 리소스를 사용할 수 있게 함으로써 인스턴스 또는 소프트웨어의 장애를 숨길 수 있습니다. 
 탄력적 IP 주소를 사용하면 인스턴스에 장애가 발생해도 클라이언트가 여전히 애플리케이션에 액세스할 수 있으므로 고가용성이 보장됩니다. 
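예를 들어, 다음은 장애가 발생한 인스턴스의 탄력적 IP를 대기(standby) 인스턴스로 다시 연결하는 boto3(Python) 스케치입니다. 할당 ID와 인스턴스 ID는 설명을 위해 지어낸 가정 값입니다.

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# 가정: 이미 할당된 탄력적 IP와 대기 인스턴스가 있다고 가정합니다.
ALLOCATION_ID = 'eipalloc-0123456789abcdef0'   # 예시 값
STANDBY_INSTANCE_ID = 'i-0fedcba9876543210'    # 예시 값

# 같은 탄력적 IP 주소를 대기 인스턴스에 다시 연결하면,
# 클라이언트는 동일한 IP로 계속 애플리케이션에 접근할 수 있습니다.
ec2.associate_address(
    AllocationId=ALLOCATION_ID,
    InstanceId=STANDBY_INSTANCE_ID,
    AllowReassociation=True,  # 기존 연결이 있어도 재연결을 허용
)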
  또 다른 서비스는 Amazon Route 53입니다. 
 Amazon Route 53은 AWS에서 제공하는 신뢰할 수 있는 DNS 서비스입니다. 
 이 서비스는 도메인 이름을 IP 주소로 변환하는 데 사용됩니다. 
 Amazon Route 53은 최고 수준의 가용성을 염두에 두고 설계 및 유지 관리됩니다. 
단순 라우팅, 지연 시간 기반 라우팅, 상태 확인, DNS 장애 조치 및 지리 위치 라우팅을 지원하기 위해 개발되었습니다. 
 이러한 모든 특성은 고객용 애플리케이션의 가용성을 높입니다. 
Auto Scaling은 지정된 조건에 따라 인스턴스를 시작 또는 종료합니다. 
 이 서비스는 고객 수요의 변화에 따라 조정 및 수정할 수 있는 유연한 시스템을 구축하는 데 도움을 주기 위해 설계되었습니다. 
 Auto Scaling을 사용하면 새 리소스를 생성하는 데 대한 제한을 방지할 수 있습니다. 
 대신, 새로운 온디맨드 리소스를 만들거나 예약된 프로비저닝을 사용할 수 있습니다. 
 이를 통해 로드와 관계없이 언제든 애플리케이션과 시스템을 사용할 수 있습니다. 
 또한 정책에 따라 프로비저닝을 확장 또는 축소할 수 있다는 점을 염두에 두어야 합니다. 
 마지막으로, Amazon CloudWatch가 있습니다. 
 Amazon CloudWatch는 분산된 통계 수집 시스템입니다. 
 애플리케이션의 지표를 수집하고 추적합니다. 
 또 다른 특징은 자체 사용자 지정 지표를 생성하고 사용할 수 있다는 것입니다. 
 지연 시간이 높거나 설정한 임계값을 초과한 지표가 있으면 CloudWatch가 자동으로 조정하여 아키텍처의 고가용성을 보장할 수 있습니다. 
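예를 들어 다음은 평균 CPU 사용률이 임계값을 넘으면 Auto Scaling 조정 정책을 트리거하는 CloudWatch 경보를 구성하는 boto3 스케치입니다. 그룹 이름과 임계값은 가정 값입니다.

import boto3

autoscaling = boto3.client('autoscaling', region_name='us-east-1')
cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

ASG_NAME = 'my-web-asg'  # 예시 값: 이미 존재하는 Auto Scaling 그룹이라고 가정

# 수요 증가 시 인스턴스를 2개 추가하는 단순 조정 정책
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName='scale-out-on-high-cpu',
    AdjustmentType='ChangeInCapacity',
    ScalingAdjustment=2,
    Cooldown=300,
)

# 평균 CPU 사용률이 70%를 넘으면 위 정책을 트리거하는 CloudWatch 경보
cloudwatch.put_metric_alarm(
    AlarmName='high-cpu-my-web-asg',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'AutoScalingGroupName', 'Value': ASG_NAME}],
    Statistic='Average',
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=[policy['PolicyARN']],
)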
 이제 몇 가지 내결함성 도구를 살펴보겠습니다. 
 먼저 내결함성 애플리케이션의 핵심 요소로 사용될 수 있는 Amazon Simple Queue Service(Amazon SQS)가 있습니다. 
 이는 매우 안정적인 분산 메시징 시스템입니다. 
 Amazon SQS는 대기열을 항상 사용할 수 있도록 도와줍니다. 
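다음은 SQS로 메시지를 보내고 받는 최소한의 boto3 스케치입니다. 대기열 이름은 가정 값이며, 소비자가 처리에 실패하면 메시지는 삭제되지 않고 대기열에 남아 다른 소비자가 다시 처리할 수 있습니다.

import boto3

sqs = boto3.client('sqs', region_name='us-east-1')

# 'orders'라는 대기열 이름은 설명을 위한 예시입니다.
queue_url = sqs.create_queue(QueueName='orders')['QueueUrl']

# 생산자: 메시지를 대기열에 넣습니다.
sqs.send_message(QueueUrl=queue_url, MessageBody='order-1234')

# 소비자: 메시지를 받아 처리한 뒤 명시적으로 삭제합니다.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for msg in resp.get('Messages', []):
    print('processing', msg['Body'])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])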
 그리고 높은 내구성과 내결함성 데이터 스토리지를 제공하는 Amazon Simple Storage Service(Amazon S3)가 있습니다. 
 간편하게 사용할 수 있고 실제 사용하는 스토리지에 대해서만 비용을 지불하는 웹 서비스입니다. 
 Amazon S3는 한 리전의 여러 시설에 걸쳐 서로 다른 여러 디바이스에 모든 데이터를 중복 저장하므로 장애가 발생하는 경우에도 모든 정보에 계속 액세스할 수 있습니다. 
 Amazon SimpleDB는 내결함성과 내구성을 갖춘 체계적인 데이터 스토리지 솔루션입니다. 
 SimpleDB를 사용하면 확장 가능한 서비스를 최대한 활용할 수 있으며 고가용성과 내결함성을 위한 자연스러운 설계를 통해 단일 장애 지점을 방지할 수 있습니다. 
 그런 다음 관계형 데이터베이스와 관련하여 사용할 수 있는 또 다른 웹 서비스 도구인 Amazon Relational Database Service(Amazon RDS)가 있습니다. 
 중요한 데이터베이스의 안정성을 향상시키는 몇 가지 기능을 제공함으로써 고가용성 및 내결함성을 제공합니다. 
이러한 기능 중 일부에는 자동 백업, 스냅샷 및 다중 가용 영역 배포가 포함됩니다. 
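다음은 다중 가용 영역 배포와 자동 백업을 켠 RDS 인스턴스를 생성하는 boto3 스케치입니다. 식별자와 자격 증명은 모두 예시 값이며, 실제로는 암호를 코드에 하드코딩해서는 안 됩니다.

import boto3

rds = boto3.client('rds', region_name='us-east-1')

rds.create_db_instance(
    DBInstanceIdentifier='my-app-db',        # 예시 값
    Engine='mysql',
    DBInstanceClass='db.t3.micro',
    AllocatedStorage=20,
    MasterUsername='admin',                  # 예시 값
    MasterUserPassword='ChangeMe-Example-Pw1',  # 예시 값, 하드코딩 금지
    MultiAZ=True,                # 다중 AZ 배포: 보조 AZ에 동기식 대기 복제본 유지
    BackupRetentionPeriod=7,     # 자동 백업 7일 보존
)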
 이러한 모든 서비스는 고가용성 및 내결함성 시스템을 보장할 수 있는 높은 안정성과 내구성을 갖춘 내결함성 도구입니다. 
 내결함성 및 고가용성을 갖춘 아키텍처를 구축하는 것은 그렇게 어렵지 않습니다. 
 AWS 플랫폼을 사용하면 제공되는 서비스 및 도구를 활용하여 애플리케이션 및 시스템의 고가용성과 내결함성을 높일 수 있습니다. 
 고가용성 및 내결함성을 갖춘 서비스 및 아키텍처에 대한 자세한 내용은 <http://aws.amazon.com/ko>에서 확인할 수 있습니다. 
 저는 Amazon Web Services 교육 및 자격증 팀의 Anna Fox였습니다.
 
 - 웹 호스팅
 안녕하세요. 
 저는 Amazon Web Services 교육 및 자격증 팀의 Anna Fox입니다. 
 오늘은 웹 호스팅에 대해 간략히 알아보겠습니다. 
 기존 방식의 확장 가능한 웹 호스팅은 비용이 높고 시간이 많이 소요되며 어려운 프로세스일 수 있습니다. 
 하지만 반드시 그런 방식을 사용할 필요는 없습니다. 
 AWS 클라우드의 웹 호스팅은 빠르고 간단하며 쉽고 비용이 적게 듭니다. 
 컴퓨팅, 스토리지, 데이터베이스 및 애플리케이션 서비스 같은 종합적인 AWS 도구 및 서비스 세트를 사용하여 쉽게 배포하고 유지 관리할 수 있습니다. 
 소셜 미디어 앱을 개발하는 경우 호스팅할 수 있는 웹 애플리케이션에는 회사 웹 사이트, 콘텐츠 관리 시스템 또는 내부 SharePoint 사이트가 포함됩니다. 
웹 호스팅 시 발생할 수 있는 몇 가지 공통된 문제와 난관이 있습니다. 
 그중 일부는 인프라, 아키텍처 문제 및 최종 비용과 관련이 있을 수 있습니다. 
 하지만 AWS는 이러한 공통된 문제에 대한 솔루션을 제공할 수 있습니다. 
트래픽 피크를 비용 효율적인 방식으로 처리하는 방법은 흔히 겪는 딜레마입니다. 
피크 시간대의 대규모 고객 수요는 여전히 중대한 문제가 될 수 있습니다. 
 기존의 방식으로 이러한 피크 용량을 처리하기 위해서는 여러 서버를 프로비저닝해야 할 것입니다. 
 이렇게 되면 일부 피크 시간 동안 리소스에 많은 시간과 돈을 소비하게 될 수 있습니다. 
 하지만 AWS는 추가 서버의 온디맨드 프로비저닝을 활용할 수 있으므로 고객은 용량과 비용을 실제 트래픽 패턴에 맞게 조정할 수 있습니다. 
 예를 들어, 웹 애플리케이션이 오전에는 느리지만 오후 5시 전후에 피크에 도달하면 트래픽 추이를 기반으로 해당 시간에 필요한 리소스를 프로비저닝할 수 있습니다. 
 이로 인해 용량의 낭비가 제한되며 비용이 50% 이상 줄어들 수 있습니다. 
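이러한 트래픽 패턴 기반 프로비저닝은 예약 조정(scheduled scaling)으로 표현할 수 있습니다. 아래는 그룹 이름, 시간대(UTC 기준), 용량을 모두 가정한 boto3 스케치입니다.

import boto3

autoscaling = boto3.client('autoscaling', region_name='us-east-1')

# 매일 저녁 피크 전에 용량을 늘렸다가, 심야에 다시 줄이는 예약 조정 작업
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName='my-web-asg',            # 예시 값
    ScheduledActionName='scale-up-for-evening-peak',
    Recurrence='0 17 * * *',   # cron 형식, UTC
    MinSize=4, MaxSize=12, DesiredCapacity=8,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName='my-web-asg',
    ScheduledActionName='scale-down-overnight',
    Recurrence='0 1 * * *',
    MinSize=2, MaxSize=12, DesiredCapacity=2,
)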
이제 웹 호스팅 아키텍처는 비용 효율적일 뿐 아니라 확장 가능한 솔루션이 될 수 있습니다. 
 트래픽이 급증할 때에는 실행이 필요한 인스턴스가 있는지 확인하는 것이 매우 중요합니다. 
 이렇게 트래픽이 예기치 않게 급증하는 경우 적절한 시간에 대응할 수 없는 기존의 아키텍처를 피할 수 있습니다. 
 AWS를 사용하면 새로운 호스트를 시작하고 몇 분 안에 활용하도록 준비하며 불필요한 경우 축소할 수 있습니다. 
 리소스 테스트와 관련된 또 다른 공통된 문제도 방지할 수 있습니다. 
 사전 프로덕션, 베타 또는 테스트 환경을 개발할 때에는 많은 비용과 시간이 소요될 수 있습니다. 
 또한 리소스를 사용하지 않을 경우 많은 대금을 낭비하게 될 수 있습니다. 
 대신 AWS 클라우드를 사용하면 필요할 때 테스팅 집합을 프로비저닝할 수 있습니다. 
 서비스 중단이 거의 없거나 전혀 없는 준비 환경을 빠르게 개발할 수 있습니다. 
 AWS 클라우드의 또 다른 장점은 로드 테스트 중에 사용자 트래픽을 시뮬레이션할 수 있다는 것입니다. 
그럼 한 가지 상황을 가정해 보겠습니다. 
기존 웹 호스팅 아키텍처를 AWS 클라우드로 이전하기로 결정했다고 해 봅시다. 
 전송 시 유용할 수 있는 일부 AWS 제품을 살펴보겠습니다. 
 웹 호스팅에 대해 고려할 수 있는 서비스는 다음과 같습니다. 
· Amazon Virtual Private Cloud(Amazon VPC)
· Amazon Route 53
· Amazon CloudFront
· Elastic Load Balancing(ELB)
· 방화벽/AWS Shield
· Auto Scaling
· 앱 서버/Elastic Compute Cloud(Amazon EC2) 인스턴스 
· Amazon ElastiCache
· Amazon Relational Database Services(Amazon RDS)
· Amazon Dynamo DB

각 서비스의 특성에 대한 자세한 내용은 제품 아래의 AWS 홈 페이지에서 확인할 수 있습니다. 
현재 AWS 클라우드를 사용할 때 염두에 둘 주요 고려 사항이 많이 있습니다. 
 이제 클라우드에서 호스팅할 때 이러한 주요 아키텍처 변화에 대해 살펴보겠습니다. 
· 먼저, 물리적 네트워크 어플라이언스가 제거됩니다. 
AWS에서는 애플리케이션용 방화벽, 라우터 및 로드 밸런서를 더 이상 물리적 디바이스로 가질 수 없으며, 소프트웨어 솔루션으로 교체해야 합니다. 
· 그 다음, 어디에나 방화벽이 있어야 합니다. 
 AWS는 더 안전한 모델을 적용하므로 모든 호스트가 차단되어 있는지 확인합니다. 
 특정 트래픽을 허용하거나 거부하는 보안 그룹을 만들 수 있지만, 해당 정책을 적용해야 합니다. 
 · 그런 다음 여러 데이터 센터의 가용성을 고려해 보는 것이 좋습니다. 
 가용 영역 및 AWS 리전을 통해 이러한 위치에 애플리케이션을 쉽게 배포하여 높은 가용성과 안정성을 보장할 수 있습니다. 
 · 고려해야 할 다음 아이디어는 가장 중요한 것일 수 있는데, 바로 호스트와 관련된 것입니다. 
 AWS를 사용하면 호스트는 임시적이고 동적인 것으로 간주됩니다. 
 오늘 모듈에서는 웹 호스팅 및 AWS 클라우드가 비용 효율적이고 확장 가능한 온디맨드 솔루션을 지원할 수 있는 방법에 대해 간략히 알아보았습니다. 
 또한 웹 호스팅 아키텍처를 지원할 수 있는 AWS 서비스에 대해 알아보았습니다. 
 마지막으로, AWS 웹 호스팅과 관련한 주요 고려 사항에 대해서도 알아보았습니다. 
 웹 호스팅에 대한 자세한 내용은 <http://aws.amazon.com/ko>에서 확인할 수 있습니다. 
 또한 다음의 URL에서 AWS 클라우드의 웹 애플리케이션 호스팅과 관련된 백서 같은 추가 리소스도 검토할 수 있습니다: <https://aws.amazon.com/whitepapers/web-application-hosting-best-practices/>. 
 저는 Amazon Web Services 교육 및 자격증 팀의 Anna Fox였습니다. 

 

 


 
 - 요금 기본 정보
 안녕하세요. 
 저는 Amazon Web Services 교육 및 자격증 팀의 Jody Soeiro de Faria입니다. 
 오늘은 AWS 요금 기본 정보에 대해 알아보겠습니다. 
 AWS는 다양한 클라우드 컴퓨팅 서비스를 제공합니다. 
 각 서비스에서 고객은 정확히 실제로 사용한 리소스 양에 대해 비용을 지불합니다. 
 이러한 유틸리티 스타일 가격 결정에 포함되는 항목은 다음과 같습니다. 
· 종량 과금제
· 예약하는 경우 지불 비용 감소
· 더 많이 사용할수록 단위당 더 적은 비용 지불
· AWS 규모가 커짐에 따라 더 적은 비용 지불

이러한 요금의 핵심 개념에 대해 좀 더 자세히 살펴보겠습니다. 
 데이터 센터 구축 사업을 하는 경우를 제외하고는 아마 지금까지 데이터 센터 구축에 너무 많은 시간과 비용을 소비했을 것입니다. 
 AWS에서는 서버나 인프라 구매 또는 시설 임대를 비롯하여 고가의 인프라 구축에 소중한 리소스를 쏟아부을 필요가 없습니다. 
 AWS에서는 대규모 초기 비용을 저렴한 가변 비용으로 대체할 수 있고 필요한 기간 동안 사용한 만큼만 비용을 지불하면 됩니다. 
 모든 AWS 서비스는 온디맨드로 제공되며, 장기 계약을 맺지 않아도 되고, 복잡한 라이선스에 의존할 필요가 없습니다. 
사용한 만큼 지불하는 종량 과금제를 통해 예산을 과도하게 할당하지 않고도 변화하는 비즈니스 요구에 손쉽게 대응하고 변화에 대한 응답성을 개선할 수 있습니다. 
 종량 과금제 모델에서는 예측치가 아닌 정확한 수요에 따라 비즈니스에 대응할 수 있으므로 위험이나 초과 프로비저닝 또는 누락되는 용량을 줄일 수 있습니다. 
필요한 만큼만 서비스 요금을 지불함으로써 혁신 및 발명에 집중할 수 있으므로 조달 복잡성을 줄이고 비즈니스에 완전한 탄력성을 부여할 수 있습니다. 
Amazon EC2 및 Amazon RDS와 같은 특정 서비스의 경우, 예약 용량에 투자할 수 있습니다. 
 예약 인스턴스의 경우 동일한 온디맨드 용량과 비교하여 최대 75%까지 절감할 수 있습니다. 
 예약 인스턴스는 3가지 옵션인, 전체 선결제 금액(AURI), 부분 선결제 금액(PURI), 선결제 금액 없음(NURI)으로 제공됩니다. 
예약 인스턴스를 구입할 때 선결제 금액이 클수록 할인도 커집니다. 
 절감액을 최대화하려면 전체를 선결제 금액으로 지불하고 가장 큰 폭의 할인을 받으면 됩니다. 
 부분 선결제 금액 RI는 할인폭은 낮지만 미리 지불하는 금액이 적습니다. 
 마지막으로, 선결제 금액을 내지 않고 작은 폭의 할인을 받지만, 자금을 확보하여 다른 프로젝트에 사용하도록 선택할 수 있습니다. 
 예약 용량을 사용함으로써 조직은 위험을 최소화하고, 예산을 좀 더 예측 가능하게 관리하며, 장기 약정을 요구하는 정책을 준수할 수 있습니다. 
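참고로, 세 가지 결제 옵션의 차이는 간단한 산수로 감을 잡을 수 있습니다. 아래는 실제 요금이 아닌 가상의 단가를 가정한 Python 계산 예시입니다.

# 가상의 요금(실제 요금 아님)으로 세 가지 예약 옵션을 비교하는 계산 예시입니다.
HOURS_PER_YEAR = 8760
on_demand_rate = 0.10  # USD/시간 (예시 값)

options = {
    'AURI(전체 선결제)': {'upfront': 500.0, 'hourly': 0.00},
    'PURI(부분 선결제)': {'upfront': 280.0, 'hourly': 0.03},
    'NURI(선결제 없음)': {'upfront': 0.0,   'hourly': 0.07},
}

on_demand_total = on_demand_rate * HOURS_PER_YEAR
for name, o in options.items():
    total = o['upfront'] + o['hourly'] * HOURS_PER_YEAR
    saving = (1 - total / on_demand_total) * 100
    print(f'{name}: 연간 총비용 ${total:.0f}, 온디맨드 대비 {saving:.0f}% 절감')

이 가정 값에서도 선결제 금액이 클수록 할인 폭이 커지는 관계를 그대로 확인할 수 있습니다.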
AWS에서는 규모에 따른 할인을 받을 수 있으며 사용량이 늘수록 의미 있는 비용 절감이 실현됩니다. 
 Amazon S3 및 EC2에서 데이터 송신과 같은 서비스의 경우 요금이 계층화되어 있습니다. 
 즉, 사용량이 많을수록 기가바이트당 비용이 저렴해집니다. 
 또한, 데이터 수신 요금은 언제나 무료입니다. 
 따라서 AWS 사용 요구가 증가함에 따라 도입을 확장하면서 비용은 통제할 수 있는 규모의 경제 이점을 활용할 수 있습니다. 
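계층화 요금이 GB당 평균 단가에 미치는 영향도 마찬가지로 간단히 계산해 볼 수 있습니다. 아래 계층 구간과 단가는 모두 설명을 위한 가정 값입니다.

# 가상의 계층 요금(실제 요금 아님)으로 월간 데이터 송신 비용을 계산하는 예시입니다.
TIERS = [
    (10 * 1024, 0.09),    # 처음 10TB까지 GB당 $0.09 (예시)
    (40 * 1024, 0.085),   # 다음 40TB까지 GB당 $0.085 (예시)
    (float('inf'), 0.07), # 그 이상 GB당 $0.07 (예시)
]

def egress_cost(gb: float) -> float:
    cost, remaining = 0.0, gb
    for tier_size, rate in TIERS:
        used = min(remaining, tier_size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

# 사용량이 많을수록 GB당 평균 단가가 낮아집니다.
for usage in (1024, 20 * 1024, 100 * 1024):
    print(f'{usage/1024:.0f}TB: ${egress_cost(usage):,.0f}, '
          f'GB당 평균 ${egress_cost(usage)/usage:.4f}')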
조직이 성장함에 따라 AWS에서는 고객이 비즈니스 요구를 처리하는 데 도움이 되는 서비스를 확보할 수 있도록 옵션을 제공합니다. 
 예를 들어 AWS의 스토리지 서비스 포트폴리오는 데이터 액세스 빈도와 데이터 검색에 필요한 성능에 따라 비용을 줄일 수 있는 옵션을 제공합니다. 
 비용 절감을 최적화하기 위해서는 성능, 보안 및 안정성을 유지하면서 비용을 절감할 수 있는 적절한 스토리지 솔루션의 조합을 선택해야 합니다. 
AWS는 지속적으로 데이터 센터 하드웨어 비용을 줄이고, 운영 효율성을 향상하며, 전력 소비를 줄이고, 비즈니스 운영 비용을 낮추는 데 중점을 두고 있습니다. 
 이러한 최적화 및 내실 있게 성장하는 AWS의 규모의 경제로 인해 얻은 절약 효과를 요금 인하의 형태로 고객에게 돌려드립니다. 
 2006년 이래 AWS는 44번 요금을 인하했습니다. 
AWS는 모든 고객에게 서로 다른 요구 사항이 있다는 점을 알고 있습니다. 
 프로젝트에 대해 적합한 AWS의 요금 모델이 없는 경우 고유한 요구 사항이 있는 대용량 프로젝트에 대해 요금을 사용자 지정할 수 있습니다. 
 신규 AWS 고객이 클라우드 사용을 시작하는 데 도움이 될 수 있도록 AWS에서 프리 티어를 제공하고 있습니다. 
 신규 AWS 고객은 프리 티어를 사용하여 Amazon EC2 마이크로 인스턴스를 1년 동안 무료로 실행할 수 있을 뿐 아니라 Amazon S3, Amazon Elastic Block Store(Amazon EBS), Elastic Load Balancing(ELB), AWS 데이터 전송 및 기타 AWS 서비스에서 프리 티어를 활용할 수 있습니다. 
AWS에서는 추가 비용 없이 다양한 서비스를 제공합니다. 
· Amazon VPC: Amazon Virtual Private Cloud는 고객이 정의하는 가상 네트워크에서 AWS 리소스를 시작할 수 있도록 AWS 클라우드에서 로컬로 격리된 공간을 프로비저닝합니다. 
· AWS Elastic Beanstalk는 AWS 클라우드에서 애플리케이션을 간편하고 신속하게 배포하고 관리할 수 있는 방법입니다. 
· AWS CloudFormation은 개발자 및 시스템 관리자가 관련 AWS 리소스 모음을 쉽게 생성하고 순서에 따라 예측 가능한 방식으로 프로비저닝하도록 지원합니다. 
· AWS Identity and Access Management(IAM)는 AWS 서비스와 리소스에 대한 사용자 액세스를 제어합니다. 
· Auto Scaling은 정의한 조건에 따라 Amazon Elastic Compute Cloud(Amazon EC2) 인스턴스를 자동으로 추가하거나 제거합니다. 
 Auto Scaling을 사용하면 요청이 급증하는 동안에는 매끄럽게 Amazon EC2 인스턴스의 개수가 늘어 성능을 유지하고, 요청이 감소할 때는 자동으로 인스턴스의 개수가 줄어 비용을 최소화할 수 있습니다. 
· AWS OpsWorks는 모든 형태와 규모의 애플리케이션을 간편하게 배포하고 운영할 수 있도록 해 주는 애플리케이션 관리 서비스입니다. 
또한 통합 결제를 사용하여 모든 계정을 통합하고 계층화 이점을 얻을 수 있습니다. 
AWS가 제공하는 서비스의 수와 유형은 크게 증가했지만 요금에 대한 철학은 변하지 않았습니다. 
 매월 말에, 사용한 만큼만 비용을 지불하고 언제든지 제품 사용을 시작하거나 중단할 수 있습니다. 
 장기 계약은 필요 없습니다. 
 AWS 웹 사이트에 각 서비스에 대한 요금 정보가 제공됩니다(<http://aws.amazon.com/pricing/>). 
 각 서비스별로 독립적인 AWS의 요금 전략은 각 프로젝트에 필요한 서비스를 선택하고 사용한 만큼만 비용을 지불할 수 있는 엄청난 유연성을 제공합니다. 
 저는 Amazon Web Services(AWS) 교육 및 자격증 팀의 Jody Soeiro de Faria였습니다.
 
 - 요금내역
 안녕하세요. 
 저는 Amazon Web Services 교육 및 자격증 팀의 Jody Soeiro de Faria입니다. 
 오늘은 AWS 요금의 세부 정보에 대해 알아보겠습니다. 
 AWS 사용 시 지불해야 하는 세 가지 기본 특성은 컴퓨팅, 스토리지 및 데이터 송신입니다. 
 이러한 특성은 사용하는 AWS 제품에 따라 다릅니다. 
 하지만 기본적으로 비용에 가장 큰 영향을 미치는 핵심 특성이기도 합니다. 
데이터 송신에 대해 요금이 청구되지만 인바운드 데이터 전송 또는 동일 리전 내에서 서비스 간의 데이터 전송에 대해서는 청구되지 않습니다. 
 아웃바운드 데이터 전송은 Amazon EC2, Amazon S3, Amazon RDS, Amazon SimpleDB, Amazon SQS, Amazon SNS 및 Amazon VPC 전체에서 집계된 후 아웃바운드 데이터 전송 요금으로 청구됩니다. 
 이 요금은 월별 명세서에 AWS 데이터 송신으로 표시됩니다. 
Amazon Elastic Compute Cloud(Amazon EC2), Amazon Simple Storage Service(Amazon S3), Amazon Elastic Block Store(Amazon EBS), Amazon Relational Database Service(Amazon RDS) 및 Amazon CloudFront 같은 일반적으로 사용되는 AWS 제품의 요금 특징에 대해 자세히 살펴보겠습니다. 
Amazon EC2는 클라우드에서 크기 조정이 가능한 컴퓨팅 파워를 제공하는 웹 서비스입니다. 
 Amazon EC2의 간단한 웹 서비스 인터페이스를 통해 간편하게 필요한 컴퓨팅 파워를 확보하고 구성할 수 있습니다. 
 Amazon의 입증된 컴퓨팅 환경에서 컴퓨팅 리소스에 대한 완전한 제어를 제공합니다. 
 Amazon EC2는 실제 사용한 용량에 대해서만 고객에게 요금을 부과하므로 컴퓨팅 비용이 절약됩니다. 
 Amazon EC2 비용을 추정할 때는 다음을 고려해야 합니다. 
· 시계로 표시되는 서버 시간. 
 리소스는 실행 중일 때 요금이 부과됩니다. 
 예를 들어, Amazon EC2 인스턴스가 시작되는 시간부터 인스턴스가 종료될 때까지 또는 탄력적 IP가 할당된 시간부터 해제될 때까지 시간입니다. 
· 머신 구성. 
 선택하는 Amazon EC2 인스턴스의 물리적 용량을 고려합니다. 
 인스턴스 요금은 AWS 리전, OS, 코어 수 및 메모리에 따라 달라집니다. 
· 머신 구입 유형. 
 온디맨드 인스턴스를 사용하면 필수적인 최소 약정 없이 시간 단위로 컴퓨팅 파워를 구입할 수 있습니다. 
 예약 인스턴스는 예약하고자 하는 모든 인스턴스에 대해 일시불로 결제하거나 선결제 금액이 없는 옵션을 제공하므로 해당 인스턴스의 시간별 사용 요금이 상당히 할인되는 효과를 얻을 수 있습니다. 
 스팟 인스턴스의 경우 미사용 Amazon EC2 용량에 대해 입찰할 수 있습니다. 
 · 인스턴스 수. 
 피크 로드를 처리하기 위해 Amazon EC2 및 Amazon EBS 리소스의 여러 인스턴스를 프로비저닝할 수 있습니다. 
· 로드 밸런싱. 
 탄력적 로드 밸런서는 Amazon EC2 인스턴스 간에 트래픽을 분산할 때 사용할 수 있습니다. 
 탄력적 로드 밸런서가 실행되는 시간과 처리하는 데이터의 양은 월별 비용에 영향을 미칩니다. 
· 세부 모니터링. 
 Amazon CloudWatch를 사용하여 Amazon EC2를 모니터링할 수 있습니다. 
 기본적으로, 기본 모니터링은 추가 비용 없이 활성화되어 있으며, 사용 가능합니다. 
하지만 고정 월별 요금을 내면 1분에 1회 기록되는 사전 선택된 지표 7개를 포함하는 세부 모니터링을 선택할 수 있습니다. 
한 달을 채우지 못한 경우에는 인스턴스당 시간별 요금이 비율에 따라 청구됩니다. 
· Auto Scaling. 
 Auto Scaling은 사용자가 정의한 조건에 따라 배포에서 Amazon EC2 인스턴스 수를 자동으로 조정합니다. 
 이 서비스는 Amazon CloudWatch 요금 외의 추가 비용 없이 사용 가능합니다. 
· 탄력적 IP 주소. 
 실행 중인 인스턴스에 연결된 탄력적 IP 주소 한 개는 무료로 사용할 수 있습니다. 
· 운영 체제 및 소프트웨어 패키지. 
 운영 체제 요금은 인스턴스 요금에 포함됩니다. 
 AWS를 사용하면 Microsoft, IBM 및 기타 여러 공급업체와 파트너 관계를 맺고 Amazon EC2 인스턴스에서 실행되는 특정 상용 소프트웨어 패키지(예: Windows의 Microsoft SQL Server)를 쉽게 실행할 수 있습니다. 
 비표준 운영 체제, Oracle 애플리케이션, Microsoft SharePoint 및 Microsoft Exchange와 같은 Windows Server 애플리케이션 등 AWS에서 제공하지 않는 상용 소프트웨어 패키지의 경우 공급업체로부터 라이선스를 획득해야 합니다. 
 또한 Microsoft License Mobility through Software Assurance Program과 같은 특정 공급업체 프로그램을 통해 기존 라이선스를 클라우드로 가져올 수도 있습니다. 
Amazon S3는 인터넷용 스토리지입니다. 
 언제든지 웹상 어디서나 용량과 관계없이 데이터를 저장하고 검색하는 데 사용할 수 있는 단순한 웹 서비스 인터페이스를 제공합니다. 
 Amazon S3 비용을 추정할 때는 다음을 고려해야 합니다. 
· 스토리지 클래스. 
 표준 스토리지는 99.999999999%의 내구성과 99.99%의 가용성을 제공하도록 설계되었습니다. 
 Standard - Infrequent Access(SIA)는 액세스 빈도가 낮은 데이터를 Amazon S3 표준 스토리지보다 약간 낮은 수준의 중복성으로 저장함으로써 비용을 절감할 수 있는 Amazon S3의 스토리지 옵션입니다. 
Standard - Infrequent Access는 지정된 한 해 동안 Amazon S3 표준과 동일한 99.999999999%의 내구성과 99.9%의 가용성을 제공하도록 설계되었습니다. 
 각 클래스마다 비율이 다르다는 점에 유의해야 합니다. 
· 스토리지. 
 Amazon S3 버킷에 저장되는 객체의 수와 크기 및 스토리지 유형입니다. 
· 요청. 
 요청의 수 및 유형입니다. 
 GET 요청은 PUT 및 COPY 요청과 같은 다른 요청과 다른 비율로 요금이 발생합니다. 
· 데이터 전송. 
 Amazon S3 리전에서 송신된 데이터의 양입니다. 
Amazon EBS는 Amazon EC2 인스턴스에 사용할 수 있는 블록 수준의 스토리지 볼륨을 제공합니다. 
 Amazon EBS 볼륨은 인스턴스 수명과 관계없는 지속되는 오프 인스턴스 스토리지입니다. 
 클라우드의 가상 디스크와 유사합니다. 
 Amazon EBS는 범용(SSD), 프로비저닝된 IOPS(SSD), 마그네틱이라는 세 가지 볼륨 유형을 제공합니다. 
 세 개의 볼륨 유형은 성능 특성과 비용이 각각 다르므로 애플리케이션 요구 사항에 맞는 올바른 스토리지 성능과 요금을 선택할 수 있습니다. 
 Amazon EBS 비용을 추정할 때는 다음을 고려해야 합니다. 
· 볼륨. 
  모든 Amazon EBS 볼륨의 볼륨 스토리지에 대해서는 스토리지 해제 시점까지 매월 프로비저닝하는 용량(GB)을 기준으로 요금이 청구됩니다. 
 · 초당 입출력 작업(IOPS). 
 I/O는 범용 볼륨의 요금에 포함되는 한편 EBS 마그네틱 볼륨의 경우 I/O는 볼륨에 대해 요청을 수행한 수에 따라 청구됩니다. 
 프로비저닝된 IOPS 볼륨의 경우, IOPS에서 프로비저닝한 양에 해당 달에 프로비저닝한 날의 비율을 곱한 만큼의 요금도 청구됩니다. 
 · 스냅샷. 
 Amazon EBS를 통해 데이터의 스냅샷을 Amazon S3로 백업하여 데이터를 안정적으로 복구할 수 있습니다. 
 EBS 스냅샷을 선택하는 경우 추가 비용은 저장된 데이터의 월별 기가바이트당 비용입니다. 
 · 데이터 전송. 
 애플리케이션의 송신된 데이터의 양을 고려합니다. 
 인바운드 데이터 전송은 무료이며 아웃바운드 데이터 전송은 계층화됩니다. 
 Amazon RDS는 클라우드에서 관계형 데이터베이스를 손쉽게 설치, 운영 및 확장할 수 있게 해 주는 웹 서비스입니다. 
 시간 소모적인 데이터베이스 관리 작업을 관리하는 한편, 효율적인 비용으로 크기 조정이 가능한 용량을 제공하므로 애플리케이션과 비즈니스에 좀 더 집중할 수 있습니다. 
 Amazon RDS 비용을 추정할 때는 다음을 고려해야 합니다. 
 · 시계로 표시되는 서버 시간. 
 리소스는 실행 중일 때 요금이 부과됩니다. 
 예를 들어, DB 인스턴스가 시작되는 시간부터 DB 인스턴스가 종료될 때까지 시간입니다. 
 · 데이터베이스 특성. 
 선택한 데이터베이스의 실제 용량은 청구되는 비용에 영향을 줍니다. 
 데이터베이스 특성은 데이터베이스 엔진, 크기 및 메모리 클래스에 따라 달라집니다. 
 · 데이터베이스 구입 유형. 
 온디맨드 DB 인스턴스를 구입할 때 필수적인 최소 약정 없이 데이터베이스 인스턴스에서 실행되는 각 시간당 컴퓨팅 파워에 대해서만 요금을 지불합니다. 
 예약 DB 인스턴스는 1년 또는 3년 약정으로 예약하려는 각 DB 인스턴스에 대해 저렴한 사전 확약금을 일시불로 결제할 수 있습니다. 
 · 데이터베이스 인스턴스 수. 
 Amazon RDS를 사용하면 피크 로드를 처리하기 위해 여러 데이터베이스 인스턴스를 프로비저닝할 수 있습니다. 
 · 프로비저닝된 스토리지. 
 활성 DB 인스턴스에 대해 프로비저닝된 데이터베이스 스토리지의 최대 100%까지는 백업 스토리지에 대한 추가 비용이 없습니다. 
 DB 인스턴스가 종료된 후 백업 스토리지에는 월별 기가바이트당 요금이 청구됩니다. 
 · 추가 스토리지. 
 프로비저닝된 스토리지 용량 외에 백업 스토리지의 양은 월별로 기가바이트당 청구됩니다. 
 · 요청. 
 데이터베이스에 대한 입력 및 출력의 요청 수입니다. 
 · 배포 유형. 
 데이터베이스 인스턴스를 단일 가용 영역(독립 실행형 데이터 센터와 유사) 또는 다중 가용 영역(향상된 데이터 내구성 및 가용성에 대해 보조 데이터 센터와 유사)에 배포할 수 있습니다. 
 스토리지 및 I/O 요금은 배포 대상인 가용 영역 수에 따라 달라집니다. 
 · 데이터 전송. 
 인바운드 데이터 전송은 무료이며 아웃바운드 데이터 전송 비용은 계층화됩니다. 
 애플리케이션의 요구 사항에 따라 예약 Amazon RDS 데이터베이스 인스턴스를 구입하여 Amazon RDS 데이터베이스 인스턴스 비용을 최적화할 수 있습니다. 
 예약 인스턴스를 구입하기 위해 예약하려는 각 인스턴스에 대해 저렴한 금액을 일시불로 결제하여, 해당 인스턴스의 시간당 사용 요금이 상당히 할인되는 효과를 얻을 수 있습니다. 
Amazon CloudFront는 콘텐츠 전송을 위한 웹 서비스입니다. 
 다른 Amazon Web Services와 통합되므로, 낮은 지연 시간과 빠른 데이터 전송 속도로 최종 사용자에게 콘텐츠를 편리하게 배포할 수 있으며, 최소 약정이 필요 없습니다. 
 Amazon CloudFront 비용을 추정할 때는 다음을 고려해야 합니다. 
 · 트래픽 배포. 
 데이터 전송 및 요청 요금은 지리적 리전에 따라 다르며, 요금은 콘텐츠가 제공되는 엣지 로케이션을 기반으로 합니다. 
 · 요청. 
 요청한 수 및 유형과 요청이 발생한 지리적 리전입니다. 
 · 데이터 송신. 
 Amazon CloudFront 엣지 로케이션에서 송신된 데이터의 양입니다. 
 비용을 추정하는 가장 좋은 방법은 각 AWS 서비스의 기본 특성을 살펴보고, 각 특성에 대한 사용량을 추정한 다음, 해당 사용량을 웹 사이트에 게시된 요금에 매핑하는 것입니다. 
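예를 들어 다음과 같이, 추정 사용량을 가정한 단가표에 매핑하는 식으로 계산할 수 있습니다. 모든 단가는 설명을 위한 예시이며, 실제 요금은 각 서비스의 요금 페이지를 확인해야 합니다.

# 서비스별 핵심 특성에 대한 사용량 추정치를, 웹 사이트에 게시된 요금이라고
# 가정한 단가표에 매핑해 월간 비용을 어림잡는 스케치입니다(모든 단가는 예시 값).
usage = {
    'ec2_instance_hours': 2 * 730,   # 인스턴스 2대 x 월 730시간
    's3_storage_gb': 500,
    's3_get_requests': 1_000_000,
    'data_egress_gb': 200,
}
assumed_rates = {
    'ec2_instance_hours': 0.0416,    # USD/시간 (예시)
    's3_storage_gb': 0.023,          # USD/GB-월 (예시)
    's3_get_requests': 0.0000004,    # USD/요청 (예시)
    'data_egress_gb': 0.09,          # USD/GB (예시)
}

monthly = sum(usage[k] * assumed_rates[k] for k in usage)
print(f'추정 월간 비용: 약 ${monthly:.2f}')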
 저는 Amazon Web Services(AWS) 교육 및 자격증 팀의 Jody Soeiro de Faria였습니다.
 
 - TCO 계산기
 안녕하세요. 
 저는 Amazon Web Services 교육 및 자격증 팀의 Anna Fox입니다. 
 이 동영상에서는 AWS 총 소유 비용(TCO) 계산기에 대해 알아보겠습니다. 
 TCO 계산기는 다음과 같은 작업에 도움이 되는 도구입니다. 
· AWS를 사용할 때의 비용 절감을 추정
· 임원 프레젠테이션에 사용할 수 있는 상세 보고서 세트를 제공
· 요구 사항에 가장 잘 맞게 가정 수정
더 자세히 알아보겠습니다. 
 TCO 계산기는 <https://awstcocalculator.com>에 방문해서 시작할 수 있습니다. 
 TCO 계산기에서 사용할 수 있는 옵션을 간단히 살펴보겠습니다. 
 이 탭을 클릭하고 Basic과 Advanced 사이에 전환하면 아래에 VM Usage, Optimize By 및 Host와 관련된 Advanced의 추가 옵션이 표시됩니다. 
 그럼 Basic부터 시작해 보겠습니다. 
 여기서 Currency를 선택한 다음 비교하려는 환경 유형을  On-Premises 또는 Colocation 중에서 선택합니다. 
 그런 다음 비즈니스 요구 사항에 가장 적합한 Region을 선택합니다. 
 Workload Type: General 또는 SharePoint Site를 선택할 수 있습니다. 
 그 다음, 물리적 서버 또는 가상 머신의 비교 중에서 선택합니다. 
 Virtual Machines를 선택해 보겠습니다. 
 이제 Servers 필드로 이동하고 몇 가지 값을 입력해 봅니다. 
 Server Type은 Non DB로 둡니다. 
 그런 다음 Application Name에 입력합니다. 
 이것은 선택 사항이지만 나중에 보고서 사용을 분명하게 하는 데 도움이 될 수 있습니다. 
 이제 VMs가 있습니다. 
 100개가 있고, CPU 코어 2개와 8GB RAM을 갖추고 있다고 가정합니다. 
 그러면 하이퍼바이저 중에 선택하는 옵션이 있지만 Xen을 사용할 것입니다. 
그리고 Calculate TCO를 클릭합니다. 
 그러면 계산기는 3년의 시간 프레임에 대해 입력한 값을 볼 때, AWS로 이동하면 42%를 절감할 수 있다고 알려 줍니다. 
 또한 달러 금액의 절감 효과도 제공합니다. 
 아래로 스크롤하면 환경 세부 정보가 표시됩니다. 
 계산기가 입력한 값에 따라 인스턴스 유형을 선택하는 것에 주목합니다. 
 필요한 인스턴스 유형을 파악하는 것은 어려운 작업일 수 있지만, TCO 계산기를 사용하면 입력한 값과 설정을 기반으로 제안을 해 줍니다. 
 이제 상단으로 돌아가서 Change Input을 선택하면 초기 페이지로 이동합니다. 
 Advanced로 변경하여 어떤 옵션이 있는지 살펴보겠습니다. 
 이제 VM 사용과 최적화 방법을 추가하고 마지막으로 호스트를 추가할 수 있습니다. 
 최적화 방법에 대한 명확한 정보가 필요한 경우 매핑 기준을 설명하는 차트가 표시됩니다. 
 여기에는 해당 옵션이 일치하는 방식에 대한 설명이 나와 있습니다. 
 아래로 스크롤합니다. 
 계산기가 비용을 구분하는 차트를 생성한 것을 확인할 수 있습니다. 
 이는 비용 구분 내역을 그래픽으로 표시하는 데 유용한 도구가 될 수 있습니다. 
 마지막으로, 비용 구분 내역, 방법론, 가정 및 FAQ를 포함하는 전체 보고서를 다운로드할 수 있습니다. 
 또 다른 장점은 보고서를 Amazon S3에 저장하고 원하는 경우 다른 사람들과 공유할 수 있다는 것입니다. 
 TCO 계산기는 AWS 클라우드에 애플리케이션을 배포함으로써 실현할 수 있는 잠재적 절감 효과에 대한 지침을 제공하는 도구입니다. 
 이 도구는 대략적인 용도로만 사용되지만 실제로 달성할 수 있는 가치에 대해 공정한 평가를 제공할 수 있다는 점을 기억하세요. 
 좋습니다. 
 오늘 AWS 총 소유 비용(TCO) 계산기에 대해 간단하게 알아보았습니다. 
 앞서 언급했듯이, TCO 계산기는 다음과 같은 작업에 도움이 되는 도구입니다. 
· AWS를 사용할 때의 비용 절감을 추정
· 상세 보고서 세트를 제공 
 · 비즈니스 요구 사항에 가장 잘 맞게 가정 수정
 
 TCO 계산기에 대한 세부 정보 및 추가 리소스를 확인하려면 <http://aws.amazon.com/ko>에 방문하세요. 
 저는 Amazon Web Services 교육 및 자격증 팀의 Anna Fox였습니다. 
 
 
 - AWS Support Plan 
 안녕하세요. 
 저는 Amazon Web Services 교육 및 자격증 팀의 Anna Fox입니다. 
 오늘은 AWS Support와 사용 가능한 Support 플랜에 대해 알아보겠습니다. 
 AWS는 고객의 성공을 지원하는 데 필요한 리소스를 제공하고자 합니다. 
 따라서 새로운 고객이든, 비즈니스 솔루션으로 AWS 서비스와 애플리케이션을 계속 채택하는 고객이든 상관없이 AWS를 통해 놀라운 작업을 수행할 수 있도록 돕고자 합니다. 
 AWS Support는 현재 또는 미래의 계획된 사례를 기반으로 도구 및 전문 지식의 고유한 조합을 제공합니다. 
 이 동영상에서는 AWS Support 및 AWS Support 플랜을 살펴봅니다. 
AWS Support는 고객의 성공을 돕기 위한 완전한 지원과 올바른 리소스를 제공하기 위해 개발되었습니다. 
 AWS를 실험하고 있는 사람들, 프로덕션에서 AWS를 사용하려는 사람들, 그리고 AWS를 비즈니스 크리티컬 리소스로 사용하는 고객을 포함하는 모든 고객을 지원하고자 합니다. 
 AWS Support는 고객의 요구 사항과 목표에 따라 제공되는 지원 유형을 변경할 수 있습니다. 
 고객은 AWS를 통해 확신을 가지고 계획, 배포 및 최적화할 수 있습니다. 
 AWS는 고객에게 지원이 필요한 경우 안내할 수 있는 도구와 리소스를 갖추고 있습니다. 
 사전 지침을 원하는 사용자가 있는 경우 해당 사용자의 1차 연락 담당자로 지정된 기술 계정 관리자가 있습니다. 
 기술 계정 지원 담당자(TAM)는 사용자가 솔루션을 계획, 배포 및 최적화할 때 계속 정보를 얻고 대비할 수 있도록 지침, 아키텍처 검토 및 지속적인 커뮤니케이션을 제공할 수 있습니다. 
 TAM은 고객의 대변자이자 AWS 내 전담 지원 창구입니다. 
 AWS 환경에서 성능과 내결함성을 개선하는 모범 사례를 따르고자 하는 사용자에게는 AWS Trusted Advisor가 있습니다. 
 AWS Trusted Advisor는 사용자 지정 클라우드 전문가와 비슷하지만, 실제로는 월별 지출을 줄이고 생산성을 개선할 수 있는 기회를 확인하는 온라인 리소스입니다. 
 다음으로 계정 지원이 필요한 경우, Support 컨시어지는 결제 및 계정 전문가로서 문제에 대한 빠르고 효율적인 분석과 지원을 제공하여 사용자가 비즈니스에 더 많은 시간을 쏟을 수 있도록 지원합니다. 
 컨시어지는 기술적이지 않은 결제 및 계정 수준의 모든 문의를 처리합니다. 
 이제 Support 플랜 옵션을 알아보겠습니다. 
 AWS는 고객이 확신을 가지고 계획, 배포 및 최적화할 수 있기를 바랍니다. 
 이에 따라 고객을 지원하기 위해 AWS는 도움을 줄 특정 플랜을 개발했습니다. 
 Basic Support 플랜, Developer Support 플랜, Business Support 플랜 및 Enterprise Support 플랜이 있습니다. 
 AWS 홈페이지로 이동하면 Support를 클릭해 봅니다. 
 그러면 AWS Support 페이지로 이동하게 됩니다. 
 여기서 각 플랜을 살펴보고 비교할 수 있습니다. 
 각 플랜을 더 자세히 보고 싶으면 탐색 창에서 해당 플랜을 클릭합니다. 
 그러면 Developer Support 플랜을 살펴보겠습니다. 
 여기에서는 이 플랜이 제공해야 할 사항과 추가 세부 정보 및 리소스에 대한 간략한 소개를 제공합니다. 
 각 플랜에 대해 이러한 모든 정보를 볼 수 있습니다. 
 플랜을 비교하려면 AWS Support 플랜 비교를 클릭합니다. 
 이는 플랜 간의 차이점을 평가하고 고유한 요구 사항에 가장 적합한 리소스를 확인하는 데 유용한 차트입니다. 
 여기에서 확인할 수 있듯이 플랜의 지원 항목과 추가 리소스가 달라집니다. 
 이 차트는 고객이 요구 사항에 가장 적합한 플랜과 필요한 리소스 수준을 결정하고 이해하는 것을 돕기 위해 마련되었습니다. 
좋습니다. 
 이제 마무리 단계에 접어들었습니다. 
 이 동영상에서는 AWS Support 및 고객의 요구 사항에 적합하도록 마련된 다양한 플랜 옵션에 대해 살펴보았습니다. 
 오늘 알아본 플랜은 다음과 같습니다. 
· Basic Support 플랜
· Developer Support 플랜
· Business Support 플랜
· Enterprise Support 플랜  

AWS Support 및 AWS Support 플랜에 대한 세부 정보 및 추가 리소스를 확인하려면 <http://aws.amazon.com/ko>에 방문하세요. 
  저는 Amazon Web Services 교육 및 자격증 팀의 Anna Fox였습니다. 
  

 

 

 


 - 보안 그룹 및 NACL
  이제 AWS에서 애플리케이션을 위한 네트워킹을 보호하는 방법에 대해 설명해 보겠습니다. 
 이 경우에는 3티어 애플리케이션으로 단순화합니다. 
 이미 주요 네트워킹 요소들을 모두 추가했습니다. 
 그리고 인터넷 게이트웨이, 가상 프라이빗 게이트웨이 등의 게이트웨이도 갖추었습니다. 
 여러 가용 영역(AZ)에 걸쳐 분산된 서브넷(퍼블릭 및 프라이빗)도 갖추었습니다. 
 올바른 자산에 대해 올바른 서브넷 액세스 권한을 부여하도록 라우팅 테이블을 추가했습니다. 
 또한 보안 그룹에 대해 살펴보았으며 포트, 프로토콜 및 IP 범위에 따라 수신 트래픽을 정의하는 보안 그룹이 모든 인스턴스에 있다는 사실도 알아보았습니다. 
 보안 그룹의 기본 동작은 모든 인바운드 트래픽을 거부하는 것이므로 보안 그룹 간에 수신되는 모든 트래픽을 허용하도록 명시적 규칙을 추가해야 합니다. 
이것이 VPC 내부의 유일한 트래픽 조절 장치는 아닙니다. 
 서브넷은 IP 범위, 포트 및 프로토콜을 기반으로 필터링할 수도 있는 개별 서브넷에 적용되는 선택적 네트워크 액세스 통제 목록(NACL)을 가질 수 있습니다. 
이 목록(NACL)은 중복처럼 느껴질 수 있으며, 실제로 많은 네트워킹 요구 사항에서는 중복이기도 합니다. 
 다만 NACL의 운영 방식은 보안 그룹의 운영 방식과는 다릅니다. 
 NACL은 상태비저장(stateless) 방식으로 운영되는 반면, 보안 그룹은 상태저장(stateful)방식으로 운영됩니다. 
다시 말해, 보안 그룹에서 허용되어 인스턴스 밖으로 나간 패킷은, 소스 인스턴스로부터의 통신을 허용하는 인바운드 규칙이 없더라도 그 반송 트래픽이 항상 허용됩니다. 
반송 트래픽을 기억한다는 이 개념이 보안 그룹과 NACL을 구분 짓습니다. 반면 NACL은 인바운드 규칙 집합과 아웃바운드 규칙 집합을 하나씩 갖습니다. 
그리고 각 규칙 집합은 왕복 과정의 일환으로 각각 평가되어야 합니다. 
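이 차이를 코드로 표현하면 다음과 같습니다. 보안 그룹은 인바운드 규칙 하나로 충분하지만, NACL은 인바운드 규칙과 반송 트래픽용 아웃바운드 규칙을 각각 작성해야 합니다. ID 값들은 가정이며, boto3(Python) 기준의 스케치입니다.

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')
SG_ID = 'sg-0123456789abcdef0'    # 예시 값
NACL_ID = 'acl-0123456789abcdef0' # 예시 값

# 보안 그룹(상태 저장): 인바운드 443만 허용하면 반송 트래픽은 자동으로 허용됩니다.
ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[{'IpProtocol': 'tcp', 'FromPort': 443, 'ToPort': 443,
                    'IpRanges': [{'CidrIp': '0.0.0.0/0'}]}],
)

# NACL(상태 비저장): 인바운드 규칙과 아웃바운드(반송) 규칙을 따로 작성해야 합니다.
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID, RuleNumber=100, Protocol='6',  # 6 = TCP
    RuleAction='allow', Egress=False,
    CidrBlock='0.0.0.0/0', PortRange={'From': 443, 'To': 443},
)
ec2.create_network_acl_entry(
    NetworkAclId=NACL_ID, RuleNumber=100, Protocol='6',
    RuleAction='allow', Egress=True,   # 반송 트래픽용 아웃바운드 규칙
    CidrBlock='0.0.0.0/0', PortRange={'From': 1024, 'To': 65535},  # 임시 포트 범위
)

이제 이 구성을 염두에 두고, 패킷 하나가 실제로 이동하는 과정을 예로 살펴보겠습니다.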
 이제 한 가지 예를 들어 이것(NACL)이 어떤 방식으로 운영되는지 알아 보겠습니다. 
 이 특정 인스턴스에서 시작하는 하나의 패킷을 갖고 있으며 해당 패킷은 이 특정 인스턴스와 통신하려는 상황을 가정해 봅시다. 
 그렇다면 이 패킷은 인스턴스 1의 보안 그룹, 퍼블릭 서브넷 1의 NACL, 프라이빗 서브넷 1의 NACL 및 인스턴스 2 주변의 보안 그룹을 각각 통과하게 됩니다. 
 평가 규칙은 다음과 같습니다. 
먼저 첫 번째 인스턴스 주변의 보안 그룹을 빠져나가려는 패킷부터 시작합니다. 
기본 규칙 집합은 모든 아웃바운드(송신) 트래픽을 허용하는 것입니다. 
다만 이 규칙 집합은 변경되었을 수도 있기 때문에, 해당 패킷이 올바르게 승인된 포트, 프로토콜 및 IP 범위를 대상으로 하는지 확인하는 패킷 검사가 실시됩니다. 
본 시나리오에서는 패킷이 허용되기 때문에 해당 패킷은 홈 인스턴스를 빠져나갈 수 있으며, 이 지점에서 퍼블릭 서브넷 1의 서브넷 경계에 도달합니다. 
 여기는 여권 검사대(passport control)입니다. 
 패킷이 이 인스턴스를 종료할 수 있도록 허용됩니까? 지금 안전한 위치로 이동 중인가요? 그 표적을 검사합니다. 
 이것이 승인, 포트, 프로토콜, IP인지 확인합니다. 
 그렇습니다. 
 승인된 것입니다. 
 출국 수속 절차를 모두 통과하면 출국하여 서브넷을 떠날 수 있으며 다음 지역을 여행할 수 있게 되는데 여기서도 여권 검사대를 통과하게 됩니다. 
 이제 프라이빗 서브넷 1로 들어가는 중입니다. 
 명시적으로는 포트, 프로토콜, IP 주소를 나타내는 라인이 있어야 하는데 이것은 승인된 것인가요? 기본 동작들은 모두 허용하지만 또 다시 블록이 존재할 수 있습니다. 
 이 시나리오에서는 블록이 없고 패킷은 여권 검사대를 통과하여 해당 인스턴스에 도달합니다. 
 한 번 더 포트, 프로토콜 및 IP 주소가 필요합니다. 
 이는 상태 저장 검사에 속합니다. 
 다만 이 패킷이 실제로 요청을 받은 것인지 한 번 알아보겠습니다. 
 입장이 허용되었습니까? 명시적 규칙은 ‘모든 트래픽을 거부’하는 것입니다. 
 이것은 패킷이 수신되지 않을 경우, 사용자가 차단되는 여러 원인들 중 가장 가능성이 큰 원인에 해당됩니다. 
 이 규칙은 화이트리스트에 추가해야 합니다. 
 이 시나리오에서 해당 규칙은 화이트리스트에 추가되었습니다. 
 패킷은 인스턴스로 들어가 인스턴스 내에서 필요한 모든 작업을 수행합니다. 
 작업은 완료되고 파티는 끝나며 이제 복귀할 시간입니다. 
 응답 트래픽을 발신할 시간입니다. 
 이 경우, 파티를 떠나고 집 밖으로 나가면 도어맨이 나를 인지합니다. 
 상태 저장 방화벽이군요. 
 이 방화벽은 내가 홈(home)으로 복귀해도 되는지 여부를 확인하는 검사를 수행하지 않습니다. 
 다만 방화벽은 나를 건물 밖으로 내보내면서 “즐거운 하루 보내세요”라고 작별 인사를 건넵니다. 
 건물을 벗어나 서브넷 경계에 도착하면 여권 검사대가 나타납니다. 
 서브넷 경계에서는 사용자가 들어가도 되는지 여부를 신경 쓰지 않으며 반송 트래픽이 허용되는지 여부를 다시 확인하게 됩니다. 
 서브넷 경계는 반송 트래픽이 무엇인지를 인지하지 못합니다. 
 다만 포트, 프로토콜 및 IP만 인지할 뿐입니다. 
 이 경우에는 블록이 없습니다. 
 기본 동작들은 모두 허용합니다. 
 트래픽은 홈 서브넷의 여권 검사대를 떠납니다. 
 다시 한 번 포트, 프로토콜 및 IP 주소가 필요합니다. 
 허용됩니까? 예, 허용됩니다. 
 이것은 상태 비저장에 해당됩니다. 
 메모리는 없습니다. 
 그런 다음, 발신 인스턴스의 보안 그룹에 도착하며 내 홈 도어맨은 내가 맨 처음 문 밖으로 나갈 수 있도록 허용된 사실을 인식했습니다. 
 도어맨은 홈으로 들어오면서 변경된 부분은 없는지 확인하는 검사를 수행하지 않습니다. 
 홈 복귀가 허용되며 해당 패킷은 프로세스를 완료합니다. 
 이 지점에서 그러한 차이들을 관리했습니다. 
 이 때쯤 되면 본 동영상을 시청 중인 수강생들 중 상당수는 이렇게 말할 것 같습니다. 
“오 이런... 여권 검사대에서 출입 및 출국 수속 검사가 모두 이루어지니, 내 서브넷의 네트워크 액세스 통제 목록(NACL)이 더 강력한 보안 기능인 것 같습니다. 
그것만 사용해야겠네요.” 
이렇게 생각하기에 앞서, 인스턴스 1에서 인스턴스 3까지 상호 통신이 이루어질 때 통과하는 서브넷 경계가 몇 개인지 질문해 볼 필요가 있습니다. 
 답은 0입니다. 
 동일한 서브넷에 있다면 처리된 NACL은 없는 것입니다. 
 NACL은 오로지 서브넷 경계를 통과하는 중 처리되며 그것(NACL)이 가용 영역(AZ)에 걸쳐 존재하는지 여부는 문제가 되지 않습니다. 
 나는 서브넷 내에 있을 때에는 NACL을 처리하지 않습니다. 
 다만 나는 보안 그룹들을 항상 처리하며 모든 인스턴스는 보안 그룹들을 갖습니다. 
그래서 규칙 작성 시 기본 동작은 모든 규칙을 보안 그룹에 입력하고, NACL은 그 보호를 이중으로 보강하거나 보안 그룹 수준에서 선언된 동작을 차단해야 할 때 사용하는 것입니다. 
 이렇게 하면 Amazon VPC 내부의 자산을 보호하는 데 필요한 여러 가지 컨트롤 중 한 가지를 얻게 됩니다. 
 
 - 보안 그룹
 AWS 내에서 애플리케이션을 보호하는 방법에 대해 계속 설명해 보겠습니다. 
 앞서 우리는 퍼블릭 서브넷과 프라이빗 서브넷을 이용해 VPC를 설정하는 방법을 살펴보았으며, 라우팅 테이블을 사용해 게이트웨이에 대한 액세스를 제어하는 작업에 대해서도 알아보았습니다. 
 이제는 애플리케이션 내에서 개별 인스턴스에 대한 액세스로 이동해보려고 합니다. 
프런트엔드 웹 서버는 애플리케이션 인스턴스와만 통신하고, 해당 애플리케이션만 데이터베이스와 통신할 수 있도록 권한을 제어하려면 어떻게 해야 할까요? 이러한 제어는 보안 그룹(security groups)이라 불리는 AWS 엔진을 통해 이루어집니다. 
 VPC 내부의 모든 인스턴스는 이미 그 주위에 보안 그룹을 하나씩 갖고 있습니다. 
 이것은 Amazon EC2 기술의 핵심에 속합니다. 
 보안 그룹은 기본적으로 모든 수신 트래픽을 차단하는 개별 인스턴스 주변의 방화벽으로 간주할 수 있습니다. 
 이는 매우 지루한 인스턴스를 야기합니다. 
 개발자들은 방화벽을 사용하고 트래픽의 특정 소스를 승인하는 명시적 규칙들을 방화벽과 보안 그룹 내부에 작성합니다. 
 소스는 IP 주소, 프로토콜 및 포트로 정의됩니다(IP 범위는 실제로 의미하는 대상에 해당됩니다). 
 내 애플리케이션 서버가 프런트엔드 웹 서버의 트래픽만 허용하고 싶다면 해당 범위, 포트 및 웹 서버 자체의 프로토콜에서 발신되는 트래픽을 허용할 보안 그룹의 구멍을 열기만 하면 됩니다. 
 데이터베이스는 그 주위에 자체 보안 그룹을 갖게 되며 애플리케이션 서버에서 발신된 트래픽만 허용합니다. 
사용자의 프런트엔드 웹 서버도 하나의 보안 그룹을 가지며, 이 서버만이 외부 웹에서 수신되는 0.0.0.0/0 트래픽을 허용하여 웹 서버와 통신할 수 있게 합니다. 
 이를 일컬어 스택 보안 그룹(stacked security groups)이라 합니다. 
외부에서 발신되는 모든 트래픽은 라우팅 테이블 때문에, 그리고 보안 그룹들이 0.0.0.0/0 트래픽을 허용하지 않기 때문에 애플리케이션 서버 또는 데이터베이스와 통신할 수 없습니다. 
 이 보안 그룹은 웹 서버의 트래픽 정보만 허용합니다. 
 외부의 어떤 자가 연결할 수 있는 유일한 것은 프런트엔드 웹 서버이며 이는 보안 그룹이 허용하는 유일한 서버이기 때문에 거기서만 애플리케이션에 연결할 수 있으며 애플리케이션에서 데이터베이스로 연결할 수 있습니다. 
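이러한 스택 보안 그룹 구성은 boto3로 다음과 같이 표현할 수 있습니다. IP 범위 대신 앞 계층의 보안 그룹 자체를 소스로 참조하는 것이 핵심이며, 그룹 ID와 포트는 가정 값입니다.

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')
WEB_SG = 'sg-0aaaaaaaaaaaaaaa0'  # 예시 값
APP_SG = 'sg-0bbbbbbbbbbbbbbb0'  # 예시 값
DB_SG = 'sg-0ccccccccccccccc0'   # 예시 값

# 앱 계층: 웹 계층 보안 그룹에서 오는 트래픽만 허용 (IP 범위 대신 보안 그룹 참조)
ec2.authorize_security_group_ingress(
    GroupId=APP_SG,
    IpPermissions=[{'IpProtocol': 'tcp', 'FromPort': 8080, 'ToPort': 8080,
                    'UserIdGroupPairs': [{'GroupId': WEB_SG}]}],
)

# DB 계층: 앱 계층 보안 그룹에서 오는 트래픽만 허용
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{'IpProtocol': 'tcp', 'FromPort': 3306, 'ToPort': 3306,
                    'UserIdGroupPairs': [{'GroupId': APP_SG}]}],
)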
 이는 알 수 없는 결함이 발생할 때 하나 또는 둘 이상의 보호 계층을 형성합니다. 
예를 들어, 어떤 이유로 애플리케이션 개발자 중 한 명이 악의적 해커가 악용할 수 있는 결함이나 열린 요소를 방치하여 프런트엔드 웹 서버에 취약성이 존재한다면 무슨 일이 벌어질까요? 여기서 보안 그룹은 0.0.0.0/0에 대해 포트 443 또는 포트 80만 열어 두기 때문에, 문제의 해커는 바로 이 인스턴스에 연결하여 가상의 결함을 악용하고 해당 인스턴스에 대한 루트 액세스 권한을 확보할 수 있었습니다. 
 실현 가능성이 희박한 이 시나리오에서 문제의 해커는 데이터베이스와 통신하여 데이터베이스에 침입하려는 시도를 하기 위한 액세스 권한을 갖고 있나요? 이 질문에 대한 답은 ‘아니요’입니다. 
 라우팅 테이블이 어떤 패킷의 경로가 존재할 수 있도록 허용하더라도 데이터베이스 주위의 해당 보안 그룹은 웹 서버에서 해당 데이터베이스와 직접 통신하려는 시도를 거부하게 됩니다. 
 이는 간단하게 허용되지는 않습니다. 
 그 대신 해커가 할 수 있는 유일한 일은 애플리케이션 포트를 통해서만 애플리케이션 서버와 통신하는 것입니다. 
이는 웹 애플리케이션 자체와 통신하는 것만으로도 어차피 할 수 있었던 일입니다. 
한편 이 해커가 현재 들어와 있는 웹 서버에는 Amazon Inspector든 운영 체제에 상주하는 다른 프로세스든 일정 수준의 운영 보호 기능이 설치되어 있을 가능성이 매우 높습니다. 루트에 누군가 들어와 있는데 예상 숫자는 0이고 실제 숫자는 0보다 크다는 사실이 인식되는 순간, 인스턴스는 자신이 위반 상태에 있음을 스스로 보고합니다. 
 인스턴스는 손상됩니다. 
 인스턴스가 할 수 있는 것은 무엇입니까? 인스턴스는 Amazon S3로 로그를 덤프할 수 있으며, 시스템 관리자들에게 알림을 전송한 후 마지막으로 자체 종료됩니다. 
 인스턴스는 더 이상 존재하지 않으며 악의적 해커는 불과 수초 이내에 이러한 가상의 취약성을 역이용합니다. 
 이것이 바로 계층 내 보안입니다. 
 이러한 보안에서는 모든 개별 요소가 손상될 것을 예상하기 때문에 계층들이 겹겹이 위치하며 각 계층은 공격을 차단할 수 있습니다. 
 지금 어떤 계층을 사용하고 있습니까? 모든 계층을 사용하고 있습니다. 
 보안 그룹들은 모든 인스턴스 주위에 존재하는데 이는 개발자들이 하게 될 첫 번째 작업이 모든 포트, 모든 프로토콜 및 모든 IP 주소에 대하여 보안 그룹을 여는 것임을 의미합니다. 
 저는 이러한 작업이 이루어지는 것을 목격했습니다. 
 그것은 보안 그룹의 모든 목적을 무효화합니다. 
이는 집 한 채를 장만하고서 가장 먼저 현관문을 경첩째 뜯어낸 뒤 “공짜 쿠키! 들어오세요”라고 적힌 큰 간판을 내거는 것과 같습니다. 
그 시점에 쿠키를 전부 도난당하더라도 그 누구의 잘못도 아닌 셈이 됩니다. 
충분한 보안 점검의 일환으로, 보안 관리자는 퍼블릭 인스턴스만 보안 그룹에서 0.0.0.0/0에 열린 포트를 갖고 있는지, 그리고 내부 프라이빗 보안 그룹이 예상되는 인스턴스의 트래픽만 허용하도록 적절하게 계층화되어 있는지 확인하기 위해 보안 그룹들을 정기적으로 점검해야 합니다. 
 보안 그룹들은 애플리케이션 내 가장 소중한 자산들을 보호하는 데 도움을 주기 위해 사용되는 또 다른 키 도구가 됩니다. 

 

 


 
 - 공동 책임 모델
 여러분이 사용하는 애플리케이션이 AWS에서 실행되고 있을 때, 어떤 지점에서 누군가는 애플리케이션을 보호할 궁극적 책임을 담당해야 합니다. 
 그 책임은 ‘A) 고객’ 여러분에게 있을까요? 아니면 ‘B) AWS’에 있을까요? 정답은 ‘C) 고객과 AWS 모두 해당’입니다. 
 고객 여러분과 AWS는 모두 전체 애플리케이션을 보호하기 위해 함께 협력해야 합니다. 
좋은 보안 관리자라면, 동일한 대상을 서로 다른 두 주체가 각각 보호하게 할 수는 없으며 그렇게 되면 실제 보안(security)이 아니라 제안(suggestions)만 있는 셈이라고 말해줄 것입니다. 
 이에 동의하면 AWS에서 우리는 공동 책임 모델(Shared Responsibility Model)이라 불리는 모델을 갖습니다. 
 우리는 애플리케이션 스택을 전체적으로 관찰하며 이를 여러 개의 부분으로 분할합니다. 
 그러한 부분들 중 일부에 대해서는 AWS가 100% 책임을 집니다. 
 나머지 부분들에 대해서는 고객 여러분이 100% 책임을 집니다. 
 그러한 분할의 위치를 파악하는 것은 고객 여러분과 AWS의 상호 작용 중 일부에 속합니다. 
 이제 스택이 어떻게 작동하는지를 단순한 관점으로 살펴보겠습니다. 
 먼저 물리적 계층에서 스택의 맨 아랫쪽부터 시작합니다. 
 이것은 철과 콘크리트입니다. 
 이것은 철조망 울타리입니다. 
 어떤 이는 주차장을 관리해야 합니다. 
 어떤 이는 물리적 디바이스를 관리해야 합니다. 
 이것이 AWS입니다. 
 저희 AWS는 데이터 센터 둘러보기를 제공하지 않습니다. 
 또한 물리적 측면을 보호하는 방법의 일환에서 어떠한 유형의 액세스 권한도 부여하지 않습니다. 
 물리적 계층의 맨 위에서 우리는 AWS 시스템의 보안을 허용하도록 설계된 독점 네트워킹 프로토콜인 AWS 네트워크를 실행하기 때문에 가상 프라이빗 클라우드(Amazon VPC) 같은 요소들은 규모와 속도에 따라 작동할 수 있으며 모든 요소들은 트래픽을 보호하도록 설계되어 있습니다. 
 AWS는 트래픽을 어떻게 보호할까요? 이것은 AWS 보안의 일부로서 알려줄 수 없습니다. 
 만약 이 점에 대해 언급한다면 많은 부분을 알려주지 않는 셈이 됩니다. 
 저는 이러한 점을 이해하고 있습니다. 
 AWS에서 하는 일을 여러분에게 정확히 알려줄 수는 없지만 감사 기관에 대해서는 이를 매우 구체적으로 설명했습니다. 
 AWS.amazon.com/compliance를 방문하면 네트워크 스택 또는 물리적 요소를 이미 검토했거나 이들을 정기적으로 검토하는 수많은 타사 감사 활동을 볼 수 있습니다. 
 이 모든 감사에서는 네트워크가 안전하다고 할 때 Amazon이 아닌 누군가 - 감사가 어떻게 완료되었는지를 설명하는 신뢰할만한 자 - 가 검토한 매우 엄밀한 정보를 제공합니다. 
 네트워크의 맨 위에는 하이퍼바이저(hypervisor)가 있습니다. 
 지금은 AWS 하이퍼바이저가 Xen 기반의 하이퍼바이저를 사용한다는 사실을 밝히고 있습니다. 
 그렇지만 보안을 구현하고 확장성을 제공하며 어떤 데이터의 유출에 대해서도 걱정하지 않고 수백만 명의 동시 고객들을 실행할 수 있도록 하이퍼바이저를 특별히 많이 변경했습니다. 
 하이퍼바이저의 맨 위에서 EC2를 실행할 경우, Amazon Elastic Compute Cloud(Amazon EC2)의 하이퍼바이저로부터 게스트 운영 체제를 분리하는 마법의 구분선이 존재하며 이 경우에는 해당 운영 체제를 선택합니다. 
 사용자는 각자의 기호에 따라 Linux 또는 Windows를 선택하며 어떤 애플리케이션이 실행되는지를 선택합니다. 
 이 구분선 위에서 AWS는 0의 가시성(zero visibility)을 갖습니다. 
 운영 체제에서 어떤 일이 발생하는지를 확인하기 위해 할 수 있는 일은 없습니다. 
 AWS는 여러분이 사용하는 애플리케이션에 대해서는 아는 바가 없기 때문에 사용자 데이터에 대해서도 알지 못합니다. 
 이러한 데이터는 액세스 키 비밀 키 조합과 암호화 방법을 통해 전적으로 보호되는 콘텐츠에 해당됩니다. 
원한다 해도 그 내용을 읽을 수 없습니다. 
 사실, 클라우드에 대한 근거없는 통념 중 한 가지는 AWS가 예전에 과도한 마케팅 활동을 벌였던 수많은 이메일 서비스 업체들처럼 사용자의 정보를 물색하고 있다는 것입니다. 
 물론 Amazon.com에서는 그러한 정보를 좋아할만한 일부 마케팅 관리자가 있겠지만 그러한 정보에 접근할 권한이 있는 관리자라도 실제로는 AWS의 아키텍처 설계 방식 때문에 해당 정보에 접근할 수 없습니다. 
 해당 정보는 읽는 것 자체가 불가능합니다. 
 따라서 AWS는 구분선 아래에 있는 모든 것에 대해 100% 책임이 있으며, 구분선 위에 있는 모든 것에 대해서는 고객 여러분이 100% 책임을 지게 됩니다. 
 여러분이 각자의 역할을 담당하면 AWS는 자사의 역할을 수행합니다. 
 보안 애플리케이션 환경은 바로 이러한 과정을 통해 구현됩니다. 
 
 - IAM 사용자/그룹/역할
 AWS에서 권한이 작동하는 방식을 이해한다면 곧 IAM(Identity and Access Management)이 내부에서 어떻게 작동하는지를 이해하는 셈이 됩니다. 
 IAM에서는 영어가 문제가 됩니다. 
 명백한 것들을 의미하는 단어들이 있긴 하지만 그러한 단어들에 숨겨진 의미를 살펴보면 그러한 단어들을 사용하는 사람과 문맥에 따라 실제로는 완전히 다른 것을 의미할 수 있습니다. 
 사용자(user), 그룹(group) 및 역할(role)과 같은 단어들 즉, 매우 친숙한 느낌을 주면서 각 단어가 의미하는 것이라고 생각되는 것을 어느 정도 의미하는 단어들, 다만 IAM의 관점에서 볼 때 매우 특정한 의미를 담을 수 있는 단어들부터 차근차근 살펴보겠습니다. 
 먼저 사용자(user)의 개념부터 알아보겠습니다. 
 사용자란 무엇입니까? AWS IAM의 경우, 여기서 우리가 논하는 것(사용자)은 이름이 지정된 영구적 운영자를 가리킵니다. 
 그것은 인간일 수도 있고 기계(컴퓨터)일 수도 있습니다. 
 사용자가 무엇인지는 중요하지 않습니다. 
 사용자란 그 개념상 내 자격 증명이 영구적이며 이름과 암호, 액세스 키, 비밀 키 조합 등 무엇이든 간에 강제 순환이 있을 때까지 내 자격 증명이 이름이 지정된 사용자를 포함한 상태로 유지된다는 것을 의미합니다. 
 이것은 시스템 내에서 이름이 지정된 사용자들에 대한 내 인증 방법(authentication method)입니다. 
 그렇다면 그룹(group)이란 무엇입니까? 이 시점에서 이 그룹이란 용어는 분명한 의미를 담고 있어야 합니다. 
 그룹(group)이란 사용자의 모음을 의미합니다. 
 그룹은 Blaine을 포함한 많은 사용자들을 포함할 수 있으며 사용자들은 많은 그룹에 속할 수 있습니다. 
 이 개념이 이해하기가 어렵다고 말할 분도 계시겠군요. 
 역할(role)이란 무엇입니까? AWS IAM에서 역할은 사용자의 권한이 아니기 때문에 여기서는 (역할을) IAM으로 이해하는 것이 중요합니다. 
 역할(role)이란 인증 방법을 의미합니다. 
 사용자(user)란 운영자를 의미하며 인간이거나 기계(컴퓨터)일 수 있습니다. 
 여기서 중요한 점은 이것이 자격 증명의 영구적(permanent) 집합이라는 것입니다. 
 역할(role)은 운영자를 의미합니다. 
 그것은 인간일 수도 있고 기계(컴퓨터)일 수도 있습니다. 
 여기서 중요한 점은 역할을 포함한 자격 증명이 일시적(temporary)이란 것입니다. 
 어떤 경우든 간에 여기서 우리는 사용자 및 운영자를 위한 인증 방법(authentication method)을 확인하고 있습니다. 
 AWS에서는 모든 것이 API입니다. 
 이것이 중요합니다. 
 이는 또한 API를 실행하기 위해 무엇보다도 먼저 인증(authenticate)을 한 후에 권한을 부여(authorize)해야 한다는 것을 의미합니다. 
 역할은 권한(permissions)을 의미하지 않습니다. 
 다만 이것은 인증(authentication)을 의미할 뿐입니다. 
 권한은 어떤 경우든 간에 정책 문서(policy document)로 알려진 별도의 개체에서 발생합니다. 
 정책 문서는 JSON 문서입니다. 
 이 문서는 영구적 이름이 지정된 사용자 또는 사용자 그룹과 직접 연결되며, 혹은 역할과 직접 연결할 수 있습니다. 
 정책 문서는 지금 제가 화이트리스트에 추가하고 있거나 허용하는 특정 API를 나열하거나 혹은 API들의 와일드카드 그룹을 나열하는데, 그렇다면 어떤 리소스에 대해 나열하는 것일까요? 그것(정책 문서)은 계정 내 리소스에 대한 것일까요? 아니면 특정 부분 집합에 대한 것일까요? 몇몇 조건들이 있습니까? 홈 네트워크에 있다면 개인적으로만 정책 문서를 허용할 필요가 있을까요? VPN에 전화를 건다면? 혹은 어떤 위치에서든 정책 문서를 허용할까요? 하루 중 특정 시기에(정책 문서를 허용할까요)? 어쩌면 오로지 제반 함수를 수행할 수 있는 운영자들이 있을지도 모릅니다. 
 이 모든 운영자들은 정책 문서의 일부 요소가 됩니다. 
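예를 들어 다음은 특정 버킷에 대한 s3:PutObject만 화이트리스트에 추가하고 소스 IP 조건을 거는 정책 문서를 만들어 사용자에게 연결하는 boto3 스케치입니다. 버킷 이름, 사용자 이름, IP 대역은 모두 가정 값입니다.

import boto3, json

iam = boto3.client('iam')

# 특정 버킷에 대한 PutObject만 허용하는 정책 문서(JSON) 예시입니다.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-bucket/*",          # 예시 버킷
        "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},  # 예시 대역
    }],
}

policy = iam.create_policy(
    PolicyName='allow-put-example-bucket',
    PolicyDocument=json.dumps(policy_doc),
)
# 정책 문서는 사용자, 그룹 또는 역할에 연결할 수 있습니다.
iam.attach_user_policy(UserName='blaine', PolicyArn=policy['Policy']['Arn'])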
 이제 운영자들이 연결된 상태에서 하나의 API가 어떤 프로세스를 거치는 과정을 설명해 보겠습니다. 
 예를 들면, 어떤 운영자가 S3 버킷에 하나의 개체를 입력하려는 상황을 생각해 봅시다. 
 이것은 API 호출을 의미하며, 이러한 호출을 하기 위해 운영자들은 API를 실행하고 버킷 등의 개체를 S3에 입력합니다. 
 그리고 운영자들은 콘솔에 로그인하기 위해 사용하는 액세스 키, 비밀 키 또는 사용자 이름 및 암호 등 일련의 자격 증명 집합을 제공합니다. 
 이 모든 것들은 API 실행문을 나타냅니다. 
 이 실행문은 AWS API 엔진에 전달됩니다. 
 IAM 엔진이 수행하는 첫 번째 작업은 그러한 자격 증명, 사용자 이름 및 암호, 액세스 키, 비밀 키 등을 확인하고 이들 항목이 능동적 권한 부여 자격 증명인지를 검증하며, 이것이 영구 운영자(여기서는 Blaine)인지 혹은 어쩌면 프로젝트 17의 개발자 역할에 해당되는지 아니면 전체 사용자들의 모음을 포함할 수 있는 관리자 그룹에 해당되는지를 검증하는 것입니다. 
 어떤 경우든 간에 그러한 자격 증명들은 승인되며, 우리는 사용자 본인이 스스로 주장하는 운영자에 해당된다는 사실에 동의합니다. 
 그런 다음, 해당 운영자(예: 사용자, 사용자 그룹 또는 역할)와 관련된 정책 문서를 가져와 모든 정책 문서들을 한 눈에 평가합니다. 
 이제 여러분이 수행하는 작업 - 이 경우에는 S3에 개체를 입력 - 에 대한 권한이 해당 정책 문서를 통해 부여되는지를 알아보겠습니다. 
 권한이 부여된다면 API를 실행할 수 있게 됩니다. 
 이제 정책 문서는 명시적 거부(explicit deny)를 포함할 수도 있습니다. 
 이는 허용 실행문(allows statement)을 무시합니다. 
 허용 실행문이 없다면 암묵적 거부(implicit deny)가 있는 것입니다. 
 적어도 이러한 일이 발생하려면 하나의 함수를 화이트리스트에 추가해야 합니다. 
 다만 블랙리스트나 명시적 거부가 있을 경우에는 허용 실행문이 있는지 여부가 문제가 되지 않습니다. 
 그것은 몇몇 작업들이 발생하는 것을 영구적으로 방지해야 할 경우에 사용할 수 있습니다. 
 예를 들면, 어떤 생산 환경에서 자산을 중단하거나 종료할 수 있는 누군가가 필요하지 않은 상황을 생각해볼 수 있습니다. 
EC2 중지 및 종료를 프로덕션 리소스에 대해 거부하는 정책 문서를 하나 작성한 다음, 인스턴스를 내릴 수 있도록 허용된 유일한 주체인 권한 있는 시스템 관리자를 제외한 모든 그룹, 모든 사용자 및 모든 역할에 그 정책 문서를 연결하면 됩니다. 
 따라서 생산 환경에서 우연히 자신을 발견해 인스턴스를 종료하기 시작한 개발자가 있다면 거부 정책은 삭제 실행문(delete statement)을 거부하고 종료 실행문(terminate statement)의 실행을 거부할 것이기 때문에 아무 것도 종료되지 않습니다. 
 이 사람이 스스로 개발자라고 주장하는 자에 해당된다는 사실에 동의하더라도 말입니다. 
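이러한 명시적 거부는 예를 들어 다음과 같이 작성할 수 있습니다. 태그 키/값과 그룹 이름은 가정 값이며, 명시적 Deny는 어떤 Allow보다 우선합니다.

import boto3, json

iam = boto3.client('iam')

# 프로덕션 태그가 붙은 인스턴스의 중지/종료를 명시적으로 거부하는 정책 예시
deny_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["ec2:StopInstances", "ec2:TerminateInstances"],
        "Resource": "*",
        "Condition": {"StringEquals": {"aws:ResourceTag/env": "production"}},  # 가정한 태그
    }],
}

policy = iam.create_policy(
    PolicyName='deny-stop-terminate-production',
    PolicyDocument=json.dumps(deny_doc),
)
iam.attach_group_policy(GroupName='developers',  # 예시 그룹
                        PolicyArn=policy['Policy']['Arn'])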
 인증, 권한 부여. 
 이것은 추가적인 문제를 해결하며 일련의 자격 증명 집합에 해당됩니다. 
 예를 들면, 저의 개인 계정에서 어떤 이유 때문에 내 암호 관리를 소홀히 했다고 가정해 봅시다. 
 어쩌면 제 아이가 내 노트북을 사용하도록 방치했는데 그 녀석이 노트북에 어떤 바이러스를 다운로드했고 누군가 키 자동 기록기(key logger)를 획득해 내 사용자 이름과 암호를 가로챌 수도 있겠죠 - 설상가상으로 저는 멀티 팩터 인증을 사용하지 않습니다. 
 (제 시스템에서는 이 2가지 경우가 발생한 적이 없지만 실제로 발생한다고 가정해 보겠습니다.) 이러한 경우, 어디선가 어떤 악의적 해커가 내 관리 사용자 이름과 암호를 가로채 영구적 사용자와 연결된 모든 정책 문서에 접근할 권한을 확보합니다. 
 왕국의 열쇠(권한을 획득하기 위해 필요한 개인 정보)를 손에 넣었다는 사실을 확인한 이 해커는 “가지고 있는 비트코인을 다 내놓는 게 신상에 좋을 걸... 
 아니면 자산을 곧 삭제해 버릴 거야”라는 악의적인 내용의 인질 협박 메시지를 회사에 전달합니다. 
 그리고 실제로 비트코인이 위치한 지점을 검증하기 위해 회사의 일부 자산을 삭제합니다. 
 이에 회사 측은 당황합니다. 
 이제 무엇을 해야 할까요? 여러분이 지금 사용자와 연결된 정책 문서를 사용 중이며 루트 수준의 자격 증명을 사용하고 있지 않다면 이 시점에서 - 어떤 계정이 손상되었는지를 알지 못하는 - 보안 관리자는 단일한 작업의 모든 사용자, 그룹 및 역할에서 모든 정책 문서를 제거하는 단일 API 실행문을 실행할 수 있습니다. 
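실제 API에는 “모든 정책을 한 번에 제거”하는 단일 호출은 없으므로, 실무에서는 다음과 같이 나열 후 분리를 반복하는 스크립트 형태가 됩니다. 인라인 정책과 페이지네이션 처리는 생략한 단순화된 boto3 스케치입니다.

import boto3

iam = boto3.client('iam')

# 사고 대응 스케치: 모든 사용자/그룹/역할에서 연결된 관리형 정책을 떼어냅니다.
for u in iam.list_users()['Users']:
    for p in iam.list_attached_user_policies(UserName=u['UserName'])['AttachedPolicies']:
        iam.detach_user_policy(UserName=u['UserName'], PolicyArn=p['PolicyArn'])

for g in iam.list_groups()['Groups']:
    for p in iam.list_attached_group_policies(GroupName=g['GroupName'])['AttachedPolicies']:
        iam.detach_group_policy(GroupName=g['GroupName'], PolicyArn=p['PolicyArn'])

for r in iam.list_roles()['Roles']:
    for p in iam.list_attached_role_policies(RoleName=r['RoleName'])['AttachedPolicies']:
        iam.detach_role_policy(RoleName=r['RoleName'], PolicyArn=p['PolicyArn'])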
 회사의 비트코인을 가로채지 못한 범인(해커)은 이제 비트코인이 필요한 이유로 더 많은 자산을 삭제하기 위해 비트코인이 위치한 지점을 검증하려고 합니다. 
 API 작업이 해결되는 과정은 다음과 같습니다. 
 범인은 Blaine의 자격 증명을 이용해 API를 제출하면서 S3에서 개체를 삭제한다고 말합니다. 
 해당 API는 IAM 엔진을 통해 평가됩니다. 
 IAM 엔진은 이 API가 Blaine이라는 사실에 동의합니다. 
 이것은 올바른 사용자 이름이자 암호입니다. 
 그런 다음, IAM 엔진은 해당 계정에서 삭제된 해당 정책 문서들(연결된 상태)을 검사합니다. 
 이들 문서는 Blaine에 더 이상 연결되어 있지 않은데, 이는 Blaine이 어떤 문서든 삭제할 수 없으며 그나마 애쓴 게 가상하다는 것을 의미합니다. 
 그건 그렇고 AWS CloudTrail에서는 모든 API 작업이 CloudTrail에 ‘성공(successful)’ 또는 ‘거부됨(declined)’으로 기록되기 때문에 여러분이 시도한 작업을 기록할 것입니다. 
 인증 및 권한 부여. 
 이러한 차이를 이해한다면 AWS에서 여러분의 작업에 일부 중요한 권한과 보안을 추가할 수 있습니다. 
 
 - 데이터 암호화
 암호화를 논할 경우, 암호화와 AWS에 관해 알아야 할 사항들이 많이 있습니다. 
 이제 고민할 필요가 없는 것들 중 한 가지는 바로 암호화 프로세스에서 사용되는 알고리즘입니다. 
 그렇다고 이것이 중요하지 않다는 것은 아닙니다. 
 AWS는 사실 자사의 알고리즘에 대해 큰 자부심을 갖고 있습니다. 
AWS는 AES-256 알고리즘을 사용하며, 다만 이 알고리즘이 작동하는 방식을 이해하려면 암호학 박사쯤은 되어야 할 겁니다. 
 이는 지금 당장 알 수 있는 것보다 더 많은 시간이 걸립니다. 
 해당 엔진이 내부에서 작동하는 방식 그리고 암호화 키를 관리하는 방법을 각각 이해할 필요가 있습니다. 
 이제 몇 가지 기본 원리를 살펴보겠습니다. 
 데이터 키인 개인 암호 키를 생성하는 엔진이 하나 있다고 생각해 봅시다. 
 이 키는 데이터 객체에 적용되는 모든 암호 정보를 포함합니다. 
 해당 데이터 객체는 사진이나 동영상 혹은 사용자가 보호하려는 개별 문서일 수도 있습니다. 
 개별 개체에 대하여 암호 데이터를 적용하면 결국 암호화된 데이터를 얻게 됩니다. 
 이렇게 암호화된 데이터는 스토리지에 보관됩니다. 
 Amazon S3 또는 Amazon EBS든 어떤 것이든 간에 이는 문제가 되지 않습니다. 
 원래의 암호화 키가 없다면 해당 데이터를 읽을 방법도 없는 셈이 됩니다. 
 여기서 진짜 문제가 되는 것은 그 키를 관리하는 것입니다. 
 저희는 그 키를 잃고 싶지 않으며 키를 보호하고 싶어 합니다. 
 또 다른 고려 사항이 여기에 있습니다. 
 어떤 이유로든 간에 데이터 키가 손상될 경우 해당 키를 가진 사람이 모든 것을 읽을 수 있기 때문에 내 환경에 저장하는 모든 개체에 대해 똑같은 데이터 키를 사용하고 싶지는 않습니다. 
 그 대신, 내 엔진은 내 시스템으로 들어가는 모든 개체에 대해 고유한 하나의 키를 만들 것입니다. 
 50개 또는 100개의 문서가 있다면 큰 문제가 있는 것으로 생각되지는 않겠지만 여러분이 현재 보관 중인 수백만 개 또는 수십억 개의 개별 개체가 존재할 수 있는 소셜 미디어 사이트를 운영하고 있다고 상상해 보십시오. 
 이러한 경우, 그러한 모든 키를 관리해야 한다는 것은 주어진 시나리오에서 까다로운 부분에 해당됩니다. 
 AWS는 이러한 관리를 어떻게 수행할까요? AWS는 실제로 부동산에서 교훈을 얻고 있습니다. 
 여러분은 부동산 중개인이며 직업상 여러분이 사는 동네에서 수만 채 혹은 수십만 채의 주택에 접근할 권한을 확보할 수 있다고 상상해 보십시오. 
 그러한 모든 키를 포함한 키 체인을 보관하는 대신, 키를 주택의 현관문에 테이프로 붙일 것을 모든 집 주인에게 부탁하면 됩니다. 
 이것은 AWS가 수행하는 작업이라는 점만 제외하면 언뜻 보기에 그다지 안전한 것 같지는 않을 것입니다. 
 비유하자면 AWS는 해당 키를 받아 키를 집 현관문의 사서함에 테이프로 부착하며, 부동산 중개인이 갖고 있는 것은 사서함에 부착된 이 키 밖에 없는 셈이죠. 
 부동산 중개인들은 마스터 키를 갖고 있으며 중개인은 이 키를 사서함에 부착하는 한, 이 키를 획득해 집 현관문을 열 수 있습니다. 
 AWS는 바로 이와 같은 일을 하고 있는 것입니다. 
 여러분은 이 데이터 객체에서만 사용되는 개인 데이터 키를 갖고 있습니다. 
 AWS는 데이터 키를 확보하고 마스터 키를 갖고 있으며 원래의 데이터 키와 대조하여 마스터 키를 적용함으로써 암호화된 버전의 데이터 키를 얻을 수 있습니다. 
 그런 다음, 새로 암호화된 버전의 해당 개체는 암호화된 개체에 가상으로 부착되며 그 개체는 결국 스토리지에 보관됩니다. 
 이제 원래의 데이터 키를 보관하는 것에 대해서는 고민할 필요가 없습니다. 
 사실 개인적으로 이 키를 잊어버릴 수 있습니다. 
 암호화된 버전의 키를 획득했기 때문에 원래의 키는 사라진 것입니다. 
 마스터 키를 분실하지 않는 한, 마스터 키를 사용하면 암호화된 버전의 키를 해독하여 평문 버전의 키를 얻을 수 있으며, 이 키를 사용하면 무엇보다도 평문 데이터 객체로 복원할 수 있게 됩니다. 
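이 봉투 암호화(envelope encryption) 흐름은 AWS KMS로 다음과 같이 스케치할 수 있습니다. 키 별칭은 가정 값이고, 로컬 암호화에는 서드파티 cryptography 패키지를 사용한다고 가정했습니다.

import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # 서드파티 패키지 가정

kms = boto3.client('kms', region_name='us-east-1')
KEY_ID = 'alias/my-master-key'  # 가정: 이미 생성된 KMS 마스터 키 별칭

# 1) 객체마다 고유한 데이터 키 생성: 평문 키 + 마스터 키로 암호화된 키를 받습니다.
dk = kms.generate_data_key(KeyId=KEY_ID, KeySpec='AES_256')

# 2) 평문 키로 데이터를 암호화한 뒤, 평문 키는 잊어버립니다.
nonce = os.urandom(12)
ciphertext = AESGCM(dk['Plaintext']).encrypt(nonce, b'secret document', None)

# 3) 암호화된 데이터와 '암호화된 버전의 데이터 키'를 함께 저장합니다.
stored = {'nonce': nonce, 'data': ciphertext, 'wrapped_key': dk['CiphertextBlob']}

# 4) 복호화: 암호화된 데이터 키를 KMS로 보내 평문 키를 돌려받은 뒤 데이터를 복원합니다.
plain_key = kms.decrypt(CiphertextBlob=stored['wrapped_key'])['Plaintext']
original = AESGCM(plain_key).decrypt(stored['nonce'], stored['data'], None)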
 마스터 키를 사용해 수행하는 작업을 제외하면 문제가 해결됩니까? 이제 문제가 되는 것은 바로 마스터 키 관리입니다. 
 똑같은 솔루션을 사용해 보겠습니다. 
 또 다른 마스터 키를 받아 이 키를 암호화한 후 스토리지에 따로 보관합니다. 
이제 마스터 키를 암호화해야겠군요. 
마스터 키를 암호화하면 되겠습니다! 하지만 암호 기법을 논하다 보면 이렇게 결국 같은 문제로 되돌아오게 됩니다. 
고민해야 할 것은 언제나 마스터 키 관리입니다. 
 사람들은 이 문제에 대한 솔루션을 수십 년 동안 궁리해 왔습니다. 
 AWS는 암호 기법 적용 시 도움을 주기 위해 활용할 수 있는 솔루션들을 출시하고 있습니다. 
 저희는 클라우드 HSM(Hardware Security Module) 같은 하드웨어 솔루션 또는 서비스(예: KMS) 및 키 관리 시스템 등에 필요한 솔루션들을 제공합니다. 
 이러한 솔루션들은 AWS에서 기본적인 마스터 키 관리 문제를 해결할 방법들을 제공할 수 있습니다. 
 
 - EBS 볼륨 암호화 
 Amazon EBS(Elastic Block Store) 볼륨 암호화는 중요 데이터를 보호하는 데 도움을 주기 위해 AWS가 제공하는 기본 암호화 서비스 중 하나에 속하며, AWS KMS(Key Management Service)와 연동합니다. 
 EBS 볼륨 암호화를 사용할 경우, 모든 암호화 알고리즘이 서버 측에서 발생하여 암호화 프로세스에서 운영 체제에 필요한 여유 공간을 확보한다는 점이 돋보입니다. 
 EBS 볼륨 암호화의 운영 방식은 다음과 같습니다. 
 EBS 볼륨이 서버측 암호화를 요청할 경우, KMS는 각 EBS 볼륨에 대한 고유 데이터 키를 프로비저닝합니다. 
 AWS에서는 그러한 고유 데이터 키를 재사용하지 않습니다. 
즉, 암호화가 필요한 EBS 볼륨의 수가 몇 개이든 간에, 키 1개가 손상되더라도 해당 시스템의 다른 볼륨 데이터까지 노출될 가능성이 없습니다. 
 이 EBS 볼륨이 분리되어 별개의 EC2 인스턴스에 다시 연결될 경우에도 문제가 되지 않습니다. 
 이 볼륨을 몇 번 연결하든 이 볼륨을 제어하는 모든 EC2 인스턴스가 암호 정보에 대한 액세스 권한을 항상 갖는지 확인할 필요가 있습니다. 
 이 볼륨은 EC2 인스턴스에 연결될 때 암호화된 버전의 암호화 키를 EC2 인스턴스로 전달합니다. 
 그런 다음, 이 인스턴스는 해당 키를 해독해야 합니다. 
 EC2 인스턴스는 KMS 서비스에 이 키를 복호화할 것을 요청하며 암호화된 버전의 키를 KMS로 전달합니다. 
 KMS는 키를 복호화하는 데 사용된 마스터 키를 보관합니다. 
 이 마스터 키는 KMS 시스템을 떠나지 않습니다. 
 KMS는 권한을 사용해 마스터에 붙어 다니면서 개별 데이터 키를 복호화하며 이 키를 EC2 인스턴스로 다시 보냅니다. 
 이제 EC2 인스턴스는 복호화된 버전의 키에 대한 액세스 권한을 갖게 되며 EC2 인스턴스 및 EBS 볼륨을 드나드는 모든 데이터는 이러한 식으로 암호화 및 보관됩니다. 
EC2 인스턴스는 어느 시점에도 평문 버전의 암호화 키를 시스템 어디에도 기록하지 않습니다. 
평문 암호화 키를 메모리에 보관하는 것은 이 인스턴스밖에 없습니다. 
 이런 식으로 사용자의 데이터는 안전하게 보관 및 보호됩니다.
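참고로, 암호화된 EBS 볼륨을 만드는 것 자체는 다음과 같이 간단합니다. KMS 키 별칭은 가정 값이며, KmsKeyId를 생략하면 EBS 기본 KMS 키가 사용됩니다.

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

volume = ec2.create_volume(
    AvailabilityZone='us-east-1a',
    Size=100,                 # GiB
    VolumeType='gp3',
    Encrypted=True,           # 서버 측 암호화 활성화
    KmsKeyId='alias/my-ebs-key',  # 예시 별칭
)
print(volume['VolumeId'])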
 
 - 서브넷 게이트웨이 및 라우팅
 표준 3티어 아키텍처를 살펴보면 이해해야 할 점이 많습니다. 
 애플리케이션 서버로 이동한 후 마스터 대기 구성에서 실행 중인 데이터베이스와 통신하는 프런트엔드 웹 서버로 시작하는 애플리케이션을 구축하는 것을 고려하면서 네트워크 구성 요소들을 먼저 살펴보기로 하겠습니다. 
 먼저 물리적 인프라부터 살펴보겠습니다. 
 3티어 애플리케이션을 갖고 있음에도 불구하고 처음부터 항상 높은 가용성에 대해 생각한다는 개념입니다. 
 이는 여러 가용 영역(AZ)이 있는 AWS 리전에서 실행됨을 의미합니다. 
 모든 AWS 리전은 적어도 2개 이상의 가용 영역(AZ)을 갖고 있습니다. 
 물론 2개 이상의 가용 영역을 사용할 수 있는데 다만 여기서는 편의상 필요에 따라 더 많이 분산할 수 있는 가용 영역 AZ-1 및 AZ-2에 위치해 보겠습니다. 
 모든 것은 VPC 내부에 있습니다. 
 이번 시간에는 10.10/16의 CIDR(Classless Inter-Domain Routing) 블록을 VPC를 위한 네트워킹 구성 요소로 선택한 다음, 갖고 있는 자산을 여러 서브넷으로 분할했습니다. 
 VPC 내부에 있는 모든 것은 하나의 서브넷 내에 있으며, 해당 서브넷들은 모두 충돌할 수 없는 자체 CIDR 블록을 갖고 있습니다. 
 서브넷들은 고유한 것이어야 하며, 이들은 모두 코어 VPC 사이트 또는 블록 그 자체의 부분 집합에 속합니다. 
 여기서 할 일은 항상 자산을 보호한다는 목적으로 해당 부분 집합 내에서 통신을 처리하는 것입니다. 
이 아키텍처를 어떤 방식으로 사용하고 싶습니까? 인터넷상의 외부 사용자가 데이터베이스와 직접 통신하기를 원할까요? 아니요. 
 그렇지 않습니다. 
 AWS는 사용자들이 프런트 애플리케이션 웹 서버를 통과하기를 늘 원하고 있으며 거기서부터 애플리케이션 서버는 서로 통신합니다. 
 애플리케이션 서버는 데이터베이스와 통신합니다. 
 다만 외부 통신으로부터 데이터베이스를 보호함으로써 해킹, 침입 또는 남용으로부터 보호를 받게 됩니다. 
 애플리케이션의 GUI를 중심으로 배치할 방어선들은 많습니다. 
 먼저 네트워킹 구성 요소부터 살펴보겠습니다. 
 단일한 서브넷 내부에 모든 것을 배치하는 대신, 가용 영역(AZ)들을 퍼블릭 서브넷과 프라이빗 서브넷으로 분할합니다. 
 온프레미스 네트워킹을 수행했다면 서브넷에 대해 생각할 때 먼저 스위치와 라우터에 대해 생각해보고 모든 자산이 서로 통신하거나 혹은 동일한 스위치 및 동일한 라우터에서 통신을 한 후 다른 자산들이 개별 라우터에 위치하는지 확인합니다. 
 AWS에서 이것은 서브넷의 목적에 해당되지 않습니다. 
 AWS 네트워크가 내부에서 작동하는 방식을 전반적으로 다룬 논의는 있습니다. 
 다만 이것은 이번 시간에 논의할 주제에 속하지는 않습니다. 
 이에 관한 내용을 알고 싶다면 “A Day In the Life of a Billion Packets(수십억 개의 패킷이 가진 수명 중 하루)”라는 제목의 동영상을 시청해 보시기 바랍니다. 
이번 시간에는 퍼블릭 서브넷 내부의 모든 자산이 외부 인터넷과 직접 통신하게 될 자산이라는 사실에 대해 논의해보려고 합니다. 
 물론 여기서 직접 다루지는 않을 모든 자산은 프라이빗 서브넷 내에 위치할 것입니다. 
 여러 개의 프라이빗 서브넷을 가질 수 있을까요? 물론 가질 수 있습니다. 
 지침 목적상 도표는 단순화하고 있습니다. 
 퍼블릭(public) 즉, 공개적인 속성을 결정하는 것은 무엇입니까? 프라이빗(private) 즉, 비공개적인 속성을 결정하는 것은 무엇입니까? 그것은 바로 외부에 대한 접근(액세스)입니다. 
 VPC는 승인된 통신을 제외하면 트래픽의 입력 또는 출력을 허용하지 않는 가상 프라이빗 클라우드(virtual private cloud)로 정의됩니다. 
 트래픽 입력 또는 출력은 게이트웨이를 통해 이루어집니다. 
 여기서 살펴볼 첫 번째 게이트웨이는 IGW(internet gateway, 인터넷 게이트웨이)입니다. 
 이 게이트웨이를 활용하면 외부 인터넷의 통신을 VPC와 승인된 모든 서브넷에 전달할 수 있습니다. 
 이것은 라우팅(routing)에 대해 설명할 때 첫 번째 단계에 해당합니다. 
 VPC는 AWS 내 그 밖의 모든 VPC에 걸쳐 트래픽을 격리하도록 설계되어 있습니다. 
 VPC 내부의 자산은 VPC 그 자체와 그 외 모든 사용자 혹은 VPC의 나머지 구성 요소 간에 패킷을 전송하는 방법을 이미 알고 있습니다. 
 이는 라우팅 테이블을 통해 이루어집니다. 
 VPC를 콘솔 내에서 생성했는지 혹은 API를 통해 생성했는지 여부에 관계없이 기본 라우팅 테이블(RT) 개체는 자동으로 생성되었습니다. 
 이 개체는 1개의 라인만 갖습니다. 
 이 경우, 이 개체는 VPC 그 자체(10.10/16)의 CIDR 블록에 해당합니다. 
 그렇다면 그 대상은 어디에 있을까요? 대상은 로컬에 있습니다. 
 그 밖에 해야 할 일은 없지만 이는 서브넷과는 별도로 프라이빗 IP 주소와 관계없이 VPC 내부의 모든 자산은 패킷을 전송할 하나의 경로가 있음을 의미합니다. 
 이것은 패킷이 승인되었음을 의미하지는 않습니다. 
 본 강의의 뒷부분에서는 보안 그룹과 액세스 통제 목록(ACL)에 대해 살펴볼 것입니다. 
 다만 이것은 패킷이 최소한 다른 인스턴스에 도달하는 방법을 알고 있음을 의미합니다. 
 그러나 딱 거기까지만 알고 있는 것이죠. 
 퍼블릭 서브넷의 인스턴스에서 어떤 패킷이 IGW 밖으로 빠져나가 인터넷(interwebs)으로 들어가거나 혹은 그 반대로 이동할 수 있기를 원한다면 말입니다. 
 인터넷을 프라이빗 엔터티 또는 퍼블릭 인스턴스에 연결하고 싶다면 IGW를 통해 빠져나가는 경로를 제공하는 라우팅 라인(route line)을 추가해야 합니다. 
 해당 액세스 권한은 기본 라우팅 테이블에 바로 입력할 수 있습니다. 
 문제는 이것이 모든 인스턴스에 적용된다는 점이며, 수행 중인 작업의 전체 지점은 바로 여기서 GUI 중심을 보호하는 데 그 목적이 있습니다. 
 그 대신, AWS는 새로운 개체 즉, 퍼블릭 라우팅 테이블을 만듭니다. 
 사용자가 새 라우팅 테이블을 만들면 이 테이블은 기본 라우팅 테이블에 수록된 모든 내용을 자동으로 복사합니다. 
 그래서 새 라우팅 테이블은 10.10.0.0/16으로 시작합니다. 
 이 테이블은 항상 로컬로 유지됩니다. 
 여기서 우리가 하려는 작업은 퍼블릭 라우팅 테이블을 승인하는 것입니다. 
 이 테이블과 관련된 모든 자는 IGW에 대한 액세스 권한을 갖습니다. 
 우리가 대상으로 삼고 있는 IP는 무엇입니까? 우리는 전체 IPV 4 스펙트럼을 대상으로 삼고 있습니다. 
 그렇다면 표기법은 어떻게 인용하고 있을까요?  0.0.0.0/0은 IGW 밖으로 나갑니다. 
IGW는 물리적 서버가 아니라 네트워크 구성체(network construct)에 속합니다. 
IGW는 고가용성 네트워크 구성체로서, 10.10/16에 해당하지 않는 대상을 향한 모든 트래픽이 자동으로 해당 리전 밖으로 나가 DNS 검색이 가리키는 위치로 전달될 수 있도록 합니다. 
 마지막으로 해야 할 일은 퍼블릭 라우팅 테이블을 퍼블릭 서브넷과 연결하는 것입니다. 
 웹 인스턴스에서 발신되거나 웹 인스턴스를 대상으로 하는 모든 패킷은 10.10/16 밖으로 나간다면 이제 IGW 밖으로 이동하는 경로를 갖게 되며 필요한 것이 무엇인지를 확인한 후 반환하게 됩니다. 
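지금까지의 라우팅 구성을 boto3로 옮기면 대략 다음과 같습니다. 모든 ID는 설명을 위한 가정 값입니다.

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

VPC_ID = 'vpc-0123456789abcdef0'            # 예시 값
IGW_ID = 'igw-0123456789abcdef0'            # 예시 값
PUBLIC_SUBNET_ID = 'subnet-0123456789abcdef0'  # 예시 값

# 새 라우팅 테이블을 만들면 로컬 라인(예: 10.10.0.0/16 -> local)은 자동으로 포함됩니다.
rt = ec2.create_route_table(VpcId=VPC_ID)['RouteTable']

# 0.0.0.0/0 -> IGW 라우팅 라인을 추가해 퍼블릭 라우팅 테이블로 만듭니다.
ec2.create_route(RouteTableId=rt['RouteTableId'],
                 DestinationCidrBlock='0.0.0.0/0', GatewayId=IGW_ID)

# 마지막으로 퍼블릭 서브넷과 연결합니다.
ec2.associate_route_table(RouteTableId=rt['RouteTableId'], SubnetId=PUBLIC_SUBNET_ID)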
 0.0.0.0/0을 프라이빗 서브넷에 추가하지는 않았기 때문에 누군가 이 인스턴스의 IP 주소를 알고 있더라도 패킷이 이 인스턴스를 통과하려는 모든 시도는 즉시 중단됩니다. 
 패킷이 이 인스턴스를 건드리지 못하는 이유는 IGW가 이 서브넷과 연결될 수 있도록 허용하는 라우팅 라인이 없는데다 건드릴 수 없는 것들을 해킹할 수는 없기 때문입니다. 
 이만하면 훌륭하군요. 
 다만 또 다른 문제가 있습니다. 
 즉, 어떤 지점에서는 이 자산이 인터넷으로 빠져나가 하나의 패치를 선택할 수 있게끔 하고 싶을 때가 있는 것이죠. 
 예를 들면, 이것이 Oracle 데이터베이스라고 할 때 나는 oracle.com 사이트로 이동해 Oracle의 최신 버전을 다운로드하여 내 시스템을 업데이트해야 합니다. 
 시스템 업데이트는 바로 이런 식으로 이루어집니다. 
 인스턴스는 IGW로 이동을 시도하라는 요청을 할 것입니다. 
 DNS 검색 요청을 할 경우, oracle.com 사이트로 이동하면 DNS는 54.<주소>에 있다고 알려줄 것입니다. 
(이 주소는 설명을 위해 임의로 지어낸 것입니다.) 
 어떤 지점에서 인스턴스는 ‘좋습니다. 이제 패킷을 54.<주소>로 전송하십시오’라고 알리게 됩니다. 
내 라우팅 테이블은 무엇일까요? 이 인스턴스는 여전히 기본 라우팅 테이블과 연결되어 있습니다. 
54.<주소>가 10.10/16의 일부인가요? 아니요. 
그렇지 않습니다. 
 다른 선택의 여지가 없다면 패킷은 제거됩니다. 
 패킷은 외부로 통신할 수 없습니다. 
 이러한 일이 발생하려면 이 아키텍처에 몇 가지 다른 요소들을 추가해야 합니다. 
퍼블릭 라우팅 테이블을 연결할 수도 있습니다. 
이때 문제가 되는 것은 자산이 노출된다는 점이며, 이 서브넷을 퍼블릭 상태로 공개함으로써 시간을 들여 마련한 모든 보안 프로토콜을 위반하게 된다는 점입니다. 
서브넷의 이름은 여전히 프라이빗(private)으로 불리겠지만 0.0.0.0/0으로 IGW 액세스 권한을 갖고 있기 때문에 사실상 퍼블릭 서브넷이나 다름이 없습니다. 
 이름은 관련이 없습니다. 
 이것의 이름은 사람, 곰, 돼지 등으로 부를 수 있습니다. 
 여러분이 그 이름을 어떻게 부르든 상관은 없습니다. 
이 서브넷이 0.0.0.0/0으로 IGW 액세스 권한을 갖고 있다면 퍼블릭 서브넷에 해당됩니다. 
 AWS는 이 서브넷을 어떻게 보호할까요? 먼저 이 시나리오에서 해결하려는 2가지 통신 경로를 살펴보겠습니다. 
우리가 해결하려는 첫 번째 경로는 외부에서의 통신입니다. 
 그 이유는 여러분이 갖고 있는 것이 패치를 필요로 하는지 여부를 확인하기 위해 우선적으로 데이터베이스에 대한 액세스 권한을 획득하려는 데이터베이스 관리자(DBA)이기 때문입니다. 
 저는 외부의 Starbucks 또는 그 외 다른 곳에서 있는 DBA와 일하고 있습니다. 
 DBA는 로그인을 시도해 데이터베이스에 연결하려고 합니다. 
 지금 당장에는 데이터베이스가 DBA의 패킷을 허용할 방법은 없기 때문에 이렇게 하려면 아키텍처에 또 다른 개체를 추가해야 합니다. 
 지금까지 실행된 모든 보안 메커니즘을 위반하는 퍼블릭 라우팅 테이블을 다시 한 번 할당할 수도 있습니다. 
 이것은 접속 호스트(bastion host)라 불리는 새 인스턴스에 해당되며, 접속 호스트 또는 점프 박스(jump box)의 전체 개념은 DBA 또는 시스템 관리자가 퍼블릭 인스턴스에 로그인하기 위한 장소(landing place)를 제공하는 것입니다. 
 DBA 또는 시스템 관리자는 퍼블릭 인스턴스에 로그인한 상태에서 로그인 권한이 있는 그 밖의 모든 인스턴스에 대해서도 로그인을 실행할 수 있는데 그 이유는 (그들이) 이미 VPC 내에 있기 때문입니다. 
 이제 접속 호스트는 승인되지 않은 액세스를 방지하기 위해 그 주위에 모든 종류의 보안 그룹과 액세스 통제 목록(ACL) 보호 기능을 갖게 됩니다. 
 다만 그러한 보안 그룹과 ACL 보호 기능이 열려 있다고 가정한다면 DBA는 접속 호스트에 로그인할 수 있으며 접속 호스트에 대한 루트 액세스 권한을 획득하게 되는데, 이 지점에서 DBA는 접속 호스트로부터 데이터베이스 또는 원하는 그 밖의 위치로 직접 연결할 수 있습니다. 
 그 이유는 퍼블릭 서브넷이 로컬 라인을 갖고 있기 때문이며 이는 패킷이 로컬 라인에서 내부의 다른 위치까지 그 경로를 인지하고 있음을 의미합니다. 
 이것은 승인된 것일까요? 아니면 승인되지 않은 것일까요? 그것은 다른 대화에 속합니다. 
 그럼 승인된 것으로 가정해 보겠습니다. 
 이제 DBA는 데이터베이스상의 한 루트(root)로서 로그인을 할 수 있으며 예를 들면 oracle.com 사이트로 이동하기 위해 필요한 모든 검사를 수행할 수 있을 뿐만 아니라 저장소가 있는 모든 위치에서 최신 패치를 다운로드할 수 있습니다. 
 접속 서버(bastion server)를 통한 연결이 되더라도 인스턴스에서 패킷을 시작해 IGW 밖으로 복귀할 수 있도록 액세스 권한이 DBA에 부여되지는 않습니다. 
패킷이 들어온 경로를 그대로 거슬러 나갈 수 있는 것은 아니기 때문입니다. 
 따라서 프라이빗 서브넷에서 시작된 트래픽을 IGW 밖으로 다시 내보낼 수 있도록 허용하려면 또 다른 개체가 필요합니다. 
 다시 한 번 뭔가를 추가해야 합니다. 
 그래서 한 가지 구식 수법을 활용합니다. 
 그것은 바로 네트워크 주소 변환 서버(network address translation server)를 의미하는 NAT 서버입니다. 
 이 서버는 AWS가 새로 고안한 것이 아니며 작동 원리는 매우 간단합니다. 
 NAT 서버는 단순히 전체 인터넷인 것처럼 작동합니다. 
 로그아웃을 시도 중인 데이터베이스는 NAT 서버를 대상으로 하면서 이 서버가 전체 인터넷인 것으로 판단합니다. 
 NAT 서버는 기본적으로 ‘예, 허용합니다. 
 저는(NAT 서버는) 귀하가 찾고 계신 모든 주소에 해당됩니다’라는 메시지를 전달한 후, 패킷을 리디렉션합니다. 이러한 동작이 이루어지려면 새 라우팅 테이블(즉, 프라이빗 라우팅 테이블)이 필요합니다. 
 늘 그렇듯이 AWS는 기본 라우팅 테이블의 사본부터 먼저 만드는데, 이는 기본 테이블에서 모든 것을 복사한다는 것을 의미하며 그래서 내 로컬 라인인 10.10.0.0/16은 일반 로컬 라인에 해당됩니다. 
새로 추가하는 라인은 0.0.0.0/0을 처리하는 라인입니다. 
다만 IGW 밖으로 내보내지는 않습니다. 
그 대신, 이 경우에는 0.0.0.0/0이 NAT 인스턴스 ID를 대상으로 하게 됩니다. 
 우리는 단지 이 특정 인스턴스를 향해 외부로 이동하도록 되어 있는 모든 트래픽을 대상으로 한 후, 프라이빗 라우팅 테이블을 2개의 프라이빗 서브넷에 모두 연결합니다. 
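이 구성을 boto3로 표현하면 다음과 같습니다. 관리형 NAT 게이트웨이가 아니라 본문에서 설명한 NAT 인스턴스 방식 기준이며, ID들은 모두 가정 값입니다.

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

NAT_INSTANCE_ID = 'i-0123456789abcdef0'      # 예시 값
PRIVATE_RT_ID = 'rtb-0123456789abcdef0'      # 예시 값
PRIVATE_SUBNET_IDS = ['subnet-0aaaaaaaaaaaaaaa0',
                      'subnet-0bbbbbbbbbbbbbbb0']  # 예시 값

# 원본/대상 확인을 꺼야 NAT 인스턴스가 자신이 최종 목적지가 아닌 패킷도 전달할 수 있습니다.
ec2.modify_instance_attribute(InstanceId=NAT_INSTANCE_ID,
                              SourceDestCheck={'Value': False})

# 프라이빗 라우팅 테이블의 0.0.0.0/0을 NAT 인스턴스로 향하게 합니다.
ec2.create_route(RouteTableId=PRIVATE_RT_ID,
                 DestinationCidrBlock='0.0.0.0/0', InstanceId=NAT_INSTANCE_ID)

# 두 프라이빗 서브넷 모두에 연결합니다.
for subnet_id in PRIVATE_SUBNET_IDS:
    ec2.associate_route_table(RouteTableId=PRIVATE_RT_ID, SubnetId=subnet_id)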
 패킷 트래픽은 다음과 같은 방식으로 이동합니다. 
 즉, DBA는 데이터베이스에서 시작해 루트에서 로그인됩니다. 
 이제 DBA는 패킷 송신을 시작하려 합니다. 
 따라서 DNS 검색에서는 54.<주소>로 이동 중으로 나타납니다. 
 이제 패킷은 중단되며 10.10 부분이 아닌, 라우팅 테이블 54.<주소>를 검사합니다. 
또 다른 옵션이 있습니까? 0.0.0.0/0이 있군요. 
거기에 정확히 일치하므로, “당신은 전체 인터넷(entire internet)임에 틀림없다”고 판단합니다. 
 그렇다면 NAT는 ‘예, 저는 54.<주소>입니다’라는 메시지를 알립니다. 
 그런데 이 기능이 작동하려면 원본/대상 확인 기능을 꺼야 합니다. 
 이 기능은 기본적으로 ‘켜짐(on)’으로 설정되어 있으며 여러분의 자산을 “중간자(man in the middle)”형 공격으로부터 보호하는 기능이기 때문에 매우 중요합니다. 
 이 경우에는 해당 기능이 꺼져 있기 때문에 NAT는 이 기능이 켜진 것처럼 가장하여 ‘저는 당신의 요구에 따라 54.<주소>에 있어야 할 아무개입니다’라는 메시지를 알립니다. 
 NAT는 실제로 그렇지 않으면서도 그런 척 하는 것입니다. 
 그런 다음, 패킷을 수신하여 다시 로드합니다. 
 NAT는 IGW에 액세스했는데 그 이유는 퍼블릭 테이블에 액세스할 권한이 있기 때문입니다. 
 NAT는 패킷을 밖으로 전송하고 필요한 패킷을 모아서 후퇴시키며 다시 패키지로 만들어 되돌려 보냅니다. 
 그러면 해당 프로세스가 완료된 것입니다. 
 프라이빗 라우팅 테이블이 NAT를 대상으로 하도록 허용하면 밖으로 나갈 수 있으며 혹은 내 퍼블릭 라우팅 테이블은 NAT 액세스 권한을 사용할 필요 없이 직접 나갈 수 있습니다. 
 어떤 경우든 간에 시작해야 할 일은 거기에 있습니다. 
 그것을 허용할까요? 하지 말까요? 이는 권한 문제에 속합니다. 
 그것은 또 다른 대화에 속하며 다만 지금은 적어도 패킷을 밖으로 내보냅니다. 
 제가 언급하고 싶은 퍼즐 조각이 하나 더 있습니다. 
 그것은 하이브리드 옵션들을 다루고 있습니다. 
 내 DBA가 Starbucks에서 연결할 수 있도록 허용하고 싶지 않다면 어떨까요? DBA가 프라이빗 데이터 센터에 로그인했거나 혹은 VPN 터널을 통해 로그인한 경우에만 연결할 수 있도록 요구하고 싶다면 어떨까요? 이러한 경우에는 내 프라이빗 데이터 센터가 있습니다. 
 이 경우, 내 프라이빗 데이터 센터가 해당되는데, 저는 프라이빗 데이터 센터에서 로그인할 것을 내 DBA에 요구할 것이며 이제 VPC에 VPN 연결을 구성할 것입니다. 
 IGW를 경유하지는 않겠습니다. 
 그 대신, VGW라 불리는 새 게이트웨이 또는 가상 프라이빗 게이트웨이를 사용할 것입니다. 
 이제 직접 통신이 필요한 서브넷에 라우팅 테이블 라인을 추가하면 액세스 권한을 설정할 수 있습니다. 
 아마도 내 DBA가 프라이빗 데이터 센터에서 들어오면 데이터베이스에 직접 연결하도록 허용할 필요가 있을 것 같습니다. 
그래서 내 프라이빗 데이터 센터가 172.16.0.0/16의 IP 체계를 사용한다면 해당 라인을 추가하기만 하면 됩니다. 
 이제 어디로 가고 있는 걸까요? VGW 액세스입니다. 
 여기서는 DBA가 프라이빗 데이터 센터에서 직접 통신을 시작한 경우에만 접속 서버를 경유할 필요 없이 해당 인스턴스와 직접 통신할 수 있게 됩니다. 
 그것은 모범 사례 요소에 해당되나요? 모든 구현은 고유한 것이 됩니다. 
 여러분은 각자의 선택을 하게 될 것이며 다만 이러한 선택은 AWS VPC(Virtual Private Cloud)에서 네트워킹 옵션을 구축하기 시작할 때 고려할 필요가 있는 일련의 기본 도구가 됩니다. 
 

 

 


 - 강의 요약 
 안녕하세요. 
 저는 AWS(Amazon Web Services) 교육 및 자격증 팀 담당자 Jody Soeiro de Faria입니다. 
 여러분은 AWS 클라우드 실무자 에센셜 과정을 완료했습니다. 
 이제 지금까지 학습한 내용을 간략히 정리하면 다음과 같습니다. 
 · 섹션 1에서는 클라우드의 가치와 AWS 클라우드 채택에 따른 이점들을 이해하였습니다. 
 · 섹션 2에서는 몇몇 AWS 범주와 서비스, 서비스의 기능 및 서비스 사용 시기와 방법에 대해 배웠습니다. 
 · 섹션 3에서는 AWS가 클라우드 보안에 접근하는 방법에 대해 알아보았으며 AWS 공동 책임 모델, AWS 액세스 제어 및 관리, AWS 보안 및 규정 준수 프로그램은 물론, AWS 클라우드 보안 옵션을 보다 잘 이해할 수 있도록 도움을 주기 위해 제공되는 리소스에 대해서도 살펴보았습니다. 
 · 섹션 4에서는 Well Architected Framework를 포함하는 AWS 아키텍처 설계와 웹 호스팅의 내결함성 및 고가용성을 위한 참조 아키텍처에 대해 살펴보았습니다. 
 · 섹션 5에서는 요금 및 지원에 대해 알아보았습니다. 
 TCO 계산기 및 AWS 지원 계획뿐만 아니라 몇몇 주요 서비스에 대한 요금 및 요금 요소의 기본적인 사항들도 살펴보았습니다. 
 · 보너스 자료에서는 본 과정의 핵심 요소에 관한 추가적인 세부 정보를 제공하는 몇몇 동영상들을 시청했습니다. 
 이번 과정에서 즐거운 학습 경험을 얻었기를 바랍니다. 
 본 과정에 관한 의견은 언제든지 환영합니다. 
 사후 교육 평가를 꼭 완료하시기 바랍니다. 
또한 의견이 있으시면 이메일 AWS-course-feedback@Amazon.com으로 보내시면 됩니다. 
 저는 AWS(Amazon Web Services) 교육 및 자격증 팀 담당자 Jody Soeiro de Faria였습니다.

AWS Cloud Practitioner Essentials (Digital) (Korean) - 01

 

AWS training and certification: www.aws.training
요약본 : https://github.com/yoonhok524/aws-certifications/tree/master/0.%20Cloud%20Practitioner 

 


정보

설명

AWS 클라우드 실무자 에센셜 과정은 특정 기술적 역할과 상관없이 AWS 클라우드 전반에 대한 이해를 원하는 사람을 위해 마련된 것입니다. 이 과정에서는 클라우드 개념, AWS 서비스, 보안, 아키텍처, 요금 및 지원의 세부 개요를 제공합니다.

과정 목표

이 과정을 이수하면 수강생은 다음을 수행할 수 있습니다.

  • AWS 클라우드 개념 및 기본 글로벌 인프라를 정의
  • AWS 플랫폼에서 제공되는 주요 서비스 및 일반적 사용 사례(예: 컴퓨팅, 분석 등)를 설명
  • 기본 AWS 클라우드 아키텍처 원리를 설명
  • AWS 플랫폼 및 공동 보안 모델의 기본 보안 및 규정 준수 측면을 설명
  • 청구, 계정 관리 및 요금 모델을 정의
  • 설명서 또는 기술 지원 출처를 파악(예: 백서, 지원 티켓 등)
  • AWS 클라우드 가치 제안을 설명
  • AWS 클라우드에서의 배포 및 운영에 대한 기본/핵심 특징을 설명

학습 대상

본 교육 과정의 대상은 다음과 같습니다.

  • 영업
  • 법무
  • 마케팅
  • 비즈니스 분석가
  • 프로젝트 관리자
  • 최고 경험 책임자
  • AWS 아카데미 학생
  • 기타 IT 관련 전문가

사전 조건

이 과정은 입문 수준의 과정이지만 수강생이 다음을 보유하고 있다는 가정하에 진행됩니다.

  • 일반 IT 기술 지식
  • 일반 IT 비즈니스 지식

전달 방법

이 과정은 일련의 짧은 과정 모듈을 통해 제공됩니다.

기간
7시간

과정 개요

이 과정에서는 다음 주제를 다룹니다.

AWS 클라우드 실무자 에센셜: 소개

5분

AWS 클라우드 개념 에센셜

30분

  • 클라우드 소개
  • AWS 클라우드 소개

AWS 핵심 서비스 에센셜

3시간

  • 서비스 및 카테고리 개요
  • AWS 글로벌 인프라 소개
  • Amazon VPC 소개
  • 보안 그룹 소개
  • 컴퓨팅 서비스 소개
  • AWS 스토리지 서비스 소개
  • AWS 데이터베이스 솔루션 소개

AWS 보안 에센셜

1시간

  • AWS 보안 소개
  • AWS 공동 책임 모델
  • AWS 액세스 제어 및 관리
  • AWS 보안 규정 준수 프로그램
  • AWS 보안 리소스

AWS 아키텍처 설계 에센셜

45분

  • Well-Architected 프레임워크 소개
  • 참조 아키텍처: 내결함성 및 고가용성
  • 참조 아키텍처: 웹 호스팅

AWS 요금 및 지원 에센셜

45분

  • 요금 기본 정보
  • 요금 내역
  • TCO 계산기 개요
  • AWS Support 플랜 개요

AWS 클라우드 실무자 에센셜: 추가 자료

1시간

  • AWS 클라우드 실무자 에센셜 콘텐츠 모듈에서 배운 개념을 강화하는 보충 동영상

AWS 클라우드 실무자 에센셜: 과정 요약

5분

 

 

1. 과정소개

안녕하세요, Amazon Web Services 교육 및 자격증의 Jody Soeiro de Faria입니다. 
 클라우드 전문가 에센셜에 참가하신 것을 환영합니다. 
 이 과정은 특정 기술적 역할과 상관없이 AWS 클라우드 전반에 대한 이해를 원하는 사람을 위해 마련된 것입니다. 
 이 과정에서는 AWS 서비스, 보안, 아키텍처, 요금 및 지원의 세부 개요를 설명합니다. 
 이 과정을 성공적으로 수료하면 다음과 같은 일을 할 수 있습니다. 
· AWS 클라우드 개념 및 기본 글로벌 인프라를 정의
 · AWS 플랫폼에서 제공되는 주요 서비스 및 일반적 사용 사례를 설명
 · 기본 AWS 클라우드 아키텍처 원리를 설명
 · AWS 플랫폼 및 공동 보안 모델의 기본 보안 및 규정 준수 측면을 설명
 · 청구, 계정 관리 및 요금 모델을 정의 
 · 설명서 또는 기술 지원의 소스를 식별 
 · AWS 클라우드 가치 제안을 설명 
 · AWS 클라우드에서의 배포 및 운영의 핵심적 기본 특징을 설명
 이 과정은 초급 과정이지만 학습자가 전반적인 IT 기술 지식 및 IT 비즈니스 지식을 보유한 것으로 가정합니다. 
 이 과정은 일련의 짧은 동영상 모듈 및 지식 평가를 통해 전달됩니다. 
 이 과정을 이수하려면 약 6시간이 걸립니다. 
 클라우드 전문가 에센셜은 이 개요, 5개 콘텐츠 모듈, 보너스 자료 및 과정 요약으로 구성됩니다. 
 · 섹션 1에서는 AWS 클라우드 개념을 설명합니다. 
 이 섹션에는 클라우드에 대한 소개와 AWS 클라우드에 대한 소개가 포함됩니다. 
 · 섹션 2에서는 AWS 핵심 서비스를 설명합니다. 
 이 섹션은 서비스 및 범주의 개요, AWS 글로벌 인프라, Amazon VPC, 보안 그룹, Amazon EC2, Amazon Elastic Block Store, Amazon S3, AWS 데이터베이스 솔루션에 대한 소개로 구성됩니다. 
 · 섹션 3에서는 AWS 보안을 설명합니다. 
 이 섹션에는 AWS 보안, AWS 공동 책임 모델, AWS 액세스 제어 및 관리, AWS 보안 규정 준수 프로그램, AWS 보안 리소스에 대한 소개가 포함됩니다. 
 · 섹션 4에서는 AWS 아키텍처 설계를 설명합니다. 
 이 섹션에는 Well Architected Framework, 참조 아키텍처, 내결함성 및 고가용성, 참조 아키텍처 웹 호스팅에 대한 소개가 포함됩니다. 
 · 섹션 5에서는 AWS 요금 및 지원을 설명합니다. 
 이 섹션에는 요금 기본 사항, Amazon EC2, Amazon S3, Amazon EBS, Amazon RDS 및 Amazon CloudFront의 요금 내역, TCO Calculator 개요, AWS 지원 플랜 개요가 포함됩니다. 
 · 이 과정에 포함된 보너스 자료에는 이 과정을 통해 학습한 내용을 강화해 주는 여러 보충 동영상이 포함됩니다. 
 즐거운 학습 경험이 되기를 바랍니다. 
 Amazon Web Services 교육 및 자격증의 Jody Soeiro de Faria였습니다. 


2. 클라우드 개념

안녕하세요, Amazon Web Services 교육 및 자격증의  Jody Soeiro de Faria입니다. 
오늘은 AWS 클라우드에 대해 소개합니다. 

이 과정은 클라우드 전문가가 되기 위한  과정이므로 클라우드 컴퓨팅의 정의부터 시작해 보겠습니다. 

" 클라우드 컴퓨팅"이란 인터넷을 통해 IT 리소스와  애플리케이션을 온디맨드로 제공하는 서비스를 말하며 요금은  사용한 만큼만 청구됩니다. 
클라우드 컴퓨팅 이전에는 이론적으로 추측한 최대 피크를 기반으로 용량을 프로비저닝해야 했습니다. 
  예측한 최대 피크에 미치지 않거나 이를 초과할 경우  고가에 구입한 리소스가 유휴 상태를 유지하거나 용량 부족 때문에 수요를 충족하지 못하게 될 수 있습니다. 
  설치 공간, 전력, 냉방 등의 간접비는 덤입니다. 
하지만 AWS를 사용할 경우 서버, 데이터베이스, 스토리지, 상위 수준 애플리케이션 구성 요소를 몇 초 만에 시작할 수 있습니다. 
 이들을 일시적이고 처분 가능한 리소스로 취급할 수 있으므로 고정적이고 유한한 IT 인프라라는  비융통성과 제약에서 벗어날 수 있습니다. 
 AWS 클라우드의  장점을 활용함으로써 관리, 테스트, 안정성, 용량 계획에  보다 민첩하고 효율적으로 접근할 수 있습니다. 
기업들이  클라우드로 마이그레이션하는 주된 이유 한 가지는 향상된  민첩성입니다. 
 민첩성에는 다음 세 가지 요소가 영향을  미칩니다. 
· 속도
· 실험
· 혁신의 문화
어떻게 이러한 요소가 조직이 클라우드 컴퓨팅의 장점을 활용하여 민첩성을 개선할 수 있게 하는지 보다 자세히 살펴봅시다. 
AWS 시설은  전 세계에 분포하고 있으므로 몇 분 만에 글로벌  확장이 가능하도록 지원할 수 있습니다. 
 고객이 있는 곳에 자체 데이터 센터를 두는 것은 비용 면에서 불가능할 수 있습니다. 
 하지만 AWS를 사용하면 막대한 투자를  할 필요 없이 이점을 활용할 수 있습니다. 
 클라우드  컴퓨팅에서는 새 IT 리소스를 클릭 몇 번으로 사용할  수 있습니다. 
 즉 개발자는 해당 리소스를 몇 주가  아니라 단 몇 분 만에 사용할 수 있으므로 조직의 민첩성이 극적으로 향상됩니다. 
클라우드 컴퓨팅의 민첩성 이점 또 한 가지는 보다 자주 실험을 할 수 있다는 것입니다. 
 AWS를 사용할 경우 클라우드에서 코드로서의  운영이 가능하며 안전하게 실험하고, 운영 절차를 개발하고, 장애를 대비해 연습할 수 있습니다. 
 예를 들어 AWS를 사용하면,
 · 몇 분 만에 실험을 위한 서버 시동
 ·  서버를 반환하거나 다른 실험을 위해 재사용
 가상 리소스 및  자동화 가능한 리소스를 통해, 여러 유형의 인스턴스, 스토리지 또는 구성을 사용하여 비교 테스트를 신속하게 수행할 수 있습니다. 
 AWS CloudFormation을 사용하면  일관적이고 템플릿화된 샌드박스 개발, 테스트 및 프로덕션  환경을 보유하고 운영 제어 수준을 지속적으로 향상시킬  수 있습니다. 
 방금 언급한 대로, 클라우드 컴퓨팅에서는  낮은 비용과 위험으로 신속한 실험이 가능합니다. 
 이는  IT에 매우 중요합니다. 
 보다 빈번한 실험을 통해 새로운 구성과 혁신을 탐색할 수 있기 때문입니다. 
AWS가 클라우드 컴퓨팅의 민첩성을 어떻게 활용하는지 이해하기 위해서는  컴퓨팅 리소스의 탄력성, 확장성 및 안정성을 지원하는 AWS  인프라를 살펴보아야 합니다. 
AWS 클라우드 인프라는 리전  및 가용 영역("AZ")을 중심으로 구축되어 있습니다. 
  리전이란 전 세계에 산재한 복수의 가용 영역을 포함하는 물리적 장소입니다. 
 가용 영역은 하나 이상의 개별  데이터 센터로 구성되는데, 각 데이터 센터는 별도의  시설에 자리하며 예비 전력, 네트워킹 및 연결 수단을  갖추고 있습니다. 
 가용 영역은 프로덕션 애플리케이션 및  데이터베이스를 운영할 수 있는 기능을 제공합니다. 
 이러한  애플리케이션 및 데이터베이스는 단일 데이터 센터에서 가능한 것보다 더 높은 수준의 가용성, 내결함성 및 확장성을  제공합니다. 
 내결함성은 시스템 구성 요소에 장애가  발생하더라도 시스템이 작동 가능 상태를 유지하는 능력을  의미합니다. 
 이는 애플리케이션 구성 요소의 기본적인  중복성으로 볼 수 있습니다. 
 고가용성은 사용자가 개입할  필요 없이 시스템이 항상 작동하고 액세스 가능하며 가동 중지가 최소화되도록 해줍니다. 
AWS 클라우드를 사용하면 확장 가능하고 안정적이며 안전한 글로벌 인프라의 이점을  활용하여 요구 사항을 최대한 충족할 수 있습니다. 
민첩성과  관련하여 탄력성도 클라우드 컴퓨팅에서 강력한 장점입니다. 
  탄력성이란 간편하게 컴퓨팅 리소스의 규모를 확장 또는  축소할 수 있다는 것을 뜻하며 사용한 실제 리소스에  대해서만 지불하면 됩니다. 
 AWS의 탄력적 특성을 다음과  같이 활용할 수 있습니다. 
· 새로운 애플리케이션을 신속하게 배포
· 워크로드가 커지면 즉시 확장
· 더는 필요하지  않은 리소스는 즉시 가동 중지
· 축소하면 인프라 비용을 지불하지 않음
필요한 가상 서버가 한 개이든 수천  개든, 컴퓨팅 리소스가 필요한 시간이 몇 시간이든  온종일이든 상관없이 AWS는 고객의 필요를 충족하기 위한  탄력적인 인프라를 제공합니다. 
 AWS의 주요 이점 중 한 가지는 고객이 원하는 속도로 서비스를 사용할 수  있다는 점입니다. 
 AWS를 사용하는 고객들은 계절적 수요  변동에 맞춰 서비스 소비를 확대, 축소 및 조정하거나  신규 서비스 또는 제품을 출시하거나 새로운 전략적  방향을 쉽게 수용할 수 있습니다. 
AWS는 높은 가용성과  신뢰성을 갖춘 확장 가능한 클라우드 컴퓨팅 플랫폼을  제공하며, 이를 통해 고객들이 다양한 애플리케이션을 실행할 수 있는 도구를 제공합니다. 
AWS 도구인 Auto Scaling  및 Elastic Load Balancing을 사용하여 애플리케이션의 규모를 수요에 맞춰 확장하거나 축소할 수 있습니다. 
 Amazon의 거대한 인프라의 힘을 빌어, 필요할 때면 언제든  컴퓨팅 및 스토리지 리소스에 액세스할 수 있습니다. 
AWS를  이용하면 전 세계에 분포된 여러 리전에서 용이하게  시스템을 배포할 수 있으며, 이와 동시에 최소한의  비용으로 최종 고객에게 낮은 지연 시간과 향상된 환경을 제공할 수 있습니다. 
 AWS가 구현한 규모의 효율성  덕분에 고객은 여러 번의 구입 주기와 많은 비용이  소요되는 평가를 거칠 필요 없이 일관되게 혁신적인  서비스와 첨단 기술을 사용할 수 있습니다. 
 AWS는 거의 모든 워크로드를 지원할 수 있습니다. 
 이러한 혁신  수준 덕분에 고객들은 최신 기술에 지속적으로 액세스할 수  있습니다. 
또한 고객이 데이터가 실제로 위치하는 리전에  대한 모든 제어권 및 소유권을 보유하여 지역별 규정  준수 및 데이터 상주 요구 사항을 용이하게 충족할 수  있다는 것도 알아둘 필요가 있습니다. 
클라우드 컴퓨팅  이전에는 인프라 보안 감사가 흔히 정기적으로 실시되는  수동 방식의 프로세스였습니다. 
 하지만 AWS 클라우드는 고객의 IT 리소스에 대한 구성 변경 사항을 지속적으로  모니터링할 수 있는 거버넌스 기능을 제공합니다. 
 또한  AWS는 가장 엄격한 요건도 충족할 수 있도록 시설,  네트워크, 소프트웨어, 비즈니스 프로세스에 걸쳐 업계 최고의 기능을 제공합니다. 
 세계적인 수준의 강력한 보안을  자랑하는 AWS 데이터 센터는 최첨단 전자식 감시 시스템과 멀티 팩터 액세스 제어 시스템을 사용합니다. 
 데이터  센터에는 숙련된 보안 경비가 연중무휴 대기하며 액세스에  대한 권한은 최소한의 특권을 기준으로 엄격하게 부여됩니다. 
 환경 시스템은 환경 파괴가 운영에 미치는 영향을  최소화하도록 설계되었습니다. 
 여러 지리적 리전 및 가용  영역을 사용하면 자연재해나 시스템 장애 등 대부분 장애 모드에서도 시스템을 유지할 수 있습니다. 
AWS 자산은  프로그래밍 가능한 리소스이므로, 인프라 설계 시 고객의  보안 정책을 수립하여 포함시킬 수 있습니다. 
 AWS 사용하면  고객이 대부분의 비즈니스 요구를 해결할 수 있는  안정적인 고성능 솔루션을 개발하도록 도울 수 있습니다. 
  전 세계를 대상으로 미디어 서비스를 제공하든, 널리  분산된 인력의 의료 기기를 관리하든 관계 없이 AWS는  고객에게 신속하고 저렴한 비용으로 솔루션을 구현할 수  있는 도구를 제공합니다. 
 AWS에서의 안정성은 시스템이 인프라 또는 서비스 장애를 복구하는 능력으로 정의됩니다. 
 또한 수요에 따라 컴퓨팅 리소스를 탄력적으로 확보하고 중단 사태를 완화할 수 있는 능력에 초점을 맞춥니다. 
안정성을 실현하기 위해서는 아키텍처 및 시스템이 수요 변동을  처리하고 장애를 감지하여 자동으로 처리할 수 있는 잘 계획된 토대를 기반으로 해야 합니다. 
AWS를 사용하는  조직들은 하드웨어 수요 예측의 불확실성을 줄여서 향상된  유연성과 용량을 달성할 수 있습니다. 
 뿐만 아니라, AWS는 온프레미스 솔루션이 따라오지 못할 수준 용량과 안정성을 고객에게 제공합니다. 
데이터 센터 구축 사업을 하는 경우를  제외하고는 여러분은 아마 지금까지 데이터 센터 구축에 너무 많은 시간과 비용을 소비했을 것입니다. 
 AWS에서는 서버나 소프트웨어 라이선스 구매 또는 시설 임대를  비롯하여 고가의 인프라 구축에 소중한 리소스를 쏟아부을  필요가 없습니다. 
필요한 만큼만 서비스 요금을 지불함으로써  혁신 및 발명에 집중할 수 있으므로 조달 복잡성을  줄이고 비즈니스에 완전한 탄력성을 부여할 수 있습니다. 
사용한 만큼 지불하는 요금을 통해 예산을 과도하게 할당하지  않고도 변화하는 비즈니스 요구에 손쉽게 적응하고 변화에  대한 대응을 개선할 수 있습니다. 
 종량 과금제 모델에서는  예측치가 아닌 정확한 수요에 따라 비즈니스에 대응할 수 있으므로 위험이나 초과 프로비저닝 또는 누락되는  용량을 줄일 수 있습니다. 
클라우드로 마이그레이션은 더 이상 IT 비용 절감 차원의 문제가 아니라 기업이 번창할 수 있는 환경을 구축하는 문제입니다. 
 디지털 혁신으로 고객을 연결하고 획기적이고 새로운 통찰력과 과학적 혁신 기술을 개발하고 혁신적이고 새로운 제품과 서비스를  제공하는 것이 그 어느 때보다 쉬워졌습니다. 
Amazon Web Services는 컴퓨팅, 스토리지, 데이터베이스, 분석, 네트워킹, 모바일, 개발자 도구, 관리 도구, IoT, 보안 및 엔터프라이즈 애플리케이션을 비롯해 광범위한 글로벌 클라우드 기반 제품을 제공합니다. 
 이러한 서비스를 사용하면 조직이 더 빠르게 움직이고, IT 비용을 낮추며, 확장할 수 있습니다. 
AWS는 세계 최대 기업 및 전 세계의 주목을 받고 있는 스타트업에서 웹 및 모바일 애플리케이션, 게임 개발, 데이터 처리 및 웨어하우징, 스토리지, 아카이브 등을 비롯한 다양한 워크로드를 강화하는 것으로 인정을 받았습니다. 
AWS 클라우드를 사용하면 높은 비용과 장기 계약 같은 혁신의 장애물을 제거하고 AWS 서비스와 광범위한 파트너 에코시스템, 지속적인 혁신을 활용하여 비즈니스  솔루션을 추진하고 비즈니스를 성장시킬 수 있습니다. 
 글로벌 입지와 비즈니스 혁신을 뒷받침하는 기술을 개발할 수  있는 전문성을 갖추고 있다는 점에서 AWS를 믿고 비즈니스  성공에 도움이 되는 솔루션을 제공해 보십시오. 
Amazon Web  Services 교육 및 자격증의 Jody Soeiro de  Faria였습니다

 



* 핵심 서비스


- 서비스 범주 및 소개

안녕하십니까? 저는 Amazon Web Services(AWS) 교육 및 자격증 팀의 Mike Blackmer라고 합니다. 
 이 모듈에서는 AWS의 서비스와 범주에 관해 다루고 AWS 설명서에 대해서도 알아볼 것입니다. 
 AWS는 일반적인 클라우드 아키텍처를 위한 빌딩 블록으로 활용할 수 있는 광범위한 글로벌 클라우드 기반 제품을 공급합니다. 
 각 제품마다 다양한 서비스를 제공합니다. 
 이 모듈에서 설명할 범주는 컴퓨팅, 스토리지, 데이터베이스, 네트워킹, 보안 등입니다. 
 그럼 이들 범주를 하나씩 살펴보겠습니다. 
 브라우저를 열고 aws.amazon.com에 접속합니다. 
 이것은 AWS 웹 사이트의 프런트 페이지입니다. 
 스크롤 바를 사용하여 약간 아래로 내려가면, AWS 제품 살펴보기라고 하는 섹션이 나오는데, 모든 제품 및 서비스가 여러 범주별로 배치되어 있습니다. 
 예를 들어 컴퓨팅을 클릭하면 목록의 처음에 Amazon EC2가 있음을 확인할 수 있는데, 다수의 다른 제품 및 서비스도 컴퓨팅 범주에 등재되어 있습니다. 
 Amazon EC2를 클릭하면 Amazon EC2 메인 페이지가 나타나는데, URL은 http://aws.amazon.com/EC2입니다. 
 메인 페이지에는 제품에 관한 소개, 상세 설명 및 몇 가지 이점이 수록되어 있습니다. 
 이에 더해 제품 세부 정보, 인스턴스 유형, 요금, 시작하기, FAQ 및 기타 리소스도 확인할 수 있습니다. 
 제품 세부 정보를 클릭하면 Amazon EC2의 기능에 관한 상세 정보가 표시됩니다. 
 프런트 페이지로 돌아가서 스토리지를 클릭하면 스토리지 아래에서 Amazon S3, Amazon EBS 등을 확인할 수 있습니다. 
 기타 스토리지 옵션도 여기에 표시됩니다. 
 데이터베이스에서는 Aurora, Amazon RDS, Amazon DynamoDB, Amazon Redshift 및 기타 옵션을 확인할 수 있습니다. 
 Amazon VPC는 컴퓨팅에 표시되는데, 컴퓨팅 리소스를 격리하기 위해 필요한 구성 요소이기 때문입니다. 
 네트워킹 및 콘텐츠 전송에서 확인할 수 있습니다. 
 보안, 자격 증명 및 규정 준수로 가면 AWS Identity & Access Management를 확인할 수 있습니다. 
 클릭하면 좀 더 자세한 정보가 표시됩니다. 
 이제 설명서에 대해 얘기하고자 합니다. 
 설명서 부분은 정말 잘 문서화되어 있습니다. 
 상단으로 스크롤 바를 이동하여 제품을 탐색하지 않고 바로 설명서 섹션으로 이동할 수 있습니다. 
 어떤 제품이든 최종 사용자용 설명서를 얻을 수 있습니다. 
 알아차리셨는지 모르겠지만, 콘솔에 로그인하지 않은 상태입니다. 
 따라서 Amazon EC2 퍼블릭 사용자로 액세스하겠습니다. 
 액세스하면 Linux용 사용 설명서, Windows용 사용 설명서, API 참조, AWS CLI 참조 등을 포함한 제반 설명서를 이용할 수 있습니다. 
 이곳은 정말 굉장합니다. 
 AWS에서 보유하고 있는 거의 모든 것을 여기에서 이용할 수 있습니다. 
 설명서 섹션에서 제품, 스토리지, S3 설명서를 선택하여 시작 안내서, 개발자 안내서 등을 선택할 수 있습니다. 
이 프레젠테이션에서는 AWS에서 이용 가능한 제품 및 서비스의 범주를 소개하고, http://aws.amazon.com에서 더 많은 정보를 확인하는 방법을 설명했습니다. 
 지금까지 AWS 교육 및 자격증 팀의 Mike Blackmer였습니다. 
 
 - AWS 글로벌 인프라
안녕하세요. 
 저는 AWS(Amazon Web Services) 교육 및 자격증 팀에서 활동 중인 Anna Fox라고 합니다. 
오늘 강의에서는 글로벌 인프라(global infrastructure)라고도 하는 AWS 호스팅에 대해 간략히 소개하겠습니다. 
 AWS의 글로벌 인프라는 크게 3가지 주제, 즉 리전, 가용 영역(AZ) 및 엣지 로케이션으로 구분할 수 있습니다. 
먼저 리전(regions)에 대해 알아보기로 하겠습니다. 
 지금 보고 계신 화면은 AWS 홈페이지입니다. 
 여기서 화면 중앙으로 마우스를 스크롤하면 여러 가지 리전 및 엣지 로케이션으로 구성된 글로벌 네트워크 지도가 나타납니다. 
이쯤에서 여러분 중 어떤 분은 리전이 정확히 무엇인지 궁금할 것입니다. 
 리전은 2개 이상의 가용 영역(AZ)을 호스팅하는 지리 영역을 가리킵니다. 
 사용자 정의 서비스 및 기능들을 구축하고 선택할 경우, 사용 중인 정보를 저장할 지리 영역을 선택할 기회가 있습니다. 
 해당 영역을 선택할 때는 어떤 영역이 비용을 절감하고 규제 요구 사항들을 준수하면서 지연 시간을 최적으로 조정하는 데 도움이 되는지를 고려하는 것이 중요합니다. 
 이번 시간에는 바로 이 점에 대해 좀 더 자세히 알아보기로 하겠습니다. 
 클라우드 컴퓨팅 서비스를 활용할 경우, 애플리케이션을 여러 리전에 쉽게 배포할 수 있습니다. 
예를 들어, 본사에서 가장 가까운 리전(샌디에이고 등)에서 애플리케이션을 운영하다가, 필요에 따라 미국 동부 해안 지역의 리전에도 같은 애플리케이션을 배포할 수 있습니다. 
 이제 가장 큰 고객 기반이 미국 버지니아 주에 위치한다고 가정해 봅시다. 
 고객들에게 더 나은 환경을 제공하기 위해 몇 번의 마우스 클릭만으로 미국 동부의 해당 리전에 애플리케이션을 쉽게 배포할 수 있습니다. 
불과 몇 분만에 최소 비용으로 지연 시간을 최소화하고 조직의 민첩성을 향상시킬 수 있습니다! 리전은 서로 완전히 독립된 엔터티(entity)이며, 한 리전의 리소스는 다른 리전으로 자동 복제되지 않습니다. 
 이제 리전 테이블의 특정 영역에서 어떤 서비스가 제공되는지 살펴보겠습니다. 
 자세히 알아보기(Learn More) 링크가 보일 것입니다. 
 이 링크로 접근해 마우스로 클릭합니다. 
 이제 글로벌 인프라(Global Infrastructure) 페이지가 나타납니다. 
마우스로 스크롤해 보겠습니다. 
 모든 AWS 로케이션에서 제공되는 서비스의 상세 정보를 나타내는 링크가 보일 것입니다. 
 이 링크를 클릭하면 AWS의 리전 테이블이 나타납니다. 
 여기서는 미주, 유럽/중동/아프리카(EMEA) 및 아시아 태평양 지역을 자세히 살펴볼 수 있습니다. 
 또한 이 테이블을 특정 위치로 더욱 세분화할 수 있다는 점을 확인할 수 있으며, 그곳에 어떤 서비스가 제공되는지도 알 수 있습니다. 
 다음은 가용 영역(AZ)에 대해 설명해 보겠습니다. 
가용 영역(AZ)이란 특정 리전 내에 존재하는 데이터 센터들의 모음을 의미합니다. 
 가용 영역들은 서로 격리되어 있으며 다만 빠르고 지연 시간이 짧은 링크에 의해 함께 연결됩니다. 
그렇다면 가용 영역을 격리하면서도 연결하는 데 따른 이점은 무엇일까요? 공통의 장애 지점이 발생하더라도 서로 격리되어 있는 다른 가용 영역에는 영향이 미치지 않는다는 점입니다. 
가용 영역들은 어떻게 격리되나요? 각 가용 영역은 물리적으로 구분된 독립적 인프라에 속합니다. 
 또한 가용 영역들은 물리적, 논리적으로 분리되어 있습니다. 
 각 영역(AZ)은 별도의 무정전 전원 공급 장치(UPS), 현장 예비 발전기, 냉각 장비, 네트워킹 및 연결 수단을 자체적으로 갖추고 있습니다. 
 가용 영역들은 모두 독립적인 전력 회사의 서로 다른 전력망을 통해 전력이 공급되며, 여러 티어1 전송 서비스 공급자를 통해 연결됩니다. 
 AZ를 서로 격리하면 한 AZ의 장애로부터 다른 AZ를 보호할 수 있으며, 다른 AZ는 요청을 처리할 수 있습니다. 
AWS의 모범 사례에 따르면 다중 AZ에 걸쳐 데이터를 프로비저닝하는 것이 좋습니다. 
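참고로, 이러한 리전 및 가용 영역 정보는 콘솔뿐 아니라 코드로도 조회할 수 있습니다. 아래는 boto3(AWS SDK for Python)와 자격 증명이 구성된 환경을 가정한 최소한의 스케치이며, 리전 이름(us-west-2)은 예시입니다.

```python
import boto3

# 가정: AWS 자격 증명이 구성된 환경에서 실행
ec2 = boto3.client("ec2", region_name="us-west-2")

# 현재 사용 가능한 리전 목록 조회
for r in ec2.describe_regions()["Regions"]:
    print(r["RegionName"], r["Endpoint"])

# 선택한 리전(us-west-2)의 가용 영역 조회
for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(az["ZoneName"], az["State"])
```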
 마지막으로 엣지 로케이션에 대해 알아보겠습니다. 
 AWS 엣지 로케이션은 Amazon CloudFront라고 하는 CDN(콘텐츠 전송 네트워크)을 호스팅합니다. 
 Cloudfront는 콘텐츠를 고객들에게 전송하는 데 사용됩니다. 
 콘텐츠에 대한 요청이 가장 가까운 엣지 로케이션으로 자동 라우팅되므로 콘텐츠가 더욱 빨리 최종 사용자에게 전송됩니다. 
  여러 엣지 로케이션과 리전으로 구성된 글로벌 네트워크를 활용하면 보다 빠른 콘텐츠 전송에 액세스할 수 있습니다. 
 엣지 로케이션은 대체로 리전 및 가용 영역(AZ)들과 비슷하게 인구 밀도가 높은 지역에 위치합니다. 
 로케이션의 전체 목록은 <http://aws.amazon.com/cloudfront/details>를 방문해 확인하시기 바랍니다. 
오늘 강의에서 배운 내용을 다시 살펴보겠습니다. 
 이번 시간에는 리전, 가용 영역 및 엣지 로케이션으로 구성되는 AWS의 글로벌 인프라에 대해 소개했습니다. 
 리전은 2개 이상의 가용 영역(AZ)으로 구분된다는 점도 간략히 설명했습니다. 
또한 가용 영역은 하나의 리전에 존재하는 데이터 센터들의 모음을 의미합니다. 
 마지막으로, 엣지 로케이션은 고객들에게 콘텐츠를 전송하기 위해 콘텐츠 전송 네트워크를 호스팅합니다. 
 오늘 다룬 주제들에 관한 자세한 내용은 <http://aws.amazon.com>에서 확인하시기 바랍니다. 
 저는 AWS(Amazon Web Services) 교육 및 자격증 팀 담당자인 Anna Fox였습니다. 
 
 - Amazon Virtual Private Cloud (VPC)
 
 안녕하세요. 
 저는 이번 모듈의 강사인 Kent Rademacher입니다. 
 저는 현재 AWS의 수석 기술 강사로서 AWS 기반 아키텍처 설계 및 AWS 기반 시스템 운영을 가르치고 있습니다. 
 이 모듈에서는 Amazon VPC(Virtual Private Cloud)에 대해 배우게 됩니다. 
 먼저 이 서비스를 소개한 다음, Amazon VPC의 기능을 살펴보기로 하겠습니다. 
 그런 다음, 앞서 설명했던 기능들을 활용하여 Amazon VPC 구성 예제를 살펴 보기로 하겠습니다. 
 마지막으로 Amazon VPC에 대한 추가 학습을 위해 다음 단계를 간략히 요약 및 설명해보기로 하겠습니다. 
 AWS 클라우드는 종량 과금제 방식의 주문형 컴퓨팅 및 관리형 서비스를 제공하며, 이들 서비스는 모두 웹을 통해 액세스할 수 있습니다. 
 이러한 컴퓨팅 리소스 및 서비스는 친숙한 네트워크 구조로 구현된 일반 IP 프로토콜을 통해 액세스할 수 있어야 합니다. 
 고객은 네트워킹 모범 사례를 준수하고 규제 및 조직상의 요구 사항들도 충족해야 합니다. 
 Amazon VPC(Virtual Private Cloud)는 사용자의 네트워킹 요구 사항들을 충족할 네트워킹 AWS 서비스입니다. 
 Amazon VPC를 사용하면 온프레미스 네트워크와 동일한 여러 가지 개념 및 구성을 사용하는 AWS 클라우드 내 프라이빗 네트워크를 생성할 수 있으며, 나중에 살펴보겠지만 제어, 보안 및 유용성을 저해하지 않고도 네트워크 설정의 복잡성이 상당 부분 추상화되었습니다. 
 Amazon VPC는 네트워크 구성을 완벽하게 제어합니다. 
 고객은 IP 주소 공간, 서브넷 및 라우팅 테이블과 같은 일반 네트워킹 구성 항목들을 정의할 수 있습니다. 
 이를 통해 인터넷에 노출되는 항목과 Amazon VPC 내에서 격리되는 항목을 각각 제어할 수 있습니다. 
 Amazon VPC는 네트워크의 보안 제어를 계층화하기 위한 방편으로 배포할 수 있습니다. 
 이는 서브넷 격리, 액세스 제어 목록 정의 및 라우팅 규칙 사용자 지정을 포함합니다. 
 수신 트래픽과 송신 트래픽을 모두 허용 및 거부하도록 완벽하게 제어할 수 있습니다. 
 마지막으로 Amazon VPC에 배포되는 AWS 서비스는 수없이 많으며, 이들 서비스는 클라우드 네트워크에 구축된 보안을 상속하고 활용합니다. 
 Amazon VPC는 AWS 기초 서비스로서 수많은 AWS 서비스와 통합됩니다. 
 예를 들면, Amazon EC2(Elastic Cloud Compute) 인스턴스는 Amazon VPC에 배포됩니다. 
 마찬가지로 Amazon RDS(Relational Database Service) 데이터베이스 인스턴스는 사용 중인 VPC에 배포되는데, 여기서 데이터베이스는 온프레미스 네트워크와 똑같은 네트워크 구조를 통해 보호됩니다. 
 Amazon VPC를 이해하고 이를 구현하면 다른 AWS 서비스를 충분히 활용할 수 있습니다. 
 이제 Amazon VPC의 기능들을 살펴보기로 하겠습니다. 
 Amazon VPC는 리전 및 가용 영역의 AWS 글로벌 인프라를 기반으로 하며, 이 VPC를 통해 AWS 클라우드에서 제공하는 높은 가용성을 쉽게 활용할 수 있습니다. 
 Amazon VPC는 리전 내에 있으며 여러 가용 영역에 걸쳐 확장할 수 있습니다. 
 각 AWS 계정은 제반 환경을 분리하는 데 사용할 수 있는 다중 VPC를 생성할 수 있습니다. 
 VPC는 여러 서브넷에 의해 분할되는 하나의 IP 주소 공간을 정의합니다. 
 이러한 서브넷들은 가용 영역 내에 배포되기 때문에 VPC는 가용 영역을 확장합니다. 
하나의 VPC에서 많은 서브넷을 생성할 수 있지만, 네트워크 토폴로지의 복잡성을 제한하기 위해 서브넷 수는 비교적 적게 유지하는 것이 권장됩니다. 물론 이는 전적으로 사용자에게 달려 있습니다. 
 서브넷과 인터넷 사이의 트래픽을 제어하기 위해 서브넷에 대한 라우팅 테이블을 구성할 수 있습니다. 
 기본적으로 VPC 내 모든 서브넷은 서로 통신할 수 있습니다. 
 서브넷은 일반적으로 퍼블릭(public) 또는 프라이빗(private)으로 분류되는데, 퍼블릭 서브넷은 인터넷에 직접 액세스할 수 있지만 프라이빗 서브넷은 인터넷에 직접 액세스할 수 없다는 차이점이 있습니다. 
 서브넷을 퍼블릭으로 설정하려면 인터넷 게이트웨이를 VPC에 연결하고 퍼블릭 서브넷의 라우팅 테이블을 업데이트하여 외부로 가는 트래픽을 인터넷 게이트웨이로 전송해야 합니다. 
 Amazon EC2 인스턴스 역시 인터넷 게이트웨이로 라우팅하려면 퍼블릭 IP 주소가 필요합니다. 
 이제 컴퓨팅 리소스 및 AWS 서비스의 배포를 시작하는 데 사용할 수 있는 Amazon VPC 예제를 설계해 보겠습니다. 
 높은 가용성을 지원하고 여러 서브넷을 사용하는 네트워크를 생성해 보겠습니다. 
VPC는 리전을 기반으로 하므로, 먼저 하나의 리전을 선택해야 합니다. 
 저는 오리건 리전(Oregon Region)을 선택했습니다. 
 그런 다음, VPC를 생성해 보겠습니다. 
 이제 이 VPC의 이름을 Test VPC로 정하고 이 VPC에 대한 IP 주소 공간을 정의해 보겠습니다. 
 10.0.0.0/16은 CIDR(Classless Inter-Domain Routing) 형식에 해당되는데, 이는 VPC에서 사용할 IP 주소가 65,000개가 넘는다는 것을 의미합니다. 
 그런 다음, Subnet A1이라는 이름의 서브넷을 생성합니다. 
 256개의 IP 주소를 포함하는 하나의 IP 주소 공간을 할당했습니다. 
 또한 이 서브넷이 가용 영역(AZ) A에서 실행될 것임을 지정합니다. 
 그런 다음, Subnet B1이라는 이름의 또 다른 서브넷을 생성했으며 하나의 IP 주소 공간을 할당합니다. 
 다만 이 주소 공간은 512개의 IP 주소를 포함합니다. 
 이제 Test IGW라는 이름의 인터넷 게이트웨이를 추가했습니다. 
 Subnet A1은 인터넷 게이트웨이를 통해 외부로 가는 트래픽이 라우팅되는 퍼블릭 서브넷이 됩니다. 
 Subnet B1은 인터넷에서 격리된 프라이빗 서브넷이 됩니다. 
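위에서 설명한 예제 구성(10.0.0.0/16 VPC, 256개·512개 주소 서브넷, 인터넷 게이트웨이)을 boto3로 표현하면 대략 다음과 같습니다. 자격 증명이 구성되어 있다고 가정한 스케치이며, 가용 영역 이름과 CIDR 값은 예시입니다.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # 오리건 리전

# 1) VPC 생성: 10.0.0.0/16 (IP 주소 65,000개 이상)
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "Test VPC"}])

# 2) 서브넷 생성: Subnet A1(주소 256개), Subnet B1(주소 512개)
subnet_a1 = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/24",
                              AvailabilityZone="us-west-2a")["Subnet"]["SubnetId"]
subnet_b1 = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/23",
                              AvailabilityZone="us-west-2b")["Subnet"]["SubnetId"]

# 3) 인터넷 게이트웨이(Test IGW)를 생성해 VPC에 연결
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# 4) 퍼블릭 서브넷(Subnet A1)용 라우팅 테이블: 외부행 트래픽을 IGW로 전송
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_a1)
```

Subnet B1에는 인터넷 게이트웨이로 가는 경로를 연결하지 않았으므로 프라이빗 서브넷으로 남습니다. 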
 지금까지 수행한 실습 내용을 요약한 다음, 다음 단계를 살펴보기로 하겠습니다. 
 지금까지 VPC, 인터넷 게이트웨이 및 서브넷을 생성하는 방법에 대해 알아보았습니다. 
 다음 단계에서는 라우팅 테이블, VPC 엔드포인트 및 피어링 연결 등 그 밖의 VPC 기능들에 대해 알아보기로 하겠습니다. 
 또한 AWS 리소스를 VPC에 배포하는 방법에 대해서도 확인할 수 있습니다. 
 자세한 내용은 AWS.amazon.com/VPC를 참조하십시오. 
 이 과정이 약간이라도 도움이 되었기를 바라며 계속해서 다른 동영상을 학습하시기 바랍니다. 
 이것으로 강의를 마칩니다. 
 저는 AWS 교육 및 자격증 팀 담당자 Kent Rademacher였습니다. 
 시청해 주셔서 감사합니다. 
 
 - 보안그룹
 안녕하세요. 
 저는 AWS(Amazon Web Services) 교육 및 자격증 팀에서 활동 중인 Anna Fox라고 합니다. 
오늘 강의에서는 AWS Security Groups에 대해 간략히 소개하겠습니다. 
AWS 클라우드의 보안은 AWS(Amazon Web Services)의 최우선 사항 중 하나에 속하며, 저희 AWS는 AWS Cloud에서 데이터를 보호하는 데 도움이 될 여러 가지 강력한 보안 옵션을 제공합니다. 
이번 시간에 제가 이야기하고 싶은 기능 중 한 가지는 보안 그룹(security groups)입니다. 
AWS에서 보안 그룹은 사용자의 가상 서버를 위한 내장 방화벽처럼 작동합니다. 
 이들 보안 그룹을 사용하면 인스턴스에 대한 접근성을 완벽하게 제어할 수 있습니다. 
 이는 가장 기초적인 수준에서 인스턴스에 대한 트래픽을 필터링하는 또 다른 방법에 불과합니다. 
 이러한 방법을 활용하면 어떤 트래픽을 허용 또는 거부할 것인지를 제어할 수 있습니다. 
 사용자의 인스턴스에 액세스할 권한이 있는 자를 결정하기 위해 보안 그룹 규칙을 구성합니다. 
 이 규칙들은 해당 인스턴스를 100% 프라이빗 또는 퍼블릭 상태로 유지하거나 혹은 그 중간 수준의 상태로 유지하는 등 다양하게 구성할 수 있습니다. 
여기서는 전형적인 AWS 멀티티어 보안 그룹의 예를 볼 수 있습니다. 
 이 아키텍처에서는 이러한 멀티티어 웹 아키텍처를 수용하기 위해 다양한 보안 그룹 규칙들이 생성되었음을 알 수 있습니다. 
 웹 티어에서 시작할 경우, 0.0.0.0/0의 소스를 선택하면 포트 80/443에서 인터넷의 모든 사이트로부터 발신된 트래픽을 허용하는 하나의 규칙을 설정했음을 알게 됩니다. 
그런 다음, 애플리케이션 티어로 이동하면 웹 티어에서 발신된 트래픽만 허용하는 하나의 보안 그룹이 있으며, 이와 마찬가지로 데이터베이스 티어는 애플리케이션 티어에서 발신된 트래픽만 허용할 수 있습니다. 
 마지막으로, SSH 포트 22를 통해 기업 네트워크에서 원격으로 관리를 허용할 목적으로 생성된 규칙도 있음을 알 수 있습니다. 
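위의 멀티티어 예와 같은 보안 그룹 규칙을 boto3로 구성한다면 대략 다음과 같은 모습이 됩니다. VPC ID, 애플리케이션 포트(8080), 기업 네트워크 대역(203.0.113.0/24)은 설명을 위해 가정한 값입니다.

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # 가정: 기존 VPC ID

# 웹 티어: 인터넷 전체(0.0.0.0/0)에서 포트 80/443 허용
web_sg = ec2.create_security_group(GroupName="web-tier", VpcId=vpc_id,
                                   Description="Web tier SG")["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)

# 애플리케이션 티어: 웹 티어 보안 그룹에서 오는 트래픽만 허용
app_sg = ec2.create_security_group(GroupName="app-tier", VpcId=vpc_id,
                                   Description="App tier SG")["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=app_sg,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
                    "UserIdGroupPairs": [{"GroupId": web_sg}]}],  # 소스 = 웹 티어 SG
)

# 관리용: 기업 네트워크(가정한 대역)에서만 SSH(22) 허용
ec2.authorize_security_group_ingress(
    GroupId=app_sg,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
                    "IpRanges": [{"CidrIp": "203.0.113.0/24"}]}],
)
```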
 이제 하나의 보안 그룹을 생성하는 과정을 살펴보기로 하겠습니다. 
 지금 저는 AWS 관리 콘솔에 로그인하고 있는데 EC2를 클릭해 보겠습니다. 
 탐색 창을 열면 Network & Security 아래로 Security Groups가 보입니다. 
 이것을 클릭해 보겠습니다. 
 그러면 해당 계정에 속한 일련의 보안 그룹들이 목록으로 나타납니다. 
 보안 그룹을 생성하려면 Create Security Group을 클릭해야 합니다. 
 팝업 창이 나타납니다. 
 이 창에서는 이름과 설명을 작성하여 이를 소스에 연결할 수 있습니다. 
  그런 다음, 여기서 해당 규칙으로 내려가면 모든 인바운드 트래픽은 기본적으로 DENIED(거부됨)로 설정되며 모든 아웃바운드 트래픽은 ALLOWED(허용됨)로 설정되는 것을 알 수 있습니다. 
 이를 편집하고 싶다면 여기서 인바운드 탭과 아웃바운드 탭을 각각 클릭해 규칙을 수정하면 됩니다. 
 트래픽 유형, 프로토콜, 포트 범위 및 소스별로 편집할 수 있습니다. 
 이때, 사용 중인 인스턴스에 필요한 트래픽이 무엇인지를 파악하고 그 트래픽만 특별히 허용하는 것이 가장 좋은 방법입니다. 
 잘 하셨습니다! 오늘 강의에서 배운 내용을 다시 살펴보겠습니다. 
 AWS는 1개 이상의 인스턴스에 대한 트래픽을 제어할 수 있는 가상 방화벽(이른바 '보안 그룹'이라고 함)을 제공합니다. 
 보안 그룹 규칙을 생성하면 인스턴스에 대한 접근성을 제어할 수 있습니다. 
 이러한 보안 그룹은 AWS 관리 콘솔에서 관리할 수 있습니다. 
보안 그룹에 관한 자세한 내용은 <http://aws.amazon.com>에서 확인하시기 바랍니다. 
 저는 AWS(Amazon Web Services) 교육 및 자격증 팀 담당자인 Anna Fox였습니다.
 

 

 


 - 컴퓨팅 서비스
 AWS 컴퓨팅 서비스에 대한 소개를 시작하겠습니다. 
 저는 이곳 Amazon Web Services에서 기술 프로그램 관리자를 맡고 있는 Allen Goldberg라고 합니다. 
모바일 앱을 구축하든, 인간 게놈의 염기서열을 분석하기 위해 거대한 클러스터를 실행하든 관계없이, 비즈니스를 구축하고 운영하는 일은 컴퓨팅에서 시작됩니다. 
 AWS는 다양한 컴퓨팅 서비스 카탈로그를 제공합니다. 
 단순한 애플리케이션 서비스에서부터 유연한 가상 서버는 물론, 심지어는 서버리스 컴퓨팅에 이르기까지 모든 서비스를 제공합니다. 
 이 동영상에서는 AWS의 컴퓨팅 서비스를 소개하고자 합니다. 
 여러 서버를 온프레미스로 운영할 경우, 많은 비용이 소요됩니다. 
 하드웨어는 실제 사용량이 아닌, 프로젝트 계획에 따라 확보해야 할 때가 많습니다. 
 데이터 센터는 구축, 인원 배치 및 유지 보수 시 많은 비용이 소요됩니다. 
게다가 최대 수요에 대비해 리소스를 프로비저닝해야 합니다. 
 사용 중인 서버는 트래픽 급증 및 이벤트를 처리할 수 있어야 합니다. 
 일단 구축이 완료되면 용량이 유휴 상태가 되는 경우가 많습니다. 
 AWS는 유연성과 비용 효율성을 제공합니다. 
 AWS를 사용하면 컴퓨팅 요구를 워크로드에 맞게 조정할 수 있습니다. 
 확장성은 컴퓨팅 서비스에 내장되어 있기 때문에 수요가 증가하면 쉽게 확장할 수 있습니다. 
 수요가 감소할 경우(예를 들면, 야간 또는 주말), 애플리케이션 규모를 축소하여 비용과 리소스를 절감할 수 있습니다. 
 사용하지 않는 애플리케이션에 비용을 들일 필요가 없습니다. 
 컴퓨팅 요구는 시간이 지남에 따라 변동할 수 있는데, 예를 들면 AWS의 Amazon EC2 서비스는 단순한 웹 서버에서부터 대규모 기계 학습 클러스터에 이르기까지 모든 유형에 적합한 다양한 가상 서버 인스턴스 유형을 제공합니다. 
 사용자는 자신이 직접 구입한 특정 하드웨어 구성에 매이지 않고 인스턴스 유형을 쉽게 변경할 수 있습니다. 
 Amazon EC2를 사용하면 여러 애플리케이션을 규모에 관계없이 실행하는 완벽한 유연성을 구현할 수 있습니다. 
 사용 환경을 계속 완벽하게 제어할 수 있으며 온프레미스 환경과는 달리, 온디맨드(On-Demand) 가격을 적용하여 필요에 따라 리소스를 비용 효율적으로 확장 및 축소할 수 있습니다. 
때로는 서버를 아예 실행할 필요가 없는 경우도 있습니다. 
 서버를 실행하는 대신, 필요할 때 애플리케이션을 실행할 수 있다면 어떨까요? AWS Lambda를 사용하면 서버를 프로비저닝하거나 관리할 필요 없이 코드를 실행할 수 있습니다. 
 사용한 컴퓨팅 시간에 대해서만 비용을 지불하면 됩니다. 
 코드를 실행하지 않을 때는 비용이 발생하지 않습니다. 
 Lambda에서는 사실상 모든 유형의 애플리케이션 또는 백엔드 서비스(예: 모바일, 사물 인터넷(IoT), 스트리밍 서비스)에 대한 코드를 별도의 관리 없이 실행할 수 있습니다. 
 이를테면 업로드된 이미지를 처리하는 경우를 예로 들 수 있습니다. 
 이 이미지를 Amazon S3에 업로드하고 이벤트 트리거를 사용하면 유휴 서버를 굳이 대기시키지 않고도 Lambda 함수를 시작하여 이미지를 처리할 수 있습니다. 
 서버를 프로비저닝하고 유지 관리할 필요 없이 컴퓨팅을 실행하는 경우를 생각해 보십시오. 
 간단한 웹사이트 또는 전자 상거래 애플리케이션을 실행해야 할 경우, AWS는 Amazon Lightsail을 제공합니다. 
 Lightsail을 사용하면 하나의 가상 프라이빗 서버(virtual private server)를 단 몇 분만에 시작할 수 있으며, 간단한 웹 서버 및 애플리케이션 서버를 쉽게 관리할 수 있습니다. 
 Lightsail은 저렴하면서도 예측 가능한 가격으로 프로젝트를 활성화하는 데 필요한 모든 것(가상 머신, SSD 기반 스토리지, 데이터 전송, DNS 관리 및 정적 IP 주소 등)을 포함하고 있습니다. 
 컨테이너 서비스를 온프레미스로 사용합니까? Amazon ECS(Elastic Container Service)는 Docker 컨테이너를 지원하는 확장성과 성능이 뛰어난 컨테이너 관리 서비스이며, 이 서비스를 사용하면 Amazon EC2 인스턴스의 관리형 클러스터에서 애플리케이션을 손쉽게 실행할 수 있습니다. 
 Amazon ECS를 사용하면 자체적 클러스터 관리 인프라를 설치, 운영 및 확장할 필요가 없습니다. 
 AWS는 여러 가지 컴퓨팅 제품을 제공하며 이를 통해 애플리케이션을 가상 서버나 컨테이너 또는 코드로 배포, 실행 및 확장할 수 있습니다. 
 AWS는 배치 프로세싱을 자동화 및 확장하고 웹 애플리케이션을 실행 및 관리하며 가상 네트워크를 생성하기 위한 서비스를 갖추고 있습니다. 
 AWS 컴퓨팅 서비스에 관한 추가 정보는 AWS.Amazon.com/products/compute를 참조하시기 바랍니다. 
 또한 AWS는 각 서비스에 대한 세부 정보를 확인할 수 있는 일련의 서비스 수준 소개도 제공합니다. 
 
 - Amazon Elastic Compute Cloud (EC2)
 안녕하세요. 
 저는 Mike Blackmer라고 합니다. 
 저는 AWS 교육 및 자격증(Training and Certification) 팀에서 교육 과정 개발 업무를 담당하고 있습니다. 
 Amazon EC2 개요를 발표하겠습니다. 
 먼저 해당 제품에 대한 몇 가지 기본적인 사실들을 제시한 다음, Amazon EC2 인스턴스를 구축 및 구성하는 방법을 보여주는 데모를 소개해 보겠습니다. 
 EC2란 무엇일까요? EC2는 Elastic Compute Cloud의 약어입니다. 
 여기서 Compute(컴퓨팅)란 제공 중인 컴퓨팅이나 서버, 리소스 등을 가리킵니다. 
 서버를 이용해 수행할 수 있는 일 중에는 재미있고 흥미진진한 것들이 많습니다. 
 또한 Cloud(클라우드)란 이러한 요소들이 클라우드에서 호스팅하는 컴퓨팅 리소스에 해당된다는 사실을 의미합니다. 
 첫 번째 단어인 Elastic(탄력적)은 서버를 올바르게 구성할 경우, 하나의 애플리케이션에 대한 현재의 수요에 따라 이 애플리케이션에서 필요한 서버의 수량을 자동으로 증감할 수 있다는 사실을 의미합니다. 
 이제 이러한 요소들을 더 이상 '서버'라 부르지 말고 그 대신에 Amazon EC2 인스턴스라는 올바른 이름을 사용해 보겠습니다. 
 인스턴스는 종량 과금제 방식으로 요금이 부과됩니다. 
 즉, 실행 중인 인스턴스 및 이러한 인스턴스를 실행 중인 시간에 한해서만 요금을 지불합니다. 
 여기서는 다양한 하드웨어 및 소프트웨어를 선택할 수 있으며, 인스턴스를 호스팅할 위치도 선택할 수 있습니다. 
 Amazon EC2는 이보다 더 많은 것을 포함하고 있습니다. 
 자세한 내용은 AWS.amazon.com/ec2를 참조하십시오. 
 이제 EC2 인스턴스를 구축 및 구성하는 방법을 시연해 보겠습니다. 
 또한 시연 과정에서 지금까지 다룬 주제들에 대해 좀 더 자세히 알아보기로 하겠습니다. 
 시연 중에는 AWS 콘솔에 로그인하여 인스턴스를 호스팅할 하나의 리전을 선택하고 EC2 마법사를 시작하며 AMI(Amazon Machine Image)를 선택하여 AWS 인스턴스에 대한 소프트웨어 플랫폼을 제공합니다. 
 그런 다음, 하드웨어 용량을 나타내는 인스턴스 유형을 선택합니다. 
 이어서 네트워크와 스토리지를 차례대로 구성하고 마지막으로 키 페어를 구성하면 해당 인스턴스를 시작한 후에 인스턴스에 연결할 수 있습니다. 
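콘솔 데모로 넘어가기 전에, 같은 단계(AMI, 인스턴스 유형, 스토리지, 태그, 키 페어)를 boto3로 표현하면 다음과 같습니다. AMI ID, 보안 그룹 ID, 키 페어 이름은 설명을 위해 가정한 자리 표시자입니다.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # 가정: Amazon Linux AMI ID
    InstanceType="t2.micro",           # 데모에서 선택한 인스턴스 유형
    MinCount=1, MaxCount=1,            # 인스턴스 1개만 시작
    KeyName="EC2 Demo",                # 미리 생성한 키 페어 이름(가정)
    SecurityGroupIds=["sg-0123456789abcdef0"],  # SSH/HTTP 허용 보안 그룹(가정)
    BlockDeviceMappings=[{             # 루트 볼륨을 16GB로 확대
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 16, "DeleteOnTermination": True},
    }],
    TagSpecifications=[{               # Name 태그로 친숙한 이름 부여
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "EC2 demo"}],
    }],
)
print(resp["Instances"][0]["InstanceId"])
```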
 저는 이미 콘솔에 로그인한 상태입니다. 
 먼저 EC2 인스턴스가 호스팅되는 해당 리전을 선택해 보겠습니다. 
 이제 리전은 인근 지역인 오리건(Oregon)으로 설정되었습니다. 
 드롭다운 목록을 클릭하면 다른 리전을 선택할 수도 있는데, 저는 리전을 변경하지 않고 계속 오리건으로 설정해 보겠습니다. 
 이제 계속 진행하여 Services를 클릭해 보겠습니다. 
 EC2를 클릭한 다음, Launch Instance를 클릭합니다. 
 첫 번째 선택 기준은 AMI(Amazon Machine Image)인데 이는 인스턴스가 시작될 때 인스턴스와 함께 발생하는 소프트웨어 로드를 가리킵니다. 
 Quick Start는 다양한 Linux 및 Windows 서버의 목록을 제시합니다. 
 또한 나만의 서버를 구축했다면 타사 이미지와 My AMIs를 포함하는 마켓플레이스도 있습니다. 
 여기서는 Amazon Linux AMI를 선택해 보겠습니다. 
 다음 화면에서는 하드웨어를 선택할 수 있는 목록이 나타납니다. 
이 목록에 나열된 하드웨어들을 일컬어 인스턴스 유형이라고 하며, 아래로 스크롤하면 코어 수와 메모리 용량이 다양한 일련의 유형들을 확인할 수 있습니다. 
 여기서는 매우 다양한 유형들이 존재합니다. 
 시연용 로우 엔드(low-end) 유형들 중 T2 Micro 인스턴스 유형을 선택해 보겠습니다. 
 다음은 Configure Instance Details를 선택해 보겠습니다. 
 동일한 하드웨어 및 소프트웨어 빌드를 공유할 수많은 이미지들을 선택적으로 생성할 수 있습니다. 
 생성 가능한 이미지의 개수는 10만 개로 제한되는 것 같습니다. 
 아시나요? 저는 지금 일자리를 유지하고 싶기 때문에 한 가지를 선택해 보겠습니다. 
 하나의 인스턴스를 구축해 보겠습니다. 
 아래로 스크롤하면 여기서는 네트워크 구성이 진행되며, 여기서는 기본값, 즉 기본 VPC(Virtual Private Cloud), 기본 서브넷 및 기본 자동 할당 설정을 계속 유지하면 DHCP 주소를 얻게 됩니다. 
 아래로 건너뛰면 나머지 모든 항목은 매우 양호한 것으로 나타납니다. 
 그런 다음, 스토리지를 추가해 보겠습니다. 
 이제 루트 볼륨의 크기를 12GB로 확대할 수 있습니다. 
 디스크의 유형을 변경할 수 있습니다. 
 새 볼륨을 추가할 수도 있습니다. 
 흥미로운 점들을 계속 살펴보기 위해 루트 볼륨의 크기를 16GB로 확대해 보겠습니다. 
인스턴스를 종료하거나 삭제할 때 이 볼륨도 함께 삭제되도록 설정하겠습니다. 
 그런 다음, 태그를 추가해 보겠습니다. 
EC2 인스턴스에는 기본적으로 알아보기 어려운 식별자 1개만 부여되기 때문에 친숙한 이름을 붙일 필요가 있습니다. Add Tag를 클릭하고 키를 Name으로 지정한 다음 값을 입력합니다. 
 이 식별자의 이름을 EC2 demo로 지정해 보겠습니다. 
 그런 다음, 보안 그룹을 구성해 보겠습니다. 
 보안 그룹은 일련의 방화벽 규칙을 가리킵니다. 
 이것은 SSH 연결을 위한 기본 규칙을 자동으로 생성합니다. 
 간단한 웹 연결을 허용하기 위해 또 다른 규칙을 추가할 수 있습니다. 
이 보안 그룹을 간단히 SSH HTTP라는 이름으로 부르기로 하며, 이제 해당 보안 그룹이 허용하는 것이 무엇인지를 정확히 파악할 수 있습니다. 
 이제 Review and Launch를 클릭합니다. 
 선택한 항목들을 상기시키는 개요가 나타납니다. 
 이 개요의 모든 항목은 계획된 항목처럼 보입니다. 
 이제 Launch를 클릭합니다. 
 SSH를 사용해 시스템에 연결하려면 하나의 키 페어를 생성해야 합니다. 
 때문에 Create a New Key Pair를 클릭하여 해당 키 페어의 이름을 EC2 Demo로 지정한 다음, 프리이빗 키를 다운로드합니다. 
 이 키를 로컬에 저장합니다. 
 SSH를 통해 연결하려면 프라이빗 키가 꼭 필요합니다. 
 이제 매직 버튼을 누르면 해당 인스턴스가 시작됩니다. 
 인스턴스가 성공적으로 시작되었습니다. 
 시작 로그의 항목들은 양호한 것으로 나타납니다. 
그리고 이 로그에는 알아보기 어려운 식별자가 있습니다. 
 이 식별자를 클릭합니다. 
친숙한 이름이 나타나고, 해당 인스턴스 상태는 보류 중(pending)으로 표시됩니다. 
 Refresh 버튼을 클릭하면 인스턴스가 실행됩니다. 
 잘 됐네요. 
 EC2 인스턴스 구축이 완료되었다면 이제 이 인스턴스에 액세스해 보겠습니다. 
 이 인스턴스를 강조 표시하면 Description 아래에서 해당 인스턴스의 퍼블릭 DNS 및 퍼블릭 IP 주소를 확인할 수 있습니다. 
 이제 이 DNS 및 주소를 복사하여 PuTTY를 시작하면 기본 사용자는 EC2-user로 설정됩니다. 
 따라서 EC2-user@를 실행합니다. 
 복사한 DNS를 붙여 넣은 다음, Open을 클릭해 보겠습니다. 
 Cache를 클릭해 로컬 키를 캐시에 저장합니다. 
 참! 아직 프라이빗 키를 구성하지 않았기 때문에 이 로컬 키는 작동하지 않습니다. 
 따라서 동일한 정보로 새 세션을 만들어 SSH 및 Auth를 선택하고 해당 프라이빗 키를 찾아봅니다. 
 프라이빗 키를 여기 이 폴더에 저장했는데 지금은 없습니다. 
 Windows 기반의 PuTTY에서는 하나의 PPK 파일이 필요하기 때문에 PuTTYgen이라 하는 또 다른 애플리케이션을 열어야 합니다. 
 Load를 클릭하여 오른쪽 폴더로 이동하면 PEM 파일이 있는지 확인할 수 있으며, 이 파일을 선택한 후 프라이빗 키를 저장합니다. 
 이렇게 하면 프라이빗 키는 PPK 파일로 저장됩니다. 
 이 키는 PuTTY 선택 창 아래에 있습니다. 
 이제 해당 연결을 열면 자동으로 로그인이 실행되며 로그인이 성공한 것으로 나타납니다. 
 이번 데모가 도움이 되셨기를 바랍니다. 
 이것으로 강의를 마칩니다. 
 저는 AWS 교육 및 자격증 팀 담당자 Mike Blackmer였습니다. 
 
 - AWS Lambda
 안녕하세요. 
 저는 AWS 교육 및 자격증 팀 담당자 Ian Falconer입니다. 
 이번 시간에는 AWS Lambda에 대한 입문 과정을 시작하겠습니다. 
AWS Lambda는 이벤트 중심의 서버리스 컴퓨팅 서비스입니다. 
 이번 강의에서는 AWS Lambda에 대해 논의해 보겠습니다. 
 AWS Lambda에 대한 간략한 소개와 서비스 혜택에 대해 살펴본 후, 몇몇 핵심 기능 및 개념들을 좀 더 자세히 다루어 보겠습니다. 
 그런 다음, 이 서비스의 일부 용례들을 살펴보고 마지막으로 이번 강의의 내용을 간략히 요약하는 것으로 마무리해 보겠습니다. 
 AWS Lambda란 무엇일까요? AWS Lambda는 서버를 프로비저닝하거나 관리할 필요 없이 코드를 실행할 수 있는 컴퓨팅 서비스입니다. 
 AWS Lambda는 필요할 때에만 코드를 실행하며 초당 수천 건의 요청으로 자동 확장됩니다. 
 이제 몇 분 동안 이 서비스의 주요 이점들을 살펴보기로 하겠습니다. 
 사용한 컴퓨팅에 대해서만 요금을 지불하면 됩니다. 
 코드가 실행되지 않는 컴퓨팅 시간에 대해서는 비용이 발생하지 않습니다. 
 이 때문에 AWS Lambda는 가변적이면서 단속적인 워크로드에 안성맞춤입니다. 
 이 서비스를 이용하면 거의 모든 애플리케이션 또는 백엔드 서비스에서 별도의 관리 없이 코드를 실행할 수 있습니다. 
 AWS Lambda는 고가용성 컴퓨팅 인프라에서 코드를 실행하며, 서버 및 운영 체제 유지 관리, 용량 프로비저닝, Auto Scaling, 코드 모니터링, 로깅 등 모든 관리 기능을 제공합니다. 
 AWS Lambda는 Node.js, Java, C Sharp 및 Python을 포함한 다양한 종류의 프로그래밍 언어들을 지원합니다. 
 AWS Lambda는 어떻게 사용할 수 있을까요? 이 서비스는 이벤트 중심 컴퓨팅에 사용할 수 있습니다. 
 Amazon S3 버킷 또는 Amazon DynamoDB 테이블의 변경을 포함한 이벤트에 대한 응답으로 코드를 실행할 수 있습니다. 
 Amazon API Gateway를 사용하여 HTTP 요청에 응답할 수 있습니다. 
 AWS SDK를 사용하여 만든 API 호출을 이용해 코드를 호출할 수 있습니다. 
AWS Lambda 함수에 의해 트리거되는 서버리스 애플리케이션을 구축할 수 있으며, AWS CodePipeline 및 AWS CodeDeploy를 사용하면 이 함수를 자동으로 배포할 수 있습니다. 
 AWS Lambda는 서버리스 및 마이크로 서비스 애플리케이션을 지원하기 위한 서비스입니다. 
 밀결합된 모놀리식 솔루션의 생성을 방지하기 위해 AWS Lambda는 다음과 같은 구성 옵션들을 활용합니다. 
 디스크 공간은 512MB로 제한됩니다. 
 메모리는 128MB에서 1,536MB까지 할당할 수 있습니다. 
 AWS Lambda 함수는 최대 5분까지만 실행됩니다. 
 사용자는 배포 패키지 크기와 파일 기술자의 최대 수에 의해 제약됩니다. 
 요청 및 응답 본문 페이로드는 6MB를 초과할 수 없습니다. 
이벤트 요청 본문 역시 128KB로 제한됩니다. 
 동시 실행 횟수는 소프트 한도에 속하며, 요청 시 증가할 수 있습니다. 
AWS Lambda 요금은 코드가 트리거되는 횟수와 코드가 실행된 시간을 기준으로 청구됩니다. 
 Lambda 함수는 매우 간편하게 구축할 수 있습니다. 
 Lambda 환경을 구성한 다음, 코드를 업로드하여 코드 실행 과정을 지켜보면 됩니다. 
 구축 방법은 그만큼 간단합니다. 
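데모에 들어가기 전에, Python 런타임을 사용한다고 가정했을 때 가장 단순한 형태의 Lambda 핸들러를 스케치하면 다음과 같습니다. 파일·핸들러 이름(lambda_function.lambda_handler)은 기본 구성을 가정한 것입니다.

```python
# lambda_function.py — 최소한의 핸들러 스케치 (Python 런타임 가정)
import json

def lambda_handler(event, context):
    # 이벤트 소스(S3, API Gateway, CloudWatch Events 등)가 전달한 데이터 확인
    print("수신한 이벤트:", json.dumps(event))
    # 실제 처리 로직(예: 이미지 썸네일 생성, 버킷 검사 등)이 들어갈 자리
    return {"statusCode": 200, "body": "처리 완료"}
```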
 이제 빠른 데모를 진행해 봅시다. 
 이미지 인식 앱을 구축해 보겠습니다. 
 정말 짧고 가벼운 앱 하나를 구축했는데 여기서는 Amazon S3에 하나의 웹사이트를 호스팅했습니다. 
 하나의 이미지를 업로드하면 Lambda 함수가 트리거되는데, 이 함수는 해당 이미지를 처리하고 썸네일을 생성합니다. 
 Lambda 함수는 매우 쉽게 생성됩니다. 
 이것은 AWS Lambda의 Create Function 페이지에서 AWS 콘솔을 보여주는 화면입니다. 
 여기서는 이미 많은 Lambda 함수가 있음을 알 수 있습니다. 
 이제 Check S3 public access(S3 퍼블릭 액세스 확인) Lambda 함수를 살펴보겠습니다. 
 Create Function을 클릭해 Lambda 함수의 이름을 지정하면 지금 보이는 것과 같은 화면이 나타납니다. 
 왼쪽 상단에 Lambda 함수 이름이 지정된 것을 볼 수 있습니다. 
 이제 내 Lambda 함수를 구성해 보겠습니다. 
 런타임을 선택했습니다. 
 이 경우에는 Python입니다. 
 핸들러의 이름을 지정하고 Python 코드를 추가했습니다. 
 여기서는 이 Python 코드를 추가했습니다. 
이 Lambda 함수는 내 S3 버킷을 검사하여 퍼블릭 액세스가 허용된 버킷이 발견되면 해당 액세스를 취소한 후 알림을 전송합니다. 
 내 Lambda 함수를 구성하는 동안 환경 변수들을 구성할 수 있습니다. 
 암호화를 적용할 경우, 태그를 적용할 수 있습니다. 
 하나의 실행 역할을 선택할 수 있습니다. 
 이 경우, 운영에 필요한 권한을 내 Lambda 함수에 부여하는 하나의 역할을 선택했습니다. 
 이제 내 Lambda 함수, 메모리 할당(이 경우, 128MB) 및 실행 제한 시간을 각각 구성할 수 있습니다. 
 제한 시간을 최대 5분으로 설정해 보겠습니다. 
 그런 다음, 내 트리거를 구성해 보겠습니다. 
 이 경우, CloudWatch 이벤트를 사용해 내 Lambda 함수를 트리거합니다. 
 하나의 CloudWatch 이벤트가 있음을 알 수 있습니다. 
 활성화된 상태의 이 이벤트는 S3 버킷 내 변경 사항들을 관찰하는 데 사용되며 이러한 특정 Lambda 함수를 트리거합니다. 
 이제 내 Lambda 함수에 대한 모니터링 페이지를 볼 수 있는데, 여기서는 이 Lambda 함수가 네 번 실행되었음을 invocation count(호출 횟수)에서 확인할 수 있습니다. 
이 Lambda 함수가 실행되면서 전송한 알림에서 IFal public이라는 이름의 S3 버킷에 관한 정보를 볼 수 있는데, 이 버킷은 모든 사용자에게 액세스 권한이 열려 있었습니다. 
이 Lambda 함수는 이러한 버킷의 퍼블릭 액세스를 취소했으며, AWS CloudTrail도 업데이트했습니다. 
 AWS Lambda를 이용하면 사실상 모든 애플리케이션 또는 백엔드 서비스에 대한 코드를 실행할 수 있습니다. 
 AWS Lambda의 용례로는 백업 자동화, Amazon S3에 업로드된 객체의 처리, 이벤트 중심의 로그 분석, 이벤트 중심의 변환, 사물 인터넷(IoT), 서버리스 웹사이트 운영 등이 있습니다. 
 이제 실시간 이미지 프로세싱의 용례를 살펴보기로 하겠습니다. 
 고객은 S3에 하나의 이미지를 업로드하면서, 이 이미지를 즉시 처리하기 위해 Lambda 함수를 트리거합니다. 
 이 함수를 사용하면 동영상, 썸네일, 인덱스 파일, 프로세스 로그 및 집계 데이터를 실시간으로 트랜스코딩할 수 있습니다. 
 AWS의 고객사 중 한 곳인 Seattle Times는 데스크톱 컴퓨터, 태블릿, 스마트폰 등 다양한 디바이스에서 이미지를 볼 수 있도록 이미지 크기를 조정하기 위해 AWS Lambda를 사용합니다. 
AWS Lambda와 Amazon Kinesis를 사용하면 애플리케이션 활동 추적, 트랜잭션 주문 처리, 클릭스트림 분석, 데이터 정리, 측정치 생성, 로그 필터링, 인덱싱, 소셜 미디어 분석, 디바이스 데이터 원격 측정 및 모니터링 등을 목적으로 실시간 스트리밍 데이터를 처리할 수 있습니다. 
 AWS의 고객들은 S3에 저장되거나 Amazon Kinesis에서 스트리밍되는 과거 데이터 및 실시간 데이터를 처리하기 위해 AWS Lambda를 사용하여 실시간으로 수십억 개의 데이터 포인트를 처리합니다. 
 그들은 매월 1천억 건의 이벤트를 처리할 수 있습니다. 
 AWS Lambda를 이용하면 추출, 변환 및 로드(ETL) 파이프라인을 구축할 수 있습니다. 
 또한 AWS Lambda를 이용하면 데이터 검증, 필터링, 정렬을 수행하거나 혹은 DynamoDB 테이블 내 모든 데이터 변경에 대한 그 밖의 변환들을 수행할 수 있으며, 변환된 데이터를 다른 데이터 저장소에 로드할 수도 있습니다. 
 Zillow는 AWS Lambda 및 Amazon Kinesis를 사용해 모바일 측정치 중 일부를 실시간으로 추적하고 있습니다. 
 이 업체는 비용 효율적인 솔루션을 불과 2주 만에 개발 및 배포할 수 있습니다. 
 AWS Lambda를 이용하면 IoT 디바이스를 위한 백엔드를 구축할 수도 있습니다. 
 API Gateway를 AWS Lambda와 결합하면 모바일 백엔드를 쉽게 구축할 수 있습니다. 
 API Gateway를 이용하면 그러한 API 요청들을 매우 간편하게 인증하고 처리할 수 있으며, AWS Lambda를 이용하면 풍부한 개인화 앱 환경을 매우 쉽게 구축하고 개발할 수 있습니다. 
 AWS의 고객들은 대부분 AWS Lambda, Amazon SNS 및 API Gateway를 이용한 마이크로 서비스 백엔드를 사용하여 자사의 웹사이트와 모바일 애플리케이션을 모두 실행하고 있습니다. 
 AWS Lambda와 그 외 AWS 서비스를 결합하면 AWS Lambda를 이용해 웹 백엔드를 구축할 수도 있습니다. 
 개발자들은 자동으로 확장 및 축소되는 고성능 웹 애플리케이션을 구축할 수 있습니다. 
 그러한 애플리케이션들은 다수의 데이터 센터에 걸쳐 고가용성의 구성 환경에서 실행되기 때문에 확장성, 백업 및 다중 데이터 센터 중복성을 구현하기 위해 관리상의 수고를 할 필요가 없습니다. 
 요컨대 AWS Lambda는 마이크로 서비스 아키텍처 구축에서부터 애플리케이션 실행에 이르기까지 AWS 서비스의 결합 조직인 셈입니다. 
오늘 이 시간에 조금이라도 배우셨기를 바라며, 계속해서 나머지 강의도 학습하시기 바랍니다. 
 강의를 마칩니다. 
 저는 AWS 교육 및 자격증 팀의 Ian Falconer였습니다. 
 시청해 주셔서 감사합니다. 
 
 - AWS Elastic Beanstalk
 AWS Elastic Beanstalk에 대한 소개를 시작하겠습니다. 
 저는 AWS(Amazon Web Services)의 EMEA(유럽, 중동 및 아프리카) 지역 담당 기술 강사인 Wilson Santana입니다. 
 이번 동영상에서는 AWS Elastic Beanstalk 서비스를 간략히 소개합니다. 
 또한 솔루션의 구성 요소에 대해서도 논의하는 한편, 해당 제품과 그 이점 및 기능들을 시연해보기로 하겠습니다. 
 각자 웹 서버의 개발자라고 생각하고 이번 강의 동영상을 시청하시기 바랍니다. 
 실제로 시스템의 전체 관리를 개발하고 실제 서버 개발의 배후에 있는 모든 것들을 관리하는 일에 대해서는 고민할 필요가 없을 것입니다. 
 아마도 내 애플리케이션을 클라우드로 신속하게 가져올 수 있는 방법이 궁금할 것입니다. 
 시스템 개발을 시작할 수 있도록 전체 환경을 신속하게 준비할 방법이 있다면 과연 무엇일까요? 이 질문에 대한 해답은 AWS Elastic Beanstalk에 있습니다. 
 그렇다면 AWS Elastic Beanstalk는 실제로 어떻게 작동할까요? 이 시스템의 이점과 특징으로는 무엇이 있을까요? AWS Elastic Beanstalk은 서비스로서의 플랫폼(PaaS)에 속하는데, 여기서 서비스로서의 플랫폼(PaaS)이라는 것은 사용 중인 코드를 필요에 따라 시스템에 간단히 배치할 수 있도록 전체 인프라와 전체 플랫폼이 생성되었다는 것을 의미합니다. 
 이것을 활용하면 사용자의 애플리케이션을 신속하게 배포할 수도 있습니다. 
 이전에 일부 특정 언어로 작성한 모든 코드는 사용자가 보유한 플랫폼에 간단히 배치할 수 있습니다. 
 또한 AWS Elastic Beanstalk은 관리상의 복잡성을 줄여줍니다. 
 전체 시스템을 관리하는 것에 대해서는 걱정할 필요가 없으며 다만 원한다면 전체 시스템을 완벽하게 제어할 수도 있습니다. 
 사용자를 위해 개발된 시스템을 제어하면 필요에 따라 인스턴스 유형을 선택하거나 데이터베이스를 선택할 수 있습니다. 
 또한 AWS Elastic Beanstalk을 사용하면 필요에 따라 Auto Scaling 설정값을 조정할 수 있습니다. 
 뿐만 아니라, 사용 중인 애플리케이션을 업데이트하고 서버 로그 파일에 액세스하며 애플리케이션의 요구에 따라 로드 밸런서에서 HTTPS를 활성화할 수도 있습니다. 
 또한 AWS Elastic Beanstalk은 다양한 종류의 플랫폼을 지원합니다. 
이 PaaS는 Packer 빌더, 단일 컨테이너 또는 다중 컨테이너, 사전 구성된 도커(Preconfigured Docker) 플랫폼을 지원합니다. 
 이는 Go, Java with Tomcat, Java SE, Windows 기반 .NET, Node.js, PHP, Python 및 Ruby를 각각 지원합니다. 
 따라서 사용자의 기술과 웹 서버 개발 아이디어에 따라 코드를 작성하면 되며, Elastic Beanstalk를 사용하면 필요에 따라 사용자의 환경을 배포할 수 있습니다. 
 Elastic Beanstalk는 모든 애플리케이션 서비스, HTTP 서비스, 운영 체제(OS), 언어 해석기 및 호스트를 제공합니다. 
 여기서는 사용 중인 서비스의 요구에 따라 코드를 생성, 배포 및 준비한 후, 필요에 따라 애플리케이션을 사용하기만 하면 됩니다. 
 이를 통해 원하는 것을 매우 쉽게 구현하게 됩니다. 
 또한 사용 중인 서버는 애플리케이션 생성만을 기반으로 하여 단계별로 배포 및 업데이트할 수 있습니다. 
 그런 다음, 해당 버전들을 Beanstalk으로 업로드한 후 사용 중인 애플리케이션의 요구에 따라 클라우드에서 필요한 환경들을 모두 시작합니다. 
 그 후에는 사용자의 환경을 관리할 수 있으며 새 버전을 작성해야 할 경우, 해당 버전을 업데이트하면 됩니다. 
 여기서 중요한 것은 사용자가 환경을 관리할 수 있다는 점입니다. 
 이 사이클을 이용하면 애플리케이션을 배포하는 것만큼이나 쉽게 애플리케이션을 업데이트할 수 있게 됩니다. 
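이 사이클을 boto3로 표현하면 대략 다음과 같은 스케치가 됩니다. 버킷 이름, 애플리케이션 이름, 배포 패키지 키는 가정한 값이며, 솔루션 스택 이름은 계정과 시점에 따라 달라지므로 list_available_solution_stacks로 조회해 고르는 방식을 취했습니다.

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# 애플리케이션과 버전 생성 (코드 zip은 S3에 업로드되어 있다고 가정)
eb.create_application(ApplicationName="BeanstalkDemo",
                      Description="This is a demo")
eb.create_application_version(
    ApplicationName="BeanstalkDemo", VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "app.zip"},  # 가정한 위치
)

# 사용 가능한 플랫폼(솔루션 스택) 이름 조회 후 Python 스택 선택
stacks = eb.list_available_solution_stacks()["SolutionStacks"]
python_stack = next(s for s in stacks if "Python" in s)

# 웹 서버 환경 생성: 완료되면 URL이 자동으로 부여됩니다.
eb.create_environment(ApplicationName="BeanstalkDemo",
                      EnvironmentName="BeanstalkDemo-env",
                      SolutionStackName=python_stack,
                      VersionLabel="v1")
```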
 이제 제품 시연으로 넘어가 Elastic Beanstalk의 특징과 이점들을 시연해보기로 하겠습니다. 
 실제로 여러 웹 서버를 생성했으며 가령 이 서비스를 Python으로 작성했고 모든 코드가 여기에 있는 경우를 생각해볼 수 있습니다. 
이것은 정말 간단한 코드이며, 애플리케이션은 올바르게 압축되어 있습니다. 
 그렇다면 이제 무엇을 해야 할까요? 전 세계의 모든 사람들에게 내 웹 서비스를 제대로 보여주려면 해당 환경을 실제로 어떻게 사용할 수 있을까요? Beanstalk을 이용하면 이런 작업을 매우 수월하게 처리할 수 있습니다. 
 실제로 서비스를 시작하기만 하면 되기 때문입니다. 
 이제 대시보드로 이동해 Elastic Beanstalk을 살펴보기로 하겠습니다. 
 여기서는 새 애플리케이션을 생성하기만 하면 됩니다. 
 애플리케이션의 이름을 입력하면 됩니다. 
 애플리케이션의 이름을 BeanstalkDemo로 입력해 보겠습니다. 
  그리고 This is a demo를 간단한 설명으로 입력합니다. 
 여기서는 애플리케이션을 위한 환경을 생성하는 것만 알아두면 됩니다. 
 그렇다면 이제 무엇을 해야 할까요? 하나의 환경을 지금 생성합니다. 
 이것은 웹 서버 환경에 속합니다. 
장시간 실행되는 온디맨드 워크로드나 여러 작업 및 일정을 처리해야 할 경우에는 작업자(worker) 환경을 선택할 수 있지만, 여기서는 단순한 웹 서버 환경만 생성합니다. 
 여기로 이동해 Web server environment를 선택한 다음, 내 요구에 따라 몇몇 추가 데이터를 입력합니다. 
 수동으로 생성하려는 모든 도메인 데이터를 여기에 입력하되 자동으로 생성되는 도메인 데이터에 대해서는 입력하지 않고 빈 칸으로 두면 됩니다. 
  이제 This is a demo라는 설명을 입력해 보겠습니다. 
  여기에는 일부 옵션들이 있습니다. 
 그리고 이미 언급한 언어와 지원을 포함하는 사전 구성된 플랫폼들이 일부 있습니다. 
 이 경우에는 내 애플리케이션이 Python으로 작성되었기 때문에 Python을 선택해 보겠습니다. 
 또한 여기에는 샘플 애플리케이션을 실행하는 옵션도 있습니다. 
 따라서 Elastic Beanstalk와 연동할 애플리케이션 없이 지금 바로 계정을 사용하려는 경우, 이를 매우 간단하게 처리할 수 있습니다. 
 Elastic Beanstalk를 열어 샘플 애플리케이션을 실행하면 됩니다. 
 이 경우에는 Python으로 작성된 코드가 있습니다. 
 이제 코드를 업로드한 후 하나의 URL로 이동해 이를 업로드해 보겠습니다. 
 지금 갖고 있는 로컬 파일을 선택해 보겠습니다. 
 이 파일을 사용할 수 있는 경우, S3 URL을 이 파일에 업로드하기만 하면 됩니다. 
 내 파일은 zip 형식의 압축 파일로 되어 있는데 이 파일을 업로드하겠습니다. 
 기본 구성을 변경하려는 경우, Configure more options로 이동하면 됩니다. 
 여기 이 영역에서는 프리 티어 기본값에 해당하는 Low cost를 선택하거나 High availability 또는 Custom configuration을 선택할 수 있습니다. 
 본 예제에서는 이 모든 옵션을 기본값으로 계속 유지합니다. 
 다만 여기 이 예제에서는 기본값을 생성한 후에 이 값이 어떻게 유연하게 변동할 수 있는지를 볼 수 있습니다. 
 이제 환경을 생성해 보겠습니다. 
 이제 시스템은 전체 환경은 물론, 필요한 모든 인스턴스와 네트워킹 환경을 생성하고 있습니다. 
 사용 중인 애플리케이션에서 데이터베이스가 필요하거나 고가용성의 환경에서 뭔가를 추가로 배포해야 할 경우, 여기서 모든 단계가 진행 및 표시됩니다. 
 이러한 과정은 애플리케이션의 크기에 따라 대략 5분 내지 10분 정도 소요될 수 있으며 때로는 그보다 훨씬 더 긴 시간이 소요될 수도 있습니다. 
 소요 시간을 단축하기 위해 사용 가능한 코드와 똑같은 코드로 된 하나의 환경을 이미 생성했습니다. 
 이 환경에서는 모든 준비가 완료되는 시기를 확인하게 됩니다. 
 모든 준비가 완료되면 이와 같은 대시보드를 갖게 되는데 이 대시보드는 사용자가 이미 생성한 것들을 보여주며 이를 통해 새로운 버전들을 업로드하고 배포할 수 있습니다. 
 무엇보다 중요한 것은 이러한 URL을 갖게 된다는 점입니다. 
 이 URL은 누구나 어디서든지 액세스할 수 있도록 애플리케이션에 맞게 생성되었습니다. 
 이 URL을 클릭하면 사용자의 코드가 배포되었으며 사용자의 요구에 따라 생성되었는지 확인할 수 있습니다. 
 또한 여기서는 이 모든 제어를 명령줄 인터페이스(CLI) 및 스크립트를 사용해 수행할 수도 있습니다. 
 분명한 것은 이미 생성된 환경은 해당 코드에 따라 필요한 것을 제공한다는 점이며, 때문에 시스템의 아키텍처에 대해 미리 고민할 필요는 없습니다. 
 따라서 개발자들은 이처럼 매우 손쉬운 방법으로 코드를 생성하여 이를 실제 시나리오에서 사용할 수 있습니다. 
 본 프레젠테이션에서 뭔가 필요한 것을 익히시기 바랍니다. 
 저는 Wilson Santana였습니다. 
 시청해 주셔서 감사합니다. 
 
 - Application Load Balancer
 안녕하세요, Application Load Balancer 소개에 오신 것을 환영합니다. 
 이 동영상에서는 Elastic Load Balancing(ELB) 서비스에 포함된 두 번째 로드 밸런서 유형인 Application Load Balancer에 대해 소개합니다. 
 저는 Seph Robinson입니다. 
 AWS에서 근무한 지는 5년이 넘었는데 현재는 Amazon Web Services(AWS)를 사용하는 고객에게 교육을 제공하는 기술 강사로 일하고 있습니다. 
 이 동영상에서는 먼저 Application Load Balancer의 개요를 살펴보고 이 서비스에 포함된 주요 기능을 몇 가지 소개합니다. 
 그런 다음 Application Load Balancer를 활용할 수 있는 몇 가지 사용 시나리오를 알아봅니다. 
 마지막으로 로드 밸런서 자체를 간략하게 시연합니다. 
 로드 밸런서란 무엇입니까? Application Load Balancer는 앞서 설명한 대로 Elastic Load Balancing 서비스의 일환으로 출시된 두 번째 유형의 로드 밸런서입니다. 
 이 로드 밸런서는 Classic Load Balancer가 제공하는 기능을 대부분 제공하는 이외에, 몇 가지 중요한 기능 및 개선 사항을 추가하여 독자적인 사용 사례를 구현할 수 있습니다. 
 간단히 살펴보면 새로 향상된 기능으로 지원되는 요청 프로토콜이 추가되었고 지표 및 액세스 로그가 개선되었으며 상태 확인 대상이 확대되었습니다. 
Application Load Balancer의 추가 기능으로 경로 또는 호스트 기반 라우팅을 사용하는 요청에 대한 추가 라우팅 메커니즘, VPC에서 IPv6 기본 지원, AWS 웹 애플리케이션 방화벽(WAF) 통합 등이 있습니다. 
 Application Load Balancer를 사용할 수 있는 시나리오는 매우 다양합니다. 
 컨테이너를 사용하여 마이크로 서비스를 호스트하고 단일 로드 밸런서로부터 이러한 애플리케이션으로 라우팅하는 것이 한 가지 시나리오입니다. 
 Application Load Balancer를 사용하면 서로 다른 요청을 동일한 인스턴스로 라우팅하되 포트에 따라 다른 경로를 지정할 수 있습니다. 
 다양한 포트에서 수신 대기하는 여러 컨테이너가 있을 경우 라우팅 규칙을 설정하여 원하는 백엔드 애플리케이션으로만 트래픽을 분배할 수 있습니다. 
 Application Load Balancer에 대해 알아볼 때 배워야 할 새로운 용어가 몇 가지 있습니다. 
 리스너는 기본적으로 동일하지만 이제 대상을 대상 그룹으로 그룹화할 수 있습니다. 
 Application Load Balancer는 인스턴스 대신 대상을 등록하므로 대상 그룹이 로드 밸런서에 대상이 등록되는 방식입니다. 
 여기에 Application Load Balancer가 백엔드 대상을 라우팅하고 구성하는 방식을 볼 수 있습니다. 
 로드 밸런서에 대해 리스너를 구성할 때 로드 밸런서가 수신하는 요청이 백엔드 대상으로 라우팅되는 방식을 지정하기 위해 규칙을 생성합니다. 
 이러한 대상을 로드 밸런서에 등록하고 로드 밸런서가 대상에 사용하는 상태 확인을 구성하려면 대상 그룹을 생성합니다. 
 여기에서 보듯이 대상은 여러 대상 그룹의 멤버가 될 수 있습니다. 
 앞서 설명한 대로 Application Load Balancer는 향상된 기능과 추가된 기능을 모두 포함하고 있습니다. 
 Application Load Balancer는 HTTP/2 및 WebSockets 지원을 추가하여 지원 프로토콜을 개선했습니다. 
 또한 지표 차원을 추가하고 보다 세분화된 상태 확인을 수행하며 액세스 로그에서 세부 정보를 추가하여 모니터링 기능을 확장했습니다. 
 현재 지원되는 추가 기능에는 경로 및 호스트 기반 라우팅이 있습니다. 
 경로 기반 라우팅에서는 요청 내 URL을 기반으로 대상 그룹으로 라우팅하는 규칙을 생성할 수 있습니다. 
 호스트 기반 라우팅에서는 동일한 로드 밸런서가 여러 도메인을 지원할 수 있고 요청된 도메인을 기반으로 요청을 대상 그룹으로 라우팅할 수 있습니다. 
 이밖에 요청 추적을 사용하여 클라이언트에서 대상까지 요청을 추적할 수 있고 EC2 Container Service 예약 컨테이너를 사용할 때 동적 호스트 포트를 설정할 수 있습니다. 
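아래는 뒤에 나오는 데모와 비슷한 구성을 boto3(elbv2 API)로 스케치한 것입니다. 서브넷·보안 그룹·VPC·인스턴스 ID는 모두 가정한 자리 표시자이고, 대상 그룹 이름은 API 제약(공백 불가)에 맞춰 Demo-One으로 바꿨습니다.

```python
import boto3

elbv2 = boto3.client("elbv2")

# ALB 생성: 두 개 이상의 가용 영역(서브넷)이 필요합니다.
alb = elbv2.create_load_balancer(
    Name="alb-test", Type="application", Scheme="internet-facing",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # 가정한 서브넷 ID
    SecurityGroups=["sg-cccc3333"],                   # 가정한 보안 그룹
)["LoadBalancers"][0]

# 대상 그룹 Demo-One(HTTP:80) — 상태 확인 주기 10초, 대상은 test.html
tg1 = elbv2.create_target_group(
    Name="Demo-One", Protocol="HTTP", Port=80, VpcId="vpc-dddd4444",
    HealthCheckPath="/test.html", HealthCheckIntervalSeconds=10,
)["TargetGroups"][0]["TargetGroupArn"]

# 인스턴스를 대상으로 등록
elbv2.register_targets(TargetGroupArn=tg1,
                       Targets=[{"Id": "i-0123456789abcdef0"}])

# 리스너 생성: 포트 80의 요청을 Demo-One으로 전달
listener = elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancerArn"], Protocol="HTTP", Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg1}],
)["Listeners"][0]

# 경로 기반 라우팅 규칙 예: /images/* 요청만 다른 대상 그룹(tg2, 별도 생성 가정)으로 전달
# elbv2.create_rule(ListenerArn=listener["ListenerArn"], Priority=1,
#                   Conditions=[{"Field": "path-pattern", "Values": ["/images/*"]}],
#                   Actions=[{"Type": "forward", "TargetGroupArn": tg2}])
```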
 이제 Application Load Balancer 데모를 간략하게 살펴보겠습니다. 
 시작은 AWS Management Console입니다. 
 로드 밸런서를 생성하기 위해 EC2 콘솔로 이동합니다. 
 EC2 콘솔에 두 개의 인스턴스가 이미 실행 중인 것이 보일 것입니다. 
 로드 밸런서를 시연하는 동안 인스턴스가 시작할 때까지 기다릴 필요가 없도록 제가 미리 실행한 것입니다. 
 설정된 내용을 확인하고 테스트하기 위해 앞서 생성한 애플리케이션 ELB 테스트 인스턴스를 살펴보겠습니다. 
 이 인스턴스를 보면서 두 개의 컨테이너가 두 개의 포트에서 수신 대기하는지 확인하겠습니다. 
 이를 위해 인스턴스의 퍼블릭 IP 주소를 복사한 다음, 웹 브라우저 탭에서 데모를 위해 설정한 페이지로 이동합니다. 
 첫 번째 페이지는 포트 80에서 수신 대기만 하는 test.html입니다. 
 이 사이트로 이동하면 Container One이 작동하는 것이 보일 것입니다. 
 다른 포트에서 수신 대기하는지 보려면 포트 443으로 이동하고 동일한 페이지 위치로 이동합니다. 
 그러면 두 번째 컨테이너가 실행 중임을 알 수 있습니다. 
 이제 확인을 마쳤으므로 계속해서 Application Load Balancer를 생성하겠습니다. 
 측면 탐색 창에서 Load Balancers로 이동합니다. 
 생성된 로드 밸런서가 없는 것이 보일 것입니다. 
 Application Load Balancer를 생성하기 위해 먼저 Create Load Balancer를 클릭합니다. 
 여기서는 기본값 Application Load Balancer를 그대로 유지합니다. 
 그런 다음 Continue를 클릭합니다. 
 여기에서 로드 밸런서 구성을 시작합니다. 
 먼저 로드 밸런서 이름을 지정합니다. 
여기에서 지정하는 이름이 이 로드 밸런서의 DNS 엔드포인트에 적용된다는 점을 숙지하고, 이 로드 밸런서의 이름은 Application Load Balancer의 준말을 써서 alb-test로 지정합니다. 
 이것은 인터넷 경계, 즉 공개적으로 참조할 수 있는 DNS 엔드포인트를 가지는 로드 밸런서입니다. 
 그래서 주소 유형을 기본값 IPV4 그대로 유지합니다. 
 로드 밸런서의 리스너의 경우, 기본 설정은 이미 포트 80에서 수신 대기하는 것이지만, 동일한 로드 밸런서에서 두 번째 컨테이너로 라우팅할 수 있도록 추가 리스너를 추가하겠습니다. 
이것은 포트 443에 대한 간단한 HTTP 요청이 될 것입니다. 
 이제 로드 밸런서를 실행할 가용 영역을 선택합니다. 
 Application Load Balancer에서는 두 개 이상의 가용 영역을 선택해야 합니다. 
 그러므로 제가 이 데모를 위해 생성한 VPC를 선택하고 제가 서브넷을 생성해 놓은 가용 영역 두 개를 선택하겠습니다. 
 그런 다음 로드 밸런서에 태그를 지정할 수 있는 옵션이 있습니다. 
 로드 밸런서에 태그를 지정하려면 이 로드 밸런서를 참조할 키와 값을 지정하기만 하면 됩니다. 
 여기서 빌드하는 로드 밸런서는 키를 Name으로 설정하고 값을 Application Load Balancer로 설정하겠습니다. 
 이제 보안 설정을 구성할 수 있습니다. 
SSL 리스너를 사용한다면 이 페이지에서 보안 설정을 구성하겠지만, 여기서는 사용하지 않으므로 계속해서 보안 그룹을 구성하는 다음 페이지로 이동합니다. 
 로드 밸런서에 대해 기본 보안 그룹을 선택 취소하고 제가 이 로드 밸런서에 대해 설정한 테스트 웹 서버 보안 그룹을 선택합니다. 
 이제 라우팅을 구성할 수 있습니다. 
 여기서 로드 밸런서의 백엔드 대상에 대한 라우팅 규칙을 구성할 수 있습니다. 
 미리 생성한 대상 그룹이 없기 때문에 새 대상 그룹 세트를 유지합니다. 
 그런 다음 대상 그룹에 이름을 지정합니다. 
 이 대상 그룹의 이름을 Demo One이라고 하겠습니다. 
 이 대상 그룹이 사용하는 프로토콜은 HTTP이고, 포트는 80입니다. 
 상태 확인에 대해, 트래픽은 HTTP 요청으로 유지하고 상태 확인 대상은 앞서 설정한 간단한 웹 페이지, 즉 test.html으로 지정합니다. 
 또한 Advanced Health Check Settings로 이동할 수도 있습니다. 
 여기서 상태 확인을 수행하는 방식을 조정할 수 있습니다. 
 조기에 대상이 정상 상태인지 확인하기 위해 상태 확인 주기를 10초로 낮출 것입니다. 
 하지만 시간 초과 및 정상/이상 임계값은 그대로 유지하겠습니다. 
 이제 대상을 등록하겠습니다. 
대상을 등록하면 로드 밸런서가 어떤 인스턴스의 어떤 포트로 트래픽을 보낼지 알게 되므로, 앞서 설정한 애플리케이션 ELB 테스트 인스턴스를 선택하겠습니다. 
 이 인스턴스를 선택한 다음 Add to Registered를 클릭합니다. 
 등록된 대상 중 하나로 나열된 것이 보일 것입니다. 
 계속해서 검토 페이지로 이동합니다. 
 Review 페이지에서 앞서 구성한 내용을 모두 확인할 수 있습니다. 
 로드 밸런서의 이름, 설정된 리스너 및 라우팅 규칙, Demo One으로 설정한 새로운 대상 그룹이 표시됩니다. 
 이제 Create를 클릭할 수 있습니다. 
 로드 밸런서가 성공적으로 생성되었습니다. 
 이제 화면을 닫으면 로드 밸런서 대시보드로 이동합니다. 
 이 로드 밸런서로 확인하려는 대상이 두 개이므로 두 번째 대상을 등록하려면 먼저 대상 그룹을 생성해야 합니다. 
 Target Group 아래에서 Create Target Group을 선택합니다. 
 그러면 이 새 대상 그룹이 앞서 설정한 두 번째 컨테이너로 갑니다. 
이 대상 그룹은 이름이 Demo Two이고 트래픽은 HTTP 요청입니다. 
 하지만 요청을 포트 443으로 전달할 것입니다. 
 이는 VPC도 동일하고 상태 확인 대상도 동일하지만 다른 별도의 컨테이너에서 이루어지므로 test.html이 될 것입니다. 
 이번에도 Advanced Health Check 설정 아래에서 상태 확인을 조정할 수 있습니다. 
 마찬가지로 주기를 10초로 낮추겠습니다. 
 이제 대상 그룹을 생성할 수 있습니다. 
 두 번째 대상 그룹이 성공적으로 생성된 것으로 나옵니다. 
 이 두 번째 대상 그룹에서, 인스턴스를 대상으로 등록했는지 확인해야 합니다. 
 등록이 완료되었으므로 이제 로드 밸런서를 검토하고 로드 밸런서에서 수신 대기하도록 두 포트 모두 설정되었는지 확인할 수 있습니다. 
 하지만 로드 밸런서를 생성할 때 포트 443을 설정했기 때문에 현재 로드 밸런서가 트래픽을 Demo One으로 전달합니다. 
이를 변경하려면 View and Edit Rules를 클릭하고, Then 아래에서 트래픽을 Demo One이 아니라 Demo Two로 전달하도록 앞서 생성한 규칙을 수정합니다. 
그런 다음 Update를 클릭하면 포트 443에 도달하는 트래픽을 라우팅하는 규칙이 갱신됩니다. 
 이제 로드 밸런서가 트래픽을 대상 그룹 Demo Two로 전달합니다. 
 이제 뒤로 돌아가 로드 밸런서를 확인할 수 있습니다. 
 두 번째 대상 그룹을 생성하여 로드 밸런스에 등록했으므로 테스트를 통해 트래픽이 각 컨테이너로 전송되는지 확인할 수 있습니다. 
 이렇게 하려면 다시 DNS 이름을 복사한 다음 첫 번째 컨테이너용의 새 탭에 DNS 이름을 붙여 넣고 이 데모를 위해 설정한 대상인 test.html으로 이동하여 Container One이 사용 가능한지 확인할 수 있습니다. 
 두 번째 컨테이너를 테스트하기 위해 포트 443에서 수신 대기하는 로드 밸런서가 있으므로 이를 포트 443으로 가도록 설정하겠습니다. 
 그러면 Container Two가 수신 대기하는 인스턴스에서 트래픽이 443으로 전달되어야 합니다. 
 ENTER를 누르면 이제 Container Two가 실행되는 것을 확인할 수 있습니다. 
 로드 밸런서의 수신 대기를 조정하려면 언제나 Listeners 탭으로 이동하여 리스너를 추가하거나 실행 중인 리스너를 수정할 수 있습니다. 
 요약하자면 이 데모에서는 Application Load Balancer를 시작하고, 라우팅 규칙을 구성하고, 로드 밸런서에 대상을 등록하고, Application Load Balancer의 라우팅 동작을 확인하는 절차를 시연했습니다. 
 이 과정이 약간이라도 도움이 되었기를 바라며 계속해서 다른 동영상을 학습하시기 바랍니다. 
 AWS 교육 및 자격증의 Seph Robinson이었습니다. 
 시청해 주셔서 감사합니다. 
 
- 탄력적 로드 밸런서
Amazon Elastic Load Balancing 소개에 오신 것을 환영합니다. 이 동영상에서는 탄력적 로드 밸런서의 원래 유형인 Classic Load Balancer를 소개합니다. 
 저는 Amazon Web Services(AWS)의 기술 강사 Seph입니다. 
AWS에서 근무한 지는 5년이 넘었네요. 
 이 동영상에서는 Classic Load Balancer에 대해 살펴볼 것입니다. 
 간략한 서비스 소개부터 시작하여 몇몇 주요 기능을 개략적으로 설명합니다. 
 그런 다음 로드 밸런서를 시작하는 절차를 간략하게 시연합니다. 
 Classic Load Balancer는 분산형 소프트웨어 로드 밸런싱 서비스로, 이 관리형 솔루션은 유용한 기능을 다수 포함하고 있습니다. 
 Elastic Load Balancing을 선택할 수 있는 다양한 시나리오는 유일하게 노출되는 액세스 포인트를 통해 웹 서버 액세스를 보호하거나, 애플리케이션 환경을 결합 해제하거나, 퍼블릭(또는 인터넷 경계) 및 내부 로드 밸런서를 함께 사용하거나, 트래픽을 여러 가용 영역으로 분산하여 고가용성 및 내결함성을 제공하거나 최소한의 오버헤드로 탄력성 및 확장성을 제고하는 것이 될 수 있습니다. 
 트래픽 분산의 경우, Elastic Load Balancing이 트래픽을 분산하는 능력은 어떤 유형의 요청을 분산하는가에 달려 있습니다. 
 TCP 요청을 분산하는 경우 Elastic Load Balancing은 이러한 요청에 대해 단순 라운드 로빈을 사용합니다. 
 HTTP 또는 HTTPS 요청을 처리하는 경우 Elastic Load Balancing이 백엔드 인스턴스에 대해 최소 대기 요청을 사용합니다. 
 또한 Elastic Load Balancing은 여러 가용 영역으로 트래픽을 분산하는 것을 돕습니다. 
 AWS Management Console에서 로드 밸런서를 생성할 경우 이 기능이 기본적으로 활성화됩니다. 
 하지만 명령줄 도구 또는 SDK를 통해 Elastic Load Balancing을 시작할 경우에는 보조 프로세스로 활성화해야 합니다. 
 앞서 설명한 대로, Elastic Load Balancing은 백엔드 인스턴스에 액세스하기 위한 유일하게 노출되는 액세스 포인트를 제공합니다. 
 이를 위한 가장 간편한 방법은 도메인의 CNAME를 Elastic Load Balancing용 엔드포인트로 가리키는 별칭(Alias) 레코드를 설정하는 것입니다. 
 애플리케이션에 쿠키를 사용하려는 경우 Elastic Load Balancing이 고정 세션의 기능을 제공합니다. 
 그러면 해당 세션 동안 사용자 세션을 바인딩할 수 있으며, 이는 기간 기반 쿠키 또는 애플리케이션 제어 고정 세션을 사용할지 여부에 따라 설정됩니다. 
 모니터링에 관한 한, Elastic Load Balancing은 다양한 지표를 기본적으로 제공합니다. 
 이러한 지표를 사용하여 HTTP 응답, 로드 밸런서 뒤의 정상/비정상 호스트 수를 확인할 수 있으며, 백엔드 인스턴스의 가용 영역을 기반으로 또는 사용 중이던 로드 밸런서를 기반으로 이러한 지표를 필터링할 수 있습니다. 
 상태 확인의 경우, 로드 밸런서를 사용하여 로드 밸런서 뒤의 정상/비정상 EC2 호스트의 수를 확인할 수 있습니다. 
 이 확인은 백엔드 EC2 인스턴스에 대한 간단한 연결 시도 또는 ping 요청을 통해 이루어집니다. 
 로드 밸런서는 VPC 내부의 여러 가용 영역으로 트래픽을 분산시킬 수 있는 다중 영역 로드 밸런싱을 제공하여 확장성을 높이도록 지원합니다. 
 또한 로드 밸런서 자체가 처리하는 트래픽 패턴에 따라 확장됩니다. 
 Classic Load Balancer에서는 여러 유형의 로드 밸런서를 생성할 수 있습니다. 
 한 유형은 인터넷 경계 또는 퍼블릭 로드 밸런서입니다. 
 이 유형은 여전히 교차 영역 밸런싱이 가능하며 로드 밸런서의 유일하게 노출되는 엔드포인트에서 백엔드 인스턴스로 요청을 라우팅할 수 있게 해주는 공개적으로 확인할 수 있는 DNS 이름을 제공합니다. 
 다른 유형의 로드 밸런서는 내부 로드 밸런서입니다. 
 내부 로드 밸런서는 프라이빗 노드로만 확인되어 VPC를 통해야만 액세스할 수 있는 DNS 이름을 가집니다. 
 이는 VPC 내부 인프라의 결합 해제를 제공하며 프론트 엔드 및 백엔드 인스턴스 모두에 대한 확장이 가능하면서도 로드 밸런서가 자체의 확장을 처리합니다. 
 이제 Classic Load Balancer 데모를 간략하게 살펴보겠습니다. 
 이 데모에서는 로드 밸런서를 시작하고 이 로드 밸런서에 인스턴스를 연결합니다. 
 그런 다음 트래픽이 백엔드 인스턴스로 라우팅되는지 확인할 것입니다. 
 제가 이 데모를 위해 이미 EC2 인스턴스를 시작했습니다. 
 EC2 인스턴스는 인터넷 게이트웨이가 연결된 VPC에 위치하며 퍼블릭 서브넷에 위치합니다. 
 그러므로 이 간단한 웹 애플리케이션이 EC2 인스턴스에서 실행되는지 확인하려면 간단히 인스턴스의 퍼블릭 IP 주소를 가져와 새 탭에 실행할 수 있습니다. 
 보다시피 여기에 인스턴스의 퍼블릭 IP 주소가 표시되고 이 인스턴스의 ID와 가용 영역이 표시됩니다. 
 이 인스턴스를 로드 밸런서 뒤에 배치하려면 EC2 콘솔의 탐색 창에서 아래로 Load Balancing까지 스크롤합니다. 
 여기서 Load Balancers를 클릭하면 Load Balancing 콘솔로 이동합니다. 
 Load Balancing 콘솔에서 Create Load Balancer를 클릭합니다. 
 이 데모에서는 Classic Load Balancer를 사용할 것이므로 이 로드 밸런서를 선택하고 Continue를 선택합니다. 
 이제 로드 밸런서 이름을 지정합니다. 
 로드 밸런서에 지정하는 이름이 로드 밸런서의 DNS 엔드포인트에 적용된다는 점을 기억하십시오. 
이 데모의 로드 밸런서는 테스트 목적의 Classic Load Balancer이므로 clb-test로 이름을 지정합니다. 
 Create ELB Inside는 로드 밸런서를 생성할 환경입니다. 
 그러므로 EC2 Classic을 사용하는 경우, EC2 Classic에서 로드 밸런서를 생성할 수 있습니다. 
 이 데모에서는 제가 이미 생성한 클래식 ELB Test VPC를 사용합니다. 
 리스너를 구성하기 위해 먼저 로드 밸런서가 트래픽을 수신할 위치를 선택한 다음 로드 밸런서가 트래픽을 전달할 인스턴스 포트를 선택합니다. 
 여기 이 로드 밸런서는 포트 80에서 수신 대기하고, 또 백엔드 인스턴스의 포트 80으로 트래픽을 전달합니다. 
 이제 VPC에서 ELB가 작동할 서브넷을 선택합니다. 
 이 목적으로 제가 PrivateSubnet 1을 생성했습니다. 
 이것을 로드 밸런서에 추가하겠습니다. 
 다음으로 로드 밸런서에 보안 그룹을 할당합니다. 
 로드 밸런서에 사용할 보안 그룹은 제가 이미 생성해 놓은 기존 보안 그룹입니다. 
 이 로드 밸런서에 퍼블릭 클래식 로드 밸런싱 테스트 보안 그룹을 사용합니다. 
 퍼블릭 ELB이기 때문입니다. 
 그런 다음 보안 설정을 구성합니다. 
 하지만 이 로드 밸런서에 SSL이 사용되지 않으므로 보안 설정은 사용하지 않을 것입니다. 
 계속해서 상태 확인을 구성합니다. 
 상태 확인은 로드 밸런서가 요청을 전송하여 인스턴스가 실행 중인지 또는 인스턴스를 열외시켜야 하는지 여부를 확인하는 것입니다. 
 저는 상태 확인을 단순한 ping 요청으로 하겠습니다. 
 대상은 실제로 index.html이 아니라 index.php가 됩니다. 
 주기는 상태 확인이 전송되는 빈도이며 Response Timeout은 상태 확인을 실패로 간주할 때까지 로드 밸런서가 대기하는 시간입니다. 
 이 테스트에서는 주기를 10초로 줄이겠습니다. 
Unhealthy threshold는 로드 밸런서가 인스턴스를 비정상으로 간주하는 데 필요한 상태 확인 연속 실패 횟수이고, Healthy threshold는 이전에 비정상이었던 인스턴스를 다시 정상으로 간주하는 데 필요한 연속 성공 횟수입니다. 
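참고로, 이런 상태 확인 구성과 인스턴스 등록은 boto3의 Classic Load Balancer용 elb 클라이언트로도 수행할 수 있습니다. 아래는 로드 밸런서 이름과 인스턴스 ID를 가정한 스케치입니다.

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer용 클라이언트

# 데모와 같은 상태 확인 구성: 대상 index.php, 주기 10초
elb.configure_health_check(
    LoadBalancerName="clb-test",        # 가정한 로드 밸런서 이름
    HealthCheck={
        "Target": "HTTP:80/index.php",  # ping 대상
        "Interval": 10,                 # 상태 확인 전송 주기(초)
        "Timeout": 5,                   # 실패로 간주할 때까지 대기 시간(초)
        "UnhealthyThreshold": 2,        # 비정상 판정에 필요한 연속 실패 횟수
        "HealthyThreshold": 10,         # 정상 복귀에 필요한 연속 성공 횟수
    },
)

# 미리 생성한 EC2 인스턴스를 로드 밸런서에 등록
elb.register_instances_with_load_balancer(
    LoadBalancerName="clb-test",
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],  # 가정한 인스턴스 ID
)
```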
 이제 로드 밸런서에 미리 생성해 놓은 EC2 인스턴스를 추가합니다. 
 현재 실행 중인 인스턴스가 하나뿐이므로 이 인스턴스를 선택하여 로드 밸런서에 연결하겠습니다. 
 Add Tags 단계는 간편한 분류를 위해 로드 밸런서에 태그를 지정하려는 경우에 사용합니다. 
 이 로드 밸런서를 키 Name으로 태그 지정하고 값은 CLB test로 하겠습니다. 
 그런 다음 Review와 Create를 클릭합니다. 
 그러면 이 로드 밸런서에 대한 모든 설정을 검토할 수 있습니다. 
 설정을 확인했으면 Create를 클릭할 수 있습니다. 
 성공적으로 생성된 로드 밸런서 화면이 보이면 화면을 닫습니다. 
 그러면 Load Balancing 콘솔로 이동합니다. 
 Load Balancing 콘솔에서 로드 밸런서에 대한 세부 정보를 볼 수 있습니다. 
 Descriptions 탭에서 로드 밸런서의 DNS 엔드포인트, 로드 밸런서의 서브넷 및 가용 영역, 생성된 로드 밸런서의 유형 등 기본 세부 정보를 확인할 수 있습니다. 
 이 데모에서는 인터넷 경계 로드 밸런서를 생성했습니다. 
 로드 밸런서에 연결된 인스턴스를 보려면 Instances 탭을 클릭하면 됩니다. 
 이 화면에서 현재 로드 밸런스에 연결된 모든 인스턴스가 표시되며 수동으로 로드 밸런서에 인스턴스를 추가하거나 제거할 수 있습니다. 
 현재 인스턴스가 사용할 수 없는 것으로 나타납니다. 
 이는 인스턴스가 정상으로 간주되기 위해 필요한 횟수만큼 상태 확인을 통과하지 않았기 때문입니다. 
 마우스를 Information 탭으로 가져가면 인스턴스 등록이 아직 진행 중인 것으로 나옵니다. 
 계속해서 상태 확인 세부 정보를 확인할 수 있고 로드 밸런서에서 설정한 리스너를 확인할 수 있습니다. 
 이 모든 설정을 로드 밸런서가 실행 중인 상태에서 편집할 수 있습니다. 
 로드 밸런서의 모니터링 지표를 보려면 Monitoring 탭으로 이동할 수 있습니다. 
 로드 밸런서가 그리 오래 실행된 것이 아니므로 아직 지표가 보이지 않습니다. 
 참고로 Amazon CloudWatch 지표의 기본 주기는 5분입니다. 
 Instance 탭으로 돌아가면 이제 인스턴스가 사용 상태로 표시되고 있습니다. 
  로드 밸런서의 DNS 이름을 복사하여 새 탭에 붙여 넣은 다음 여전히 인스턴스 세부 정보가 표시되는지 확인할 수 있습니다. 
 이제 인스턴스에 직접 액세스한 것이 아니라 로드 밸런스를 통해 인스턴스에 액세스한 것입니다. 
이 데모에서는 Classic Load Balancer를 시작했습니다. 
 그리고 로드 밸런서의 리스너와 상태 확인을 구성했습니다. 
 그런 다음 로드 밸런서에 인스턴스를 등록하고 Classic Load Balancer의 작동을 확인했습니다. 
 이 과정이 약간이라도 도움이 되었기를 바라며 계속해서 다른 동영상을 학습하시기 바랍니다. 
 AWS 교육 및 자격증의 Seph였습니다. 
 시청해 주셔서 감사합니다. 
 
 - Auto Scaling
 안녕하십니까? 저는 AWS 교육 및 자격증 팀의 Andy Cummings라고 합니다. 
 Auto Scaling 소개에 오신 것을 환영합니다. 
저는 AWS에 입사한 지 이제 1년 반 되었고 현재는 북미 지역 AWS 고객을 대상으로 한 라이브 교육 이벤트를 담당하고 있습니다. 
 이 동영상에서는 Auto Scaling을 소개합니다. 
 서비스 개요와 가능한 사용 사례를 살펴본 다음, 서비스를 시연하면서 실제로 작동하는 모습을 살펴보도록 하겠습니다. 
 그러면 Auto Scaling이란 무엇입니까? Auto Scaling은 애플리케이션의 로드를 처리할 수 있는 적절한 수의 Amazon EC2 인스턴스를 유지하도록 해줍니다. 
 Auto Scaling을 사용하면 향후 특정 시점에서 워크로드 요구 사항을 충족하기 위해 몇 개의 EC2 인스턴스가 필요할지 추측할 필요가 없어집니다. 
 EC2 인스턴스에서 애플리케이션을 실행할 때 Amazon CloudWatch를 사용하여 워크로드의 성능을 모니터링하는 것이 매우 중요합니다. 
 하지만 CloudWatch 자체는 EC2 인스턴스를 추가하거나 제거할 수 없습니다. 
 여기서 Auto Scaling이 등장합니다. 
 예제 워크로드를 살펴봅시다. 
 CloudWatch를 사용하여 1주일간의 EC2 리소스 요구 사항을 측정할 것입니다. 
 리소스 요구 사항은 요일마다 변동하여 수요일에 가장 많은 용량이 필요하고 토요일에 가장 적은 용량이 필요합니다. 
 수요가 가장 많은 시기(이 경우에는 수요일)를 항상 충족하기 위해 충분 이상의 EC2 용량을 할당하는 전략을 취할 수 있습니다. 
 하지만 이는 일주일 중 대부분 활용되지 않는 리소스를 운영한다는 의미입니다. 
 이것은 하나의 선택지이지만 비용은 최적화되지 않습니다. 
 이와는 다르게, 더 적은 수의 EC2 인스턴스를 할당하여 비용을 줄일 수 있습니다. 
 이는 특정 요일에 용량 부족이 발생한다는 것을 의미합니다. 
 그리고 용량 문제를 해결하지 않는다면 애플리케이션 성능이 저하되거나 심지어 사용자에게 시간 초과가 발생할 수도 있습니다. 
 분명히 좋은 일은 아닙니다. 
 Auto Scaling을 사용하면 사용자가 지정하는 조건에 따라 EC2 인스턴스를 추가 또는 제거할 수 있습니다. 
 Auto Scaling은 성능 요구 사항이 유동적인 환경에서 특히 강력합니다. 
 이를 통해 성능을 유지하고 비용을 최소화할 수 있습니다. 
 실제로 Auto Scaling은 중요한 질문 두 가지에 답을 내놓습니다. 
 1) 어떻게 워크로드가 변동하는 성능 요구 사항을 충족하는 데 충분한 EC2 리소스를 확보할 수 있는가? 2) 어떻게 EC2 리소스 프로비저닝이 필요에 따라 이루어지도록 자동화할 수 있는가? Auto Scaling은 환경을 확장 가능하게 만들고 최대한 자동화한다는  두 가지의 AWS 모범 사례를 충족합니다. 
 서비스를 좀 더 자세히 살펴보겠습니다. 
 그러면 조정이란 정확히 어떤 의미입니까? 우리는 먼저 확장 및 축소의 개념을 정의해야 합니다. 
 Auto Scaling은 사용자가 정의하는 조건(예: CPU 사용률 80% 초과)에 따라 또는 일정에 따라 워크로드에서 실행되는 EC2 인스턴스 수를 자동으로 조정할 수 있습니다. 
 Auto Scaling이 인스턴스를 추가할 경우 이를 확장이라고 합니다. 
Auto Scaling이 인스턴스를 종료할 경우 이를 축소라고 합니다. 
 사용자가 이러한 이벤트의 시작을 제어한다는 점을 기억하십시오. 
 그렇다면 어떻게 자동으로 조정됩니까? 자동 조정에는 세 가지 구성 요소가 필요합니다. 
 첫째, 시작 구성을 생성합니다. 
 둘째, Auto Scaling 그룹을 생성합니다. 
 그리고 마지막으로 Auto Scaling 정책을 하나 이상 정의합니다. 
 그럼 각 구성 요소의 역할을 보다 자세히 살펴보겠습니다. 
 시작 구성이란 무엇입니까? 이것은 Auto Scaling이 시작할 인스턴스를 정의합니다. 
사용할 Amazon 머신 이미지, 인스턴스 유형, 인스턴스에 적용할 보안 그룹 또는 역할 등 콘솔에서 EC2 인스턴스를 시작할 때 지정해야 할 모든 것을 생각하면 될 것입니다. 
 Auto Scaling 그룹이란 무엇입니까? 이것은 배포가 이루어지는 위치와 배포에 대한 제한을 정의하는 것입니다. 
 여기서 어느 VPC가 인스턴스를 배포할지, 어느 로드 밸런서에서 상호 작용할지를 정의합니다. 
 또한 그룹에 대한 제한도 지정합니다. 
 최소 개수를 2로 설정할 경우 서버가 2개 미만으로 감소할 경우 다른 인스턴스가 시작되어 이를 대체합니다. 
 최대 개수를 8로 설정할 경우 그룹 내 인스턴스 수가 절대로 8개를 넘지 않습니다. 
 희망 용량은 처음에 시작할 인스턴스 수입니다. 
 Auto Scaling 정책이란 무엇입니까? 이것은 언제 EC2 인스턴스를 시작 또는 종료할지를 지정하는 것입니다. 
예를 들어 Auto Scaling을 매주 수요일 오후 3시 정각에 실행되도록 예약하거나, 인스턴스를 추가 또는 제거할 임계값을 정의하는 조건을 생성할 수 있습니다. 
 조건 기반 정책은 Auto Scaling을 동적으로 만들어 유동적인 요구 사항을 충족할 수 있습니다. 
 확장 및 축소 각각 하나 이상의 Auto Scaling 정책을 생성하는 것이 모범 사례입니다. 
 동적 Auto Scaling은 어떻게 작용할까요? 일반적인 구성 한 가지는 EC2 인스턴스 또는 로드 밸런서로부터의 성능 정보를 기반으로 CloudWatch 경보를 생성하는 것입니다. 
 성능 임계값이 위반되면 CloudWatch 경보가 환경 내 EC2 인스턴스를 확장 또는 축소하는 Auto Scaling 이벤트를 트리거합니다. 
 CloudWatch 경보 예제를 살펴보겠습니다. 
 경보의 첫 번째 부분은 임계값을 포함한 조건입니다. 
 이 경우, CPU 사용률 80% 초과입니다. 
 기간을 지정할 수도 있습니다. 
 예를 들어 CPU 사용률이 5분 연속 80%를 상회할 경우 경보가 트리거되도록 지정할 수 있습니다. 
 기간은 중요합니다. 
 프로세서 사용률이 30초 동안 급증했다고 Auto Scaling이 새 인스턴스를 추가할 필요는 없을 테니까요. 
 경보의 두 번째 부분은 경보가 트리거된 후 수행할 조치입니다. 
 Auto Scaling에서는 조치가 인스턴스를 추가 또는 제거하는 것입니다. 
 그러므로 이 경우, CPU가 1회의 기간(기본적으로 5분) 동안 80%를 초과하면 Auto Scaling이 Auto Scaling 그룹에 새 인스턴스 2개를 추가합니다. 
 더 많은 인스턴스를 추가할수록 CPU 사용률은 감소할 것입니다. 
 언제 Auto Scaling 그룹에서 인스턴스를 종료할지 정의하기 위해 다른 CloudWatch 경보를 설정해야 합니다. 
 예를 들어 CPU 사용률이 5분 연속으로 20%를 하회할 경우 인스턴스 하나를 종료합니다. 
 이 모든 것의 장점은 Auto Scaling이 동적으로 워크로드를 관리하므로 사용자는 다른 문제에 집중할 수 있다는 것입니다. 
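앞서 설명한 세 가지 구성 요소를 boto3로 표현하면 다음과 같은 스케치가 됩니다. AMI ID와 서브넷 ID는 가정한 값이고, 조정 정책은 뒤의 데모처럼 평균 CPU 60%를 목표로 하는 목표 추적 정책을 사용했습니다.

```python
import boto3

asg = boto3.client("autoscaling")

# 1) 시작 구성: 무엇을 배포할 것인가
asg.create_launch_configuration(
    LaunchConfigurationName="Linux-M4",
    ImageId="ami-0123456789abcdef0",   # 가정한 Amazon Linux AMI
    InstanceType="m4.large",
)

# 2) Auto Scaling 그룹: 어디에, 어떤 제한으로 배포할 것인가
asg.create_auto_scaling_group(
    AutoScalingGroupName="Sales-App",
    LaunchConfigurationName="Linux-M4",
    MinSize=2, MaxSize=8, DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # 두 가용 영역의 서브넷(가정)
)

# 3) 조정 정책: 언제 조정할 것인가 — 평균 CPU 60%를 목표로 추적
asg.put_scaling_policy(
    AutoScalingGroupName="Sales-App",
    PolicyName="cpu-target-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)
```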
 이제 간략한 데모를 통해 Auto Scaling이 어떻게 작동하는지 직접 보도록 하겠습니다. 
 기본 시작, Auto Scaling 그룹, Auto Scaling 정책을 생성한 다음, Auto Scaling을 트리거하여 어떻게 작동하는지 보는 것으로 마무리하겠습니다. 
 먼저 EC2 서비스를 개설합니다. 
 세 가지 구성 요소를 기억하시죠? 이제 시작 구성, Auto Scaling 그룹, 하나 이상의 Auto Scaling 정책을 빌드해야 합니다. 
 왼쪽 창에서 Auto Scaling 섹션으로 스크롤하여 Auto Scaling Groups를 선택합니다. 
 Create Auto Scaling group을 클릭합니다. 
 그런 다음 시작 구성을 생성하도록 선택합니다. 
 이미 EC2 인스턴스를 시작했다면 무엇을 선택해야 할지 알 것입니다. 
Amazon Linux AMI를 선택한 다음, M4 대형(large) 인스턴스 유형을 선택합니다. 
 이제 시작 구성에 이름을 지정합니다. 
 Linux M4로 명명하겠습니다. 
 스토리지 및 보안 그룹은 기본 설정을 그대로 사용합니다. 
 구성을 검토한 후 Launch, Launch Configuration을 차례로 클릭합니다. 
 이제 기존 키 페어를 선택하고 시작 구성을 생성합니다. 
 그러면 Auto Scaling 그룹의 속성으로 바로 이동합니다. 
 여기에 이름을 지정합니다. 
 Sales App으로 명명하겠습니다. 
 방금 빌드한 시작 구성이 사용되는 것이 보일 것입니다. 
 인스턴스 2개부터 시작하도록 지정한 다음 실제로 인스턴스를 배포할 VPC와 서브넷을 지정합니다. 
 그런 다음 2가지 조정 정책을 구성합니다. 
 이 그룹의 용량을 조정하는 조정 정책을 사용하도록 선택하고 인스턴스를 2개에서 8개 사이로 조정하도록 설정합니다. 
 여기 이것이 최대값과 최소값이 됩니다. 
 또한 지표의 목표 값을 설정할 수 있는 간단한 목표 추적 정책을 사용합니다. 
 여기서는 평균 CPU 활용률 60%를 지정하겠습니다. 
 그러면 목표 추적 정책이 목표 값을 충족하기 위해 자동으로 인스턴스를 시작 또는 종료합니다. 
 확장 및 축소를 위한 개별 정책을 생성할 수도 있지만, 목표 추적이 Auto Scaling 정책을 시작하는 가장 간단한 방법입니다. 
 어디에 알림과 태그를 추가할 수 있는지 검토하고 Auto Scaling 그룹을 생성하도록 선택합니다. 
 이제 Auto Scaling 그룹을 확인합니다. 
 한 번에 모두 보이도록 약간 좁히겠습니다. 
 최소 인스턴스 개수가 2, 최대 인스턴스 개수가 8로 설정된 것이 보일 것입니다. 
 Instances 탭으로 이동하면 두 개의 인스턴스가 현재 보류 상태인 것을 알 수 있습니다. 
 이들은 신규 인스턴스이며, 이들이 존재하는 이유는 이전에 없었기 때문입니다. 
 앞서 최소 개수를 2로 설정했으므로 Auto Scaling이 자동으로 두 개의 인스턴스를 여기에 시작한 것입니다. 
 이제 Auto Scaling을 즉시 트리거하기 위해 수동으로 최소 그룹 크기를 늘리겠습니다. 
 Details 탭을 클릭하고 Edit를 선택하여 최소 인스턴스 개수와 원하는 구성을 변경합니다. 
 이제 4를 설정하겠습니다. 
 이제 최소 인스턴스 개수는 2가 아니라 4가 되어야 합니다. 
 이미 두 개의 인스턴스가 시작되었으므로 이제 추가로 두 개가 시작되는 것이 보일 것입니다. 
 Instances 탭으로 돌아가 내용을 살펴보겠습니다. 
 보시다시피 시작 구성에 따라 자동으로 두 개의 인스턴스가 추가로 시작되었습니다. 
 이제 학습한 내용을 요약해 보겠습니다. 
 Auto Scaling을 사용하면 사용자가 지정하는 조건에 따라 EC2 인스턴스를 추가 또는 제거할 수 있습니다. 
 Auto Scaling은 성능 요구 사항이 유동적인 환경에서 특히 강력합니다. 
 이를 통해 성능을 유지하고 비용을 최소화할 수 있습니다. 
 무엇보다 이 프로세스는 사용자가 자고 있는 자정에 EC2 인스턴스를 축소 또는 확장할 수 있습니다. 
 필요한 세 가지 핵심 구성 요소는 시작 구성(무엇을 배포할 것인가), Auto Scaling 그룹(어디에 배포할 것인가), Auto Scaling 정책(언제 배포할 것인가)입니다. 
 여러분이 배운 모든 AWS 서비스는 또 다른 솔루션 빌드 도구임을 명심하십시오. 
 여러분이 활용할 수 있는 도구가 많아질수록 여러분의 역량도 강해집니다. 
 시청해 주셔서 감사합니다. 
 

 - Amazon Elastic Block Store (EBS)
 Amazon Elastic Block Store(EBS) 소개 동영상에 오신 것을 환영합니다. 
 저는 AWS 교육 및 자격증 팀의 Rafael Lopes입니다. 
 팀의 일원으로 저는 이러한 전용 교육 콘텐츠를 개발하고 제공하는 일을 담당해 왔습니다. 
 이 간략한 동영상에서는 시연을 통해 Amazon EBS 서비스를 소개할 것입니다. 
 그럼 시작하겠습니다. 
 EBS 볼륨은 Amazon EC2 인스턴스의 저장 단위로 사용할 수 있습니다. 
 따라서 AWS에서 실행되는 인스턴스에 디스크 공간이 필요하다고 생각되면 언제나 EBS 볼륨 사용을 고려할 수 있습니다. 
 EBS 볼륨은 하드 디스크나 SSD 디바이스일 수 있으며 사용한 만큼 지불하면 되기 때문에, 볼륨이 더 이상 필요하지 않을 경우 삭제하여 결제를 중지할 수 있습니다. 
 EBS 볼륨은 내구성과 가용성을 위주로 설계됩니다. 
 이는 볼륨에 있는 데이터가 가용 영역(AZ)에서 실행되는 복수의 서버에 걸쳐 자동으로 복제됨을 의미합니다. 
 EBS 볼륨과 하드 디스크 또는 SSD와 같은 물리적 미디어 디바이스를 비교했는데, 블록 수준 복제 때문에 실제로는 EBS 볼륨의 내구성이 훨씬 더 뛰어납니다. 
 EBS 볼륨을 생성할 때 필요에 가장 적합한 스토리지 유형을 선택할 수 있습니다. 
 성능 및 비용 요건에 따라 하드 디스크와 SSD 간에 선택할 수 있습니다. 
 이 모든 것은 적합한 작업에 적합한 도구를 선택하는 문제에 관한 것입니다. 
 예를 들어 데이터베이스 인스턴스를 실행하는 경우 데이터의 이차 볼륨을 사용하도록 데이터베이스를 구성할 수 있습니다. 
 이 경우 운영 체제에 할당된 볼륨보다 더 빠른 성능을 발휘할 수 있습니다. 
 혹은 로그에 대해서는 비용이 더 저렴한 마그네틱 볼륨을 할당할 수 있습니다. 
 Amazon EBS를 사용하면 볼륨의 시점별 스냅샷을 생성하여 한층 더 높은 수준의 데이터 내구성을 구현할 수 있으며, AWS를 통해 어느 때고 스냅샷으로부터 새로운 볼륨을 다시 생성할 수 있습니다. 
 스냅샷을 공유하거나 다른 AWS 리전에 복사할 수 있으므로 재해 복구(DR) 성능이 한층 더 높아집니다. 
 예를 들어 스냅샷을 암호화하여 버지니아에서 도쿄까지 공유할 수 있습니다. 
 또한 추가 비용 없이 EBS 볼륨을 암호화할 수도 있습니다. 
 EC2 측에서 암호화가 이루어지기 때문에 EC2 인스턴스와 AWS 데이터 센터 내부의 EBS 볼륨 간에 이동하는 데이터가 전송 중에 암호화됩니다. 
 회사가 성장함에 따라 EBS에 저장되는 데이터의 양도 증가할 가능성이 높을 것입니다. 
 EBS 볼륨은 용량 증가와 여러 유형 간의 변환이 가능하기 때문에, 하드 디스크에서 SSD로 변경하거나 용량을 50기가바이트에서 16테라바이트로 증설할 수 있습니다. 
 예를 들어 인스턴스를 중단할 필요 없이 바로 운영 규모를 조정할 수 있습니다. 
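시연에서 수행할 볼륨 생성·연결·태그 지정을 boto3로 표현하면 다음과 같습니다. 가용 영역과 인스턴스 ID는 가정한 값입니다.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 볼륨은 인스턴스와 같은 가용 영역에 생성해야 합니다 (여기서는 us-east-1b 가정).
vol = ec2.create_volume(AvailabilityZone="us-east-1b",
                        VolumeType="gp2", Size=25)  # 범용 SSD, 25GiB
vol_id = vol["VolumeId"]

# 볼륨이 available 상태가 될 때까지 대기한 후 인스턴스에 연결
ec2.get_waiter("volume_available").wait(VolumeIds=[vol_id])
ec2.attach_volume(VolumeId=vol_id,
                  InstanceId="i-0123456789abcdef0",  # 가정한 인스턴스 ID
                  Device="/dev/sdb")

# 용도 표시를 위한 태그 지정 (태그별 비용 분석에도 활용 가능)
ec2.create_tags(Resources=[vol_id],
                Tags=[{"Key": "Name", "Value": "database volume"}])
```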
 자, 그럼 시연을 통해 새 볼륨을 생성하여 EC2 인스턴스에 연결하는 것이 얼마나 빠르고 쉬운지 보여 드리겠습니다. 
 AWS Management Console의 EC2 콘솔에서 EC2 인스턴스와 EBS 볼륨을 확인할 수 있는데, Compute 탭에서 EC2의 여기를 클릭하여 찾을 수 있습니다. 
 인스턴스의 여기를 클릭하면 많은 인스턴스가 실행되고 있음을 확인할 수 있습니다. 
 볼륨은 Elastic Block Store(EBS) 볼륨 아래에 있는 Volumes의 사이드바에 위치합니다. 
 이들 볼륨이 나의 계정에 있는 볼륨입니다. 
 새 볼륨을 생성하거나 새 볼륨을 인스턴스에 연결하려면(이 경우 저는 Linux 인스턴스에 연결할 것입니다), EBS 볼륨을 인스턴스가 상주하는 곳과 동일한 가용 영역에 생성해야 합니다. 
 따라서 볼륨을 생성할 때 이 인스턴스가 US East One B에 있을 경우 US East One B에도 볼륨을 생성할 필요가 있습니다. 
 그러면 그렇게 해보겠습니다. 
 여기 Volumes에서 Create Volume을 클릭합니다. 
 여기에서 첫 번째로 지정할 것은 US East One B 가용 영역입니다. 
 이 EBS 볼륨을 US East One B에서 실행되는 인스턴스에 연결할 것이기 때문입니다. 
 이제 하드 디스크 또는 SSD와 같은 볼륨 유형을 지정할 수 있습니다. 
 구축하고자 하는 범용 SSD는 기가바이트 단위로만 요금이 부과됩니다. 
크기가 25기가바이트인 볼륨을 생성하고자 하므로 여기에서 25기가바이트를 지정합니다. 
 이것이 스냅샷을 볼륨에 복원하는 방법인데, 이 경우에는 그렇게 하지 않을 것입니다. 
 그런 다음 Create Volume을 클릭합니다. 
 이는 제가 생성했던 볼륨 ID입니다. 
 Close를 클릭하면 이들 볼륨을 생성일, 볼륨 유형 및 크기별로 분류할 수 있는 옵션이 나타납니다. 
 이 볼륨이 좀 전에 생성한 볼륨이고, 25기가바이트이고, 볼륨 유형이 GP2이고, SSD라는 것을 확인할 수 있습니다. 
 이제 볼륨이 생성되었으니, 생성된 볼륨을 EC2 인스턴스에 연결하겠습니다. 
 Actions에서 여기를 클릭하여 볼륨을 연결한 다음 볼륨에 연결하고자 하는 인스턴스를 지정합니다. 
 이 경우는 Linux 인스턴스입니다. 
그리고 디바이스 이름은 /dev/sdb로 지정한 다음 연결합니다. 
이제 인스턴스 내부를 살펴보겠습니다. 
 Instances에서 여기를 클릭하고, Linux를 선택하고, Connect에서 여기를 클릭하고, SSH 명령을 복사함으로써 이를 수행할 수 있습니다. 
 Linux 인스턴스이고 MacOS를 사용하기 때문입니다. 
 여기에서 내 터미널로 돌아가 SSH 명령을 실행할 수 있습니다. 
 그래서 SSH 명령을 복사하여 내 터미널에 붙여넣습니다. 
 이제 내 EC2 인스턴스에 연결되었습니다. 
 lsblk 명령을 실행하면 이 인스턴스에 연결한 블록 스토리지 디바이스를 확인할 수 있습니다. 
여기에서 /dev/xvdb 볼륨이 방금 /dev/sdb로 연결한 것과 동일한 25기가바이트 디스크라는 것을 명확히 알 수 있습니다. 
이제 연결된 이 EBS 볼륨에 파일 시스템을 생성하기 위해 mkfs /dev/xvdb 명령을 실행할 수 있습니다. 
 루트로서 실행되어야 합니다. 
 그러면 Linux 운영 체제가 이제 이 볼륨에 파일 시스템을 생성하게 됩니다. 
 LSBLK를 다시 실행하면 아무런 변화도 일어나지 않지만, 이제 해당 볼륨을 내 Linux 시스템에 있는 폴더에 탑재할 수 있습니다. 
 만약 Windows 시스템이었다면 디스크 관리자로 가서 파일 시스템을 생성한 다음에야 거기에서 탑재할 수 있을 것입니다. 
 Linux 시스템에서의 탑재 방법은 다음과 같습니다. 
 mount 명령을 실행합니다. 
mount 명령의 인수는 디바이스(/dev/xvdb)와 해당 볼륨을 탑재하고자 하는 폴더입니다. 
 루트만 이를 수행할 수 있기 때문에, 루트 허가로 이를 수행합니다. 
 이제 볼륨이 /mnt 폴더에 탑재됩니다. 
 /mnt 폴더에 우리 파일 시스템이 있습니다. 
 따라서 파일, 디렉터리, 심볼 링크 그리고 스토리지 블록 디바이스로 가능한 모든 것을 생성할 수 있습니다. 
 이는 텍스트 파일입니다. 
 LS 명령을 실행하면 이제 그곳에서 내 파일을 확인할 수 있습니다. 
 디렉터리를 생성할 수 있습니다. 
 파일을 해당 디렉터리로 옮길 수 있습니다. 
 LS를 실행하면 폴더가 생성됩니다. 
 그 폴더에 들어가면 내 파일이 안에 있습니다. 
 EBS 볼륨을 생성하여 EC2 인스턴스에 연결하고 형식을 지정하는 것이 얼마나 쉬운지 알 수 있을 것입니다. 
 언제든지 여기로 돌아와 mount 명령을 사용하여 볼륨을 폴더에 탑재한 다음 AWS Management Console로 다시 돌아가 Volumes를 클릭한 후 내 볼륨을 선택하고 내 인스턴스에서 이 볼륨을 분리할 수 있습니다. 
 볼륨이 분리된 경우 가용 상태를 유지할 것입니다. 
 이 볼륨이 지금 사용 중인 것을 알 수 있는데, 실제로 내 인스턴스에서 사용하고 있기 때문입니다. 
 이 볼륨이 가용하기 때문에, 해당 볼륨을 분리하고 동일한 가용 영역에 있는 또 다른 EC2 인스턴스에 연결할 수 있습니다. 
 이 경우는 US East One B입니다. 
 이 볼륨에 태그를 지정할 수도 있습니다. 
 이 볼륨이 데이터베이스에 의해 사용되고 있는 경우 “database volume”이라는 태그 값을 지정하면 됩니다. 
 이제 이 볼륨은 데이터베이스 볼륨입니다. 
 AWS 리소스에 태그를 지정할 때마다 태그당 과금을 분석하여 EC2 인스턴스, EBS 스냅샷 그리고 태그를 지원하는 모든 것의 경우와 동일한 방법으로 특정 기간 내에서 해당 태그 키 이름 및 태그 값 “database volume”을 지닌 볼륨 전체의 비용이 얼마인지 확인할 수 있기 때문에 태그는 매우 중요합니다. 
 아주 간단합니다. 
 요약하자면, EBS 볼륨이 무엇인지 살펴보았고 EBS 볼륨 하나를 생성하여 Linux EC2 인스턴스에 연결하는 방법을 시연을 통해 알아보았습니다. 
 여러분이 조금이나마 배웠고 앞으로도 동영상 강좌를 계속 탐구하시기를 바랍니다. 
 AWS 교육 및 자격증 팀의 Rafael Lopes였습니다. 
 시청해 주셔서 감사합니다. 
 
 - Amazon Simple Storage Service (S3)
 Amazon Simple Storage Service(Amazon S3) 동영상 강좌에 오신 것을 환영합니다. 
 저는 Heiwad Osman이라고 하며 AWS 기술 강사입니다. 
 Amazon S3를 소개하고, 일반 사용 사례를 다루어 보고, 시연을 통해 S3의 실제 작동 모습을 살펴볼 예정입니다. 
 그럼 시작하겠습니다. 
 Amazon S3는 데이터 저장 및 검색을 위한 간단한 API를 제공해 주는 완전관리형 스토리지 서비스입니다. 
 이는 S3에 저장하는 데이터는 임의의 특정 서버와 연계되어 있지 않기 때문에 고객이 직접 인프라를 관리할 필요가 없다는 의미입니다. 
 원하는 만큼 많은 객체를 S3에 저장할 수 있습니다. 
 S3는 수조 개의 객체를 저장하며 정기적으로 최대 초당 수백만 건의 요청을 처리합니다. 
 객체는 이미지, 동영상, 서버 로그 등 거의 모든 유형의 데이터 파일이 될 수 있습니다. 
 S3가 크기가 수 테라바이트인 객체까지 지원하기 때문에 데이터베이스 스냅샷도 객체처럼 저장할 수 있습니다. 
 또한 Amazon S3는 인터넷(HTTP 또는 HTTPS)을 통한 데이터 액세스 지연 시간이 짧기 때문에 언제 어디서든 데이터를 검색할 수 있습니다. 
 가상 사설 클라우드 엔드포인트를 통해 S3에 비공개적으로 액세스할 수 있습니다. 
 ID 및 액세스 관리 정책, S3 버킷 정책, 객체별 액세스 제어 목록을 사용하여 데이터 액세스 가능자를 정밀하게 관리할 수 있습니다. 
 기본적으로 데이터는 공개적으로 공유되지 않습니다. 
 데이터를 전송 중에 암호화하고 객체에 대한 서버 측 암호화를 활성화할 수도 있습니다. 
 저장하고자 하는 파일을 선택하겠습니다. 
 이 소개 동영상으로 해보겠습니다. 
 먼저 파일을 저장할 곳이 필요합니다. 
 S3에서는 데이터를 저장할 버킷을 생성할 수 있습니다. 
 이 동영상을 버킷에 객체로 저장하고자 하는 경우 나중에 객체를 검색할 때 사용할 수 있는 문자열인 키를 지정해야 합니다. 
 일반적으로 파일 경로와 비슷한 방식으로 이들 문자열을 설정합니다. 
 우리가 선택한 동영상을 해당 키를 사용하여 S3에 객체로 저장하겠습니다. 
 버킷을 S3에 생성할 때 특정 AWS 리전과 연계됩니다. 
 버킷에 데이터를 저장할 때마다 선택한 리전 내에 있는 복수의 AWS 시설에 중복 저장됩니다. 
 S3 서비스는 두 AWS 시설에 있는 데이터가 동시에 훼손되는 경우에도 데이터가 안전하게 저장되도록 설계되어 있습니다. 
 S3는 데이터가 증가하는 경우에도 여러분의 버킷을 벗어나는 스토리지까지 자동으로 관리합니다. 
 이러한 기능 덕분에 현재 상황에 맞춰 시작하고 애플리케이션 수요에 따라 데이터 스토리지를 증설할 수 있습니다. 
 또한 S3는 확장/축소가 가능하기 때문에 대용량의 볼륨 요청도 처리할 수 있습니다. 
 스토리지나 처리량을 직접 프로비저닝할 필요 없이 사용한 만큼만 요금을 지불하면 됩니다. 
 관리 콘솔, AWS CLI 또는 AWS SDK를 통해 S3에 액세스할 수 있습니다. 
 REST 엔드포인트를 통해 버킷에서 직접 데이터에 액세스할 수도 있습니다. 
 HTTP 또는 HTTPS 액세스를 지원합니다. 
 선택된 리전과 객체를 저장할 때 사용한 키에 대해 버킷의 S3 엔드포인트로부터 구축한 객체의 URL 예를 여기에서 확인할 수 있습니다. 
 이와 같은 유형의 URL 기반 액세스를 지원하기 위해서는 S3 버킷 이름이 전 세계적으로 고유해야 하며 DNS를 준수해야 합니다. 
 또한 객체 키가 URL에 대해 안전한 문자를 사용해야 합니다. 
 사실상 데이터를 무제한 저장하고 어디서든 데이터에 액세스할 수 있는 이러한 유연성 덕분에 S3 서비스는 다양한 시나리오에 적합합니다. 
 S3의 몇 가지 사용 사례를 살펴보겠습니다. 
S3 버킷은 EC2나 전통적인 서버에서 실행되는 애플리케이션을 포함하여 임의의 애플리케이션 인스턴스가 액세스할 수 있는, 애플리케이션 데이터를 저장하기 위한 공유 장소를 제공합니다. 
 이는 애플리케이션이 공통 위치에 저장해야 하는 사용자 생성 미디어 파일, 서버 로그 또는 기타 파일에 유용할 수 있습니다. 
 또한 콘텐츠를 웹을 통해 직접 가져올 수 있기 때문에 해당 콘텐츠를 애플리케이션으로부터 오프로드하고 고객이 직접 S3로부터 데이터를 가져오도록 할 수 있습니다. 
 정적 웹 호스팅의 경우 S3 버킷이 HTML, CSS, 자바스크립트 및 기타 파일을 포함하여 웹 사이트의 정적 콘텐츠를 제공할 수 있습니다. 
 높은 내구성 덕분에 S3는 데이터 백업을 저장하기에 좋은 대안입니다. 
 한 리전의 S3 버킷에 저장되는 데이터가 또 다른 S3 리전에 자동으로 복제될 수 있도록 리전 간 교차 복제가 가능하게 S3를 구성하여 가용성 및 재해 복구 성능을 더욱 높일 수 있습니다. 
 S3는 스토리지와 성능이 조정 가능하기 때문에 다양한 빅 데이터 도구를 사용하여 분석하고자 하는 데이터의 스테이징 또는 장기 저장에 적합합니다. 
 예를 들어 S3에 있는 데이터 스테이지를 Redshift에 로드하거나, EMR에서 처리하거나, 심지어 Amazon Athena와 같은 도구를 사용하여 그 자리에서 쿼리할 수도 있습니다. 
 또한 Snowball과 같은 AWS Import/Export 디바이스를 사용하여 대용량의 데이터를 S3로 가져오거나 S3에서 내보낼 수도 있습니다. 
 S3로 데이터를 간단하게 저장하고 액세스할 수 있기 때문에 앞으로 AWS 서비스와 함께 그리고 애플리케이션의 다른 부분에 자주 사용하게 될 것입니다. 
 S3의 기능과 일반 사용 사례를 살펴봤으니, 이제 AWS에 애플리케이션을 빌드할 때 S3를 효과적으로 사용하는 방법을 찾을 수 있을 것입니다. 
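시연에 앞서, 버킷 생성과 객체 업로드·나열·다운로드를 boto3로 표현하면 다음과 같습니다. 버킷 이름은 가정한 값이며, 실제로는 전 세계적으로 고유해야 하므로 그대로 사용할 수 없을 수 있습니다.

```python
import boto3

s3 = boto3.client("s3", region_name="us-west-2")

# 버킷 생성: 이름은 전 세계적으로 고유하고 DNS를 준수해야 합니다.
s3.create_bucket(
    Bucket="amazing-bucket-1",  # 가정한 버킷 이름
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)

# 객체 업로드: 로컬 파일 demo.txt를 hello.txt 키로 저장
s3.upload_file("demo.txt", "amazing-bucket-1", "hello.txt")

# 버킷의 객체 나열
for obj in s3.list_objects_v2(Bucket="amazing-bucket-1").get("Contents", []):
    print(obj["Key"], obj["Size"])

# 객체 다운로드
s3.download_file("amazing-bucket-1", "hello.txt", "hello.txt")
```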
 이제 실제로 S3를 시연해 보겠습니다. 
 지금 우리는 AWS Management Console의 Amazon S3 섹션에 있으며, 여러 버킷의 목록을 확인할 수 있습니다. 
 이 섹션에서는 계속 진행하여 새 버킷을 생성한 다음 몇몇 데이터를 추가하고 추가한 데이터를 검색할 예정입니다. 
 그럼 계속 진행하여 Create Bucket을 클릭합니다. 
 여기 버킷 이름과 리전을 설정하라는 메시지가 나타납니다. 
 버킷 이름은 DNS를 준수해야 합니다. 
 이제 Amazing Bucket 1의 이름을 정한 다음 리전을 설정하겠습니다. 
제 경우 이 데이터에 액세스해야 할 애플리케이션이 Oregon 리전에 있는 EC2 인스턴스에서 실행됩니다. 
 따라서 리전을 US West Oregon으로 설정할 것입니다. 
 이 시점에서 버킷 생성에 필요한 모든 결정을 한 셈입니다. 
 이 마법사의 다른 단계에서는 버킷의 버전을 관리하고 기본 권한을 변경하여 이 버킷에 대한 액세스 권한을 공개 인터넷 사용자나 특정 AWS 사용자에게 부여하는 작업을 해 보겠습니다. 
 이 경우는 기본 설정을 사용할 것이기 때문에 계속 진행하여 Create를 클릭합니다. 
 이제 버킷이 생성되었음을 확인할 수 있습니다. 
버킷 이름은 Amazing Bucket 1입니다. 
계속하여 해당 버킷을 클릭합니다. 
 버킷이 비어 있다는 메시지가 나타나면, 새 객체를 업로드할 수 있습니다. 
 이 버킷에 대한 속성과 권한이 무엇인지 알 수 있지만, 계속하여 Upload를 클릭하겠습니다. 
 저는 관리 콘솔에서 파일을 끌어서 놓고 파일에 대한 권한을 수정할 수 있음을 알고 있지만, AWS CLI를 사용하여 데이터를 업로드하겠습니다. 
 여기에서는 터미널 윈도우를 열고 이 터미널 윈도우에서 데이터를 확인할 수 있습니다. 
 현재 Assets라고 하는 폴더에 있으며 demo.txt라는 이름의 파일이 안에 있습니다. 
 이 파일을 간략하게 살펴보면 텍스트 파일임을 알 수 있습니다. 
 이제 나중에 내 EC2 인스턴스에서 액세스할 수 있도록 이 파일을 내 S3 버킷에 복사할 것입니다. 
 계속하여 S3 복사 명령을 사용하여 demo.txt를 Amazing Bucket 1에 상주하는 hello.txt 키 아래에 있는 객체에 복사할 것입니다. 
 이로써 데이터를 업로드했습니다. 
 폴더에 있는 콘텐츠를 내 로컬 시스템에 가져오고, 동기화 명령을 사용하여 동기화할 수도 있습니다. 
 그러면 CLI가 파일 각각을 처리하고 버킷에 존재하는지 여부를 확인하고 존재하지 않는 경우 계속하여 업로드할 것입니다. 
이제 code.zip과 random.csv도 내 버킷에 업로드했습니다. 
 계속하여 SSH를 EC2 인스턴스에 사용하는 경우 내 계정에 있는 임의의 S3 버킷을 읽을 수 있는 액세스 권한을 부여하는 IAM 역할과 함께 이 인스턴스가 프로비저닝되었음을 확인할 수 있습니다. 
 그럼 계속하여 EC2 인스턴스로부터 어떤 콘텐츠가 S3 Amazing Bucket 1에 있는지 확인합니다. 
 계속하여 S3 Amazing Bucket 1에서 AWS S3 ls를 수행합니다. 
 반복하도록 설정하여 모든 경로를 확인할 것입니다. 
 이러한 파일이 3개 있음을 알 수 있습니다. 
 앞에서와 같이 복사 명령을 사용할 수 있지만, 지금은 먼저 버킷 이름을 지정함으로써 역순으로 수행합니다. 
 앞서 제 버킷에서 hello.txt를 복사했습니다. 
 계속하여 로컬 EC2 인스턴스 스토리지에서 ls를 수행합니다. 
 hello.txt를 확인할 수 있습니다. 
 cat를 실행하면 파일을 가져올 수 있으며, 다운로드한 텍스트 파일이 많다는 것을 알 수 있습니다. 
 동기화 명령을 역순으로 실행할 수도 있습니다. 
 이제 amazing-bucket-1/files의 내용을 내 EC2 인스턴스에 있는 로컬 폴더로 동기화할 수 있습니다. 
 폴더가 하나 생성되었음을 확인할 수 있습니다. 
 폴더의 내용물은 code.zip 및 random.csv 파일 두 개입니다. 
 지금까지 데이터를 저장하고 다시 가져오는 내용의 S3 시작하기를 간략하게 살펴보았습니다. 
 다시 관리 콘솔로 돌아가서 정리해 보겠습니다. 
 이제 제 S3 버킷에 몇몇 파일이 있다는 것을 확인할 수 있습니다. 
 이 파일들은 관리 콘솔 및 AWS CLI에서 봤던 것과 동일한 파일입니다. 
 계속하여 hello.txt를 클릭하면 몇 가지 옵션이 나타납니다. 
 여기에서 객체 기준으로 속성과 권한을 변경할 수 있습니다. 
 이 파일의 속성 중 일부도 확인할 수 있습니다. 
 이제 정말로 S3 서비스 시작하기를 전부 다루어 본 것 같습니다. 
 이 동영상에서 S3 소개와 몇 가지 일반 사용 사례를 살펴봤습니다. 
 그리고 시연을 통해 버킷을 생성하고, 파일을 생성된 버킷에 복사한 다음, EC2 인스턴스로부터 이들 파일을 다운로드해 봤습니다. 
 시청해 주셔서 감사합니다. 
 
 - Amazon Glacier
 안녕하십니까? 저는 Adam Becker입니다. 
 AWS에 몸담은 지 3개월째이며 기술 교육을 담당하고 있습니다. 
 팀의 일원으로 다수의 교육 세션에 기여했으며 강의도 많이 했습니다. 
 이 동영상에는 Amazon의 관리형 서비스인 Amazon Glacier를 다루어 보고, 사용 사례를 설명하고, 시연과 서비스 소개를 할 예정입니다. 
 그럼 동영상으로 들어가서 Amazon Glacier에 대해 배워 보겠습니다. 
 Amazon Glacier는 AWS에서 제공하는 스토리지 서비스 범주에 속합니다. 
 Amazon Glacier는 AWS의 데이터 보관 솔루션입니다. 
 목표는 최대한 비용 효율적이고 효과적으로 설계할 수 있도록 돕는 것이며, AWS가 그렇게 다양한 스토리지 서비스 솔루션을 제공하는 이유도 그 때문입니다. 
 Amazon Glacier는 AWS의 저비용 데이터 보관 솔루션입니다. 
 자주 액세스되지는 않지만 업무상 혹은 법적 이유로 반드시 보존해야 하는 콜드 데이터 보관용으로 설계되었습니다. 
 Amazon S3와 달리 Amazon Glacier는 빈번하게 액세스되는 데이터 저장용으로 설계되지 않았습니다. 
 대신 데이터를 저비용으로 장기간 보관할 수 있도록 설계되었습니다. 
 그렇기 때문에 가끔씩 액세스되는 데이터를 보관하는 데 적합합니다. 
 Glacier는 데이터를 복수의 시설에, 각 시설에서도 여러 디바이스에 다중 저장하기 때문에 평균적으로 연간 99.999999999%의 내구성을 발휘합니다. 
 이에 더해 아카이브를 저장하는 저장소에 대한 액세스 정책을 적용함으로써 Glacier에 저장되어 있는 데이터에 대한 액세스를 제어할 수 있습니다. 
 Amazon Glacier에는 세 가지 핵심 용어가 사용되는데, 알아 두는 편이 좋을 것입니다. 
 아카이브(archive)는 사진, 동영상 파일, 문서 등과 같이 Glacier에 저장하는 임의의 객체입니다. 
 아카이브는 Glacier에 있는 기본 스토리지 단위입니다. 
 각 아카이브는 자체 고유 ID와 선택하는 경우 설명도 부여할 수 있습니다. 
 저장소(vault)는 아카이브를 저장하는 컨테이너입니다. 
 저장소를 생성할 때는 저장소 이름과 저장소를 생성하고자 하는 AWS 리전을 지정합니다. 
 저장소 액세스 정책에서 저장소에 액세스할 수 있는 자와 할 수 없는 자, 사용자가 수행할 수 있는 작업과 수행할 수 없는 작업을 정합니다. 
 각 저장소에 대해 개별 저장소 액세스 정책을 수립하여 해당 저장소에 대한 액세스 권한을 관리할 수 있습니다. 
 또한 저장소 잠금 정책을 사용하여 저장소 변경을 방지할 수도 있습니다. 
 각 저장소마다 개별 액세스 정책과 그에 수반되는 저장소 잠금 정책을 가질 수 있습니다. 
 그렇다면 어떻게 Glacier에 데이터를 저장하고 액세스할 수 있겠습니까? AWS Management Console 내에서 Glacier에 액세스할 수 있는 반면, 저장소 생성 및 삭제나 아카이브 정책 생성 및 관리와 같은 몇 가지 작업만 이러한 방식이 가능합니다. 
 다른 작업의 경우 거의 모두 다른 솔루션이 필요합니다. 
 Glacier의 Java 또는 .NET용 REST API나 AWS SDK를 사용하여 AWS Command Line Interface(AWS CLI), 웹 또는 애플리케이션을 통해 Amazon Glacier와 상호 작용할 수 있습니다. 
 이 방법으로 보관하는 데이터는 Amazon S3를 포함하여 액세스할 수 있는 모든 곳에서 가져올 수 있습니다. 
 또한 수명 주기 정책을 사용하여 Amazon S3에서 Glacier로 데이터를 자동으로 보관할 수 있습니다. 
 이들 정책은 S3에서의 데이터 저장 기간, 데이터 저장 시 특정한 데이터 범위(예: 분기별 데이터 보관)과 같이 지정한 규칙을 바탕으로 Glacier에 데이터를 보관하게 됩니다. 
 Amazon S3의 버전 관리 기능을 활용하여 버전을 기준으로 데이터를 보관하는 수명 주기 정책을 설정할 수도 있습니다. 
 시간이 지남에 따라 데이터가 최종적으로 삭제되기 전에 Amazon S3로부터 Glacier로 데이터를 이전하는 수명 주기 정책의 한 예를 설명하겠습니다. 
 사용자가 애플리케이션에 동영상을 업로드하고 애플리케이션이 해당 동영상의 미리 보기 버전을 생성하는 상황을 가정해 보겠습니다. 
 이 동영상 미리 보기는 사용자가 바로 액세스할 가능성이 높기 때문에 Amazon S3 Standard에 저장됩니다. 
 하지만 대부분의 썸네일 미리 보기가 30일 후에는 전혀 액세스되지 않는다고 사용 데이터에 나타나는 경우 수명 주기 정책을 통해 30일 후에 해당 동영상이 S3 Standard Infrequent Access(SIA)로 자동으로 이전되도록 설정할 수 있을 것입니다. 
 따라서 30일이 더 경과한 후에는 미리 보기 파일도 액세스되는 일이 없을 것이므로 Amazon Glacier로 이전됩니다. 
 그러다가 1년에 도달하면 삭제됩니다. 
 극히 드문 예이기는 하지만 미리 보기가 다시 필요해진 경우에는 애플리케이션이 해당 파일이 삭제되었음을 확인하고 새 미리 보기 파일을 생성하게 됩니다. 
 여기에서 알아 두어야 할 중요한 점은 동영상 파일이 Amazon S3에 추가된 다음에는 수명 주기 정책이 이러한 파일 이동을 자동으로 처리하기 때문에 시간과 비용을 절약할 수 있다는 것입니다. 
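위 미리 보기 예와 같은 수명 주기 정책을 boto3로 설정한다면 대략 다음과 같습니다. 버킷 이름과 접두사(previews/)는 설명을 위해 가정한 값입니다.

```python
import boto3

s3 = boto3.client("s3")

# 동영상 미리 보기 예와 같은 수명 주기 규칙:
# 30일 후 Standard-IA, 60일 후 Glacier, 365일 후 삭제
s3.put_bucket_lifecycle_configuration(
    Bucket="my-video-bucket",               # 가정한 버킷 이름
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-previews",
        "Filter": {"Prefix": "previews/"},  # 미리 보기 객체에만 적용(가정한 접두사)
        "Status": "Enabled",
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 60, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": 365},
    }]},
)
```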
 이제 복원에 대해 설명하겠습니다. 
 Glacier에 있는 데이터를 복원하고자 하는 경우에도 Amazon S3의 경우와 다르지 않습니다. 
Glacier의 경우 데이터 검색은 밀리초가 아니라 분 및 시간 단위로 측정됩니다. 
데이터 검색에는 액세스 시간과 비용이 각기 다른 세 가지 옵션이 있는데, 바로 대량, 표준 및 고속 검색입니다. 
 슬라이드에서 확인할 수 있듯이, 대량 검색은 비용이 가장 저렴한 솔루션으로 대개 5~12시간 정도 소요됩니다. 
표준 검색은 대량 검색보다 비용이 비싸지만 고속 검색보다는 저렴하며, 일반적으로 3~5시간 정도 소요됩니다. 
 고속 검색은 셋 중에 비용이 가장 비쌉니다. 
 하지만 고속 검색의 경우 일반적으로 1~5분 이내에 검색이 완료됩니다. 
 이를 패키지 제공 속도를 선택하는 것으로 생각하고, 워크로드에 가장 비용 효율적인 검색 속도를 정하면 됩니다. 
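검색 속도의 선택은 코드에서는 검색 작업을 시작할 때 Tier 파라미터로 표현됩니다. 아래는 boto3 glacier 클라이언트를 사용한다고 가정한 스케치로, 저장소 이름은 데모의 것을 쓰고 아카이브 ID와 SNS 주제 ARN은 자리 표시자입니다.

```python
import boto3

glacier = boto3.client("glacier")

# 아카이브 검색 작업 시작: Tier로 검색 속도(와 비용)를 선택합니다.
job = glacier.initiate_job(
    accountId="-",                      # '-'는 현재 계정을 의미
    vaultName="Glacier",                # 데모에서 만든 저장소 이름
    jobParameters={
        "Type": "archive-retrieval",
        "ArchiveId": "aEx...",          # 가정한(축약된) 아카이브 ID
        "Tier": "Bulk",                 # 'Expedited' | 'Standard' | 'Bulk'
        "SNSTopic": "arn:aws:sns:us-east-1:123456789012:glacier-jobs",  # 가정한 ARN
    },
)
print("작업 ID:", job["jobId"])
# 작업이 완료되면 get_job_output으로 데이터를 내려받을 수 있습니다.
```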
 Amazon S3와 Amazon Glacier 둘 다 데이터를 무제한 저장할 수 있는 객체 스토리지 솔루션이지만, 이 차트에서 알 수 있듯이 둘 간에는 몇 가지 중대한 차이가 존재합니다. 
 어떤 스토리지 솔루션이 필요에 가장 적합한지 결정할 때는 신중을 기하십시오. 
 사실 이 둘은 스토리지 필요에 따라 크게 다른 서비스입니다. 
 Amazon S3는 짧은 지연 시간으로 빈번하게 데이터에 액세스하는 용도로 설계된 반면, Glacier는 자주 액세스하지 않는 데이터를 저비용으로 장기간 보관하는 용도로 설계되어 있습니다. 
 S3의 최대 항목 크기는 5TB입니다. 
 반면에 Glacier는 최대 40TB까지 저장할 수 있습니다. 
 Amazon S3의 경우 데이터 액세스 속도가 빠른 만큼 기가바이트당 저장 비용은 Glacier보다 더 높습니다. 
 또한 S3와 Glacier 둘 다 요청당 과금 체제이지만, S3는 PUT, COPY, POST, LIST 및 GET 요청에 대해 과금하는 반면 Glacier는 업로드 및 검색 요청에 대해서만 과금합니다. 
 유의해야 할 또 다른 점은 Glacier의 경우 자주 액세스하지 않는 데이터를 위해 설계되어 요청 비용이 높고 검색하는 데이터에 대해 더 많은 기가바이트당 요금이 과금되기 때문에 S3에 비해 검색당 요금이 더 높습니다. 
 S3와 Glacier 간의 또 다른 중요한 차이점은 데이터 암호화 방식입니다. 
 두 솔루션 모두 HTTPS를 통해 데이터를 안전하게 저장할 수 있지만, Glacier의 경우 그곳에 있는 모든 데이터 아카이브가 기본적으로 암호화됩니다. 
 그에 반해 S3의 경우 애플리케이션이 서버 측 암호화를 개시해야 합니다. 
 기본적으로 사용자 본인만 자신의 데이터에 액세스할 수 있습니다. 
 그리고 AWS Identity and Access Management(IAM)를 사용하여 Amazon Glacier에 있는 데이터에 대한 액세스를 활성화하고 제어할 수 있습니다. 
 간단히 사용자를 지정하는 AWS IAM 정책을 설정하기만 하면 됩니다. 
 Amazon Glacier는 사용자를 대신하여 주요한 관리 및 보호 기능을 처리하지만, 직접 키를 관리해야 하는 경우에는 Glacier에 업로드하기 전에 데이터를 암호화할 수 있습니다. 
Now, let's do a demonstration. 
During the demo, take a look at what the AWS user interface looks like. 
We'll start the demo in the AWS Management Console. 
Focus on the Storage section. 
You'll see S3, Elastic File System (Amazon EFS), Glacier, and AWS Storage Gateway. 
For this demo, we'll select Glacier. 
That brings up the splash page. 
Let's create a vault. 
Clicking Create Vault starts a wizard. 
The wizard makes it very easy to choose your options and create a vault as quickly as possible. 
In this case I've pre-selected my Region, which is Northern Virginia. 
I've also kept the vault name very simple. 
I named mine Glacier, but you can use anything up to 255 characters. 
Names can contain numbers, letters, and symbols, but no spaces. 
Clicking Next Step lets me choose whether to send event notifications. 
For example, if you move backup files from S3 Infrequent Access (Standard-IA) to Glacier, this feature can notify you when the job completes. 
The same goes for moving backup files into the cloud. 
When the job completes, the vault is closed and a notification is sent. 
I review my information, click Submit, and my vault is created. 
Let's look at a few of the things you can do with a vault. 
Selecting the vault brings up a few tabs, like this. 
The first is Details. 
It gives a quick view of which vault this is, when it was created, and which Region it lives in. 
Under Notifications, you can go back to the Amazon SNS topic, subscribe to it, or configure it so you receive notifications going forward. 
Under Permissions, perhaps the most important tab, you can edit the policy document for your Glacier vault. 
You can also enable Vault Lock. 
You can create and edit policies and view their details here as well. 
From this management console you can create configuration tags, in particular data retrieval settings. 
By setting limits on my environment, I can set and manage my retrieval costs. 
You can choose between the free tier, a maximum retrieval rate, or unlimited retrievals. 
Scribd, based in San Francisco, has been helping millions of users convert documents into web-readable formats and share them across multiple platforms since 2007. 
They use Amazon Glacier to store database snapshots, which they use to restore databases when needed. 
They also store log files in Glacier, since most log files are rarely accessed. 
The savings from using Glacier have allowed them to implement more comprehensive backups than before. 
Let's look at the Biblioteca de Catalunya, the national library in Barcelona, Spain. 
The library uses Glacier to archive older materials such as audio and video files, letting it cost-effectively store materials that are only occasionally needed. 
When someone does need one of those materials, it can still be made available within minutes or hours at low cost. 
They previously used an on-premises data backup solution, and after switching to Amazon Glacier they cut their backup storage costs by about 75%. 
And what about Supercell, the Finland-based game developer? They are the makers of the hit games Clash of Clans, Boom Beach, and Clash Royale. 
You've probably played one of them. 
These games attract tens of millions of players every day, and those players generate more than 10 TB of game event data daily. 
Supercell analyzes this data in real time using Amazon Kinesis, and over time stores the data in Amazon Glacier. 
If they later need more comprehensive, long-term analysis of the event data, they can retrieve it from their Glacier vault. 
I hope this video has taught you a little something about Amazon Glacier. 
I'm Adam Backer with AWS Training and Certification. 
Thanks for watching. 
 
 - Amazon Relational Database Service (RDS)
Welcome to this introduction to Amazon Relational Database Service, also known as Amazon RDS. 
Hi, I'm Andy Cummings with AWS Training and Certification. 
I've been with AWS for about a year and a half, and I currently deliver live training events to customers across North America. 
This video focuses on Amazon RDS. 
To help you understand the key benefits of Amazon RDS, I'll start with a quick introduction to the service, then go deeper with an overview of Amazon RDS and its use cases, and finish by summarizing its main benefits. 
First, let's look at the challenges of running a standalone relational database. 
When you run your own relational database, you are responsible for a long list of administrative tasks: server maintenance, software installation and patching, backups, ensuring high availability, scaling, capacity planning, data security, and OS installation and patching. 
All of these tasks take resources away from other items on your to-do list, and some of them require specialized expertise. 
To address the challenges of running your own relational database, AWS offers a service that sets up, operates, and scales relational databases without any need for ongoing administration. 
Amazon RDS automates the time-consuming administrative tasks you used to handle yourself, while providing cost-efficient, resizable capacity. 
With Amazon RDS, the time you save lets you focus more on your applications' performance, availability, security, and compatibility. 
In other words, you can focus on optimizing your data and your applications. 
Amazon RDS manages operating system installation and patching, database software installation and patching, automatic backups, and high availability. 
AWS also takes care of scaling resources, managing power and servers, and performing maintenance. 
Moving these tasks to the managed Amazon RDS service reduces your operational workload and the costs associated with running your own relational database. 
Now let's take a quick look at the service and some potential use cases. 
The basic building block of Amazon RDS is the database instance. 
A database instance is an isolated database environment that can contain multiple user-created databases, and you can access it with the same tools and applications you use with a standalone database instance. 
The resources of a database instance are determined by its database instance class, and the type of storage is determined by the disk type. 
Database instances and storage differ in performance characteristics and price, so you can tune performance and cost to match your database requirements. 
When you create a database instance, you first specify which database engine to run. 
Amazon RDS currently supports six database engines: MySQL, Amazon Aurora, Microsoft SQL Server, PostgreSQL, MariaDB, and Oracle. 
You can run your instance using the Amazon Virtual Private Cloud, or VPC, service. 
Amazon VPC gives you control over your virtual networking environment. 
You can select your own IP address range, create subnets, and configure routing and access control lists. 
The basic functionality of Amazon RDS is the same whether or not it runs in an Amazon VPC. 
Typically, database instances are isolated in a private subnet and only made directly accessible to designated application instances. 
Subnets in an Amazon VPC are associated with a single Availability Zone, so when you select a subnet, you are also choosing the Availability Zone, or physical location, of your database instance. 
One of Amazon RDS's most powerful features is the ability to configure your database instance for high availability with a Multi-AZ deployment. 
Once configured, Amazon RDS automatically creates a standby copy of your database instance in another Availability Zone within the same Amazon VPC. 
After the database copy is created, transactions are synchronously replicated to the standby copy. 
Running a database instance across multiple Availability Zones improves availability during planned system maintenance and protects your database against database instance failure and Availability Zone disruption. 
If the primary database fails, Amazon RDS automatically brings the standby database instance online as the new primary. 
Because of the synchronous replication, no data is lost. 
Since your applications reference the database by name using the RDS DNS endpoint, you can fail over to the standby copy without changing any of your application code. 
Amazon RDS also supports creating read replicas for MySQL, MariaDB, PostgreSQL, and Amazon Aurora. 
Changes applied to the source database instance are also replicated to the read replica instances. 
You can route read queries from your applications to a read replica to reduce the load on the source database instance. 
Read replicas also let you scale beyond the capacity limits of a single database instance for read-heavy database workloads. 
You can promote a read replica to become a primary database instance, but because replication to read replicas is asynchronous, this requires manual action. 
Read replicas can be created in a different Region than the primary database. 
This feature can help satisfy disaster recovery requirements or reduce latency by directing reads to a read replica closer to your users. 
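
A minimal sketch of both ideas with boto3 (identifiers, instance class, and credentials are placeholders, not values from the video):

```python
import boto3

rds = boto3.client("rds")

# Multi-AZ creates a synchronously replicated standby in another AZ.
rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    Engine="mysql",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me",
    MultiAZ=True,
)

# Read replicas are asynchronously replicated and serve read traffic.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",
    SourceDBInstanceIdentifier="app-db",
)
```
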
Amazon RDS is well suited to web and mobile applications that need a database with high throughput, massive storage scalability, and high availability. 
Since Amazon RDS imposes no licensing constraints, it fits perfectly with the variable usage patterns of web and mobile applications. 
For small and midsize e-commerce businesses, Amazon RDS provides a flexible, secure, low-cost database solution for online sales and retail. 
Mobile and online games need a database platform with high throughput and availability. 
Amazon RDS manages the database infrastructure, so game developers don't have to worry about provisioning, scaling, or monitoring database servers. 
All right. 
Let's wrap up by reviewing some of the benefits of using Amazon RDS. 
Amazon RDS supports even the most demanding database applications. 
You can choose between two SSD-backed storage options: one optimized for high-performance OLTP applications, and the other for cost-effective general-purpose use. 
With Amazon RDS, you can scale your database's compute and storage resources with no downtime, and manage the service using the AWS Management Console, the Amazon RDS command line interface, or simple API calls. 
Amazon RDS runs on the same highly reliable infrastructure used by other Amazon Web Services. 
You can also use database instances together with Amazon VPC for excellent control and security. 
Remember, every AWS service you learn about is another tool for building solutions. 
The more tools you have at your disposal, the more capable you become. 
This has been Andy Cummings with AWS Training and Certification. 
Thanks for watching. 
 
 - Amazon DynamoDB
Welcome to this introduction to Amazon DynamoDB. 
I'm Rudy Valdez, Solutions Architect and Director of Training and Certification at Amazon Web Services (AWS). 
In this video, I'll introduce the Amazon DynamoDB service and cover its features and use cases as a NoSQL data store. 
I'll also demonstrate how to create an Amazon DynamoDB table and new items, and then how to retrieve data using query and scan operations. 
Let's get started. 
Amazon DynamoDB is a fully managed NoSQL database service. 
Amazon manages all of the underlying data infrastructure for the service and redundantly stores data across multiple facilities within a Region as part of its fault-tolerant architecture. 
With DynamoDB, you can create tables and items. 
You can add items to a table. 
The service automatically partitions your data and provisions table storage to meet your workload requirements. 
There is virtually no limit on the number of items you can store in a table. 
For example, some customers have production tables containing billions of items. 
One of the benefits of a NoSQL database is that items in the same table can have different attributes. 
This gives you the flexibility to add attributes as your application evolves. 
You can store newer-format items alongside older-format items in the same table, without having to perform schema migrations. 
As your application becomes more popular and users keep interacting with it, your storage can grow with your application's needs. 
All data in DynamoDB is stored on SSDs, and its simple query language enables consistently low-latency, high-performance queries. 
Besides scaling storage, DynamoDB lets you provision the read and write throughput your table needs. 
As the number of your application's users grows, you can scale your DynamoDB tables with manual provisioning to handle the increased number of read and write requests. 
Alternatively, you can enable Auto Scaling so that DynamoDB monitors the load on your table and automatically increases or decreases the provisioned throughput. 
Because you can scale tables in terms of both storage and provisioned throughput, Amazon DynamoDB is a good fit for structured data from web, mobile, and Internet of Things (IoT) applications. 
For example, you might have a large number of clients continuously generating data and making many requests per second. 
In that case, DynamoDB's throughput scaling keeps performance consistent for your clients. 
DynamoDB is also used for latency-sensitive applications. 
Its predictable query performance, even on large tables, makes it useful in cases where variable latency could significantly impact your user experience or business goals, such as ad tech and gaming. 
Table data is partitioned and indexed by the primary key. 
There are two ways to retrieve data from a DynamoDB table. 
First, a query operation takes full advantage of partitioning and locates items using the primary key. 
The second way is a scan, which locates items in a table by matching conditions on non-key attributes. 
The second method gives you the flexibility to find items by other attributes. 
However, it is less efficient, because DynamoDB examines every item in the table to find the ones matching your criteria. 
To take full advantage of query operations and DynamoDB, think carefully about the key you use to uniquely identify items in your DynamoDB table. 
You can set up a single primary key based on a single attribute with well-distributed data values, such as a GUID or another random identifier. 
For example, if you were modeling a products table, you might use an attribute like the product ID. 
Alternatively, you can specify a composite key, which consists of a partition key and a secondary sort key. 
In this example, if you needed to model a books table, you could uniquely identify table items by the combination of author and title. 
This can be useful if you expect to frequently look up books by author, 
because in that case you can use query operations. 
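
A sketch of that composite-key design with boto3 (the table and attribute names mirror the Books example; the throughput values are arbitrary):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Books table keyed by Author (partition key) + Title (sort key).
dynamodb.create_table(
    TableName="Books",
    AttributeDefinitions=[
        {"AttributeName": "Author", "AttributeType": "S"},
        {"AttributeName": "Title", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "Author", "KeyType": "HASH"},   # partition key
        {"AttributeName": "Title", "KeyType": "RANGE"},   # sort key
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
```
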
Now let's switch gears and demonstrate creating a new DynamoDB table and items, and then retrieving data using query and scan operations. 
I'm in the Amazon DynamoDB section of the AWS Management Console. 
At the top, you can see that the Oregon Region is selected. 
This means any table I create will be deployed in the Oregon Region. 
Let's go ahead and create a new DynamoDB table. 
The first parameter I have to specify is the table name. 
This table will hold information about books, so I'll call it the Books table. 
The next parameter to specify is the partition key. 
As mentioned earlier, DynamoDB will partition and index the data by the partition key. 
I could use something like a book ID here, but in this case I know I'll frequently look up books by author. 
So, to make sure the primary field is indexed for fast lookups, I want to set it to author. 
In reality, though, an individual author may have written more than one of the books in my table, so the author alone won't uniquely identify the items I need to store. 
I'll use a composite key by adding a sort key. 
Now the combination of author and title uniquely identifies each book in the table. 
The next things to decide are whether to use Auto Scaling or provision throughput manually, and whether I need to define secondary indexes on the table. 
For this demo, I'll use the default settings. 
With the default settings, DynamoDB automatically monitors the table and sets read and write throughput accordingly. 
Here is the new table. 
The table name is Books, the partition key is Author, and the sort key is Title. 
Let's take a quick look at the top bar. 
I can view the items in the table, check my metrics, create indexes, and perform other operations on the table. 
Let's go ahead and look at the items in the table. 
Since this is a new table there's no data yet, so I'll add an item. 
When I click Create Item, you can see that DynamoDB has automatically filled in the template the system needs, based on the primary key I defined earlier. 
For this demo, I can use Author and Title. 
I'll now go ahead and add the book The Time Machine by H.G. Wells. 
The next thing I can do is add additional attributes to this item. 
The flexibility of having items with different attributes in a table is really useful. 
The flexible schema lets developers use a table differently as application requirements change. 
Now I'll add a string set for Editions, so I can track the different editions of this book, such as the audio or Kindle versions. 
Finally, I'll change the display format in this wizard from tree to text. 
I recognize it as a JSON-style declaration of the item in the table. 
You can add as many attributes as you want to this JSON definition or tree, as long as the total size stays within 400 KB, the maximum size of a DynamoDB item. 
Clicking Save commits the item to the data store, and you can see the new item for the author H.G. Wells has been added. 
The title is The Time Machine. 
There are also multiple editions for this item. 
Let me pause for a moment and load some more data into the table. 
Done. 
If I go ahead and refresh the table, you can see more items. 
There are now Author, Title, Rating, and Editions entries. 
Note that not all items have the same set of attributes. 
This illustrates DynamoDB's flexibility: different items can have different attributes. 
However, every one of these items must have an Author and a Title, because, as explained earlier, they make up the composite key. 
All right. 
To quickly find a book in the table, I can use a query operation. 
When performing a query, you must specify a value for the partition key. 
In this case, I'll look for books written by H.G. Wells. 
Optionally, you can set filter criteria on the key and choose whether to sort the data in ascending or descending order by the sort key value. 
Clicking Start search returns a number of results for H.G. Wells. 
Here we see four books. 
This query operation takes full advantage of the fact that I know the partition key of the data I'm looking for. 
The data is matched very quickly. 
On the other hand, what if I don't know the author of the book I'm looking for? What if I want to find all the audiobooks in the table, or filter on attributes other than the key? In cases like that, I can use a scan operation. 
For example, I can go ahead and find the editions of books whose Editions attribute contains "audible". 
I need to find all the audiobooks in the dataset. 
Clicking Start search, you can see that several books are returned. 
Or I can add another filter so that only items with a rating greater than 3 are displayed. 
This returns four books that have a rating above 3 and an audio edition. 
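
The same two retrieval patterns, sketched with boto3 against the hypothetical Books table above:

```python
import boto3
from boto3.dynamodb.conditions import Attr, Key

table = boto3.resource("dynamodb").Table("Books")

# Query: fast, because it uses the partition key directly.
by_author = table.query(KeyConditionExpression=Key("Author").eq("H.G. Wells"))

# Scan: flexible but examines every item; filters on non-key attributes.
audiobooks = table.scan(
    FilterExpression=Attr("Editions").contains("audible") & Attr("Rating").gt(3)
)

print(len(by_author["Items"]), len(audiobooks["Items"]))
```
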
We've now covered the basics of creating a table, loading data, and retrieving data using both query and scan operations. 
To summarize, Amazon DynamoDB is a managed NoSQL database service that scales to store large amounts of data, supports high request volumes, and can serve as a data store for applications that demand low-latency query performance. 
This has been Rudy Valdez, AWS Solutions Architect and Training and Certification. 
Thanks for watching. 
 
 - Amazon Redshift
Hello, and welcome to this Amazon Web Services introduction to Amazon Redshift. 
My name is Mark Fei. 
I've been with AWS for over four years, and I'm currently a senior technical instructor with AWS Training and Certification. 
In AWS training courses, I cover a broad range of topics for general audiences and software developers, including DevOps, security, networking, big data, data warehousing, analytics, artificial intelligence, and machine learning. 
This course introduces Amazon Redshift, gives a quick overview of what it is and what it does, and covers common Redshift use cases. 
We'll finish with a short demo so you can see the service in action. 
Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing business intelligence tools. 
Redshift lets you run complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution. 
Most results come back in seconds. 
Now let's look at Amazon Redshift's key features and some common use cases in more detail. 
Redshift delivers very fast query performance, even on petabyte-scale datasets, by combining a massively parallel processing architecture with columnar storage and automatic compression. 
As with almost every AWS service, you pay only for what you use. 
You can use Redshift's storage and processing elastically, starting at 25 cents per hour and scaling at a cost of about $1,000 (USD) per terabyte per year. 
That is, again, as little as one tenth the cost of traditional data warehouse solutions. 
With Redshift Spectrum, you can run queries directly against exabytes of data in Amazon S3. 
Most administrative tasks, such as managing, monitoring, and scaling your Redshift cluster, are largely automated, so you can focus on your data and your business. 
Scalability is built into Redshift, so you can scale your cluster up or down with just a few clicks in the console as your needs change. 
As always with Amazon Web Services, security is the highest priority. 
Redshift has strong built-in capabilities to encrypt your data at rest and in transit. 
Finally, Amazon Redshift is compatible with the tools you already know and use: it supports standard SQL and provides high-performance JDBC and ODBC connectors, so you can use the SQL client and business intelligence tools of your choice. 
Now let's turn to common use cases. 
Many customers migrate from traditional enterprise data warehouses to Amazon Redshift in pursuit of agility. 
They can start at whatever scale they want and experiment with their existing data without needing their IT department to procure and provision hardware and software. 
Big data customers have one thing in common: the massive volumes of data scattered across their existing systems are pushing those systems to their limits. 
Smaller customers typically don't have the funds to acquire the hardware and specialized staff needed to run these systems. 
With Amazon Redshift, they can stand up and run a data warehouse at comparatively low cost. 
As a managed service, Amazon Redshift takes care of much of the deployment and ongoing administration that would otherwise require database administrators, freeing IT departments to focus on queries and analytics. 
Software-as-a-Service (SaaS) customers are drawn to the scalable, easy-to-manage platform Amazon Redshift provides. 
Some use it as a platform to provide analytics capabilities inside their applications. 
Some deploy one cluster per customer and use tagging to simplify service-level agreement (SLA) and billing management. 
Now let's do a quick demo so you can see how easy it is to get started, load data, and run queries. 
Let's begin. 
For this Amazon Redshift demo, I've already logged in to the AWS Management Console and navigated to the Redshift dashboard. 
To show you how easy it is to launch a Redshift cluster, I want to walk through two screens. 
Here, I need to provide a cluster name. 
I can accept the defaults for the database name and port. 
I need to set a master database user and an appropriate password. 
Then I can specify the size and type of the cluster. 
There are a variety of node types to choose from. 
I could choose a single-node cluster, which is fine for simple development and experimentation. 
For production purposes, you'll want a multi-node cluster with at least two nodes for data replication. 
That's what you see here. 
Then I make a few networking choices. 
After selecting the networking and security groups, I finally choose a service role so the Redshift cluster has the appropriate access permissions. 
In this case, that's access to Amazon S3. 
Clicking the Continue button brings up a screen summarizing our choices. 
When we're ready to launch, we click the Launch Cluster button. 
A Redshift cluster generally takes a few minutes to launch. 
Small clusters take around five to six minutes; larger clusters can take ten to fifteen. 
I launched a cluster ahead of time so we can jump right into the demo. 
As you can see, my first cluster is up and running. 
Clicking the cluster in the dashboard displays a wealth of information. 
The first piece of information I need is right here. 
It's the part I've circled. 
This is my JDBC URL. 
I'll select it and copy it to my clipboard. 
I'll need it shortly when I configure my SQL client and connect. 
I've now started SQL Workbench/J. 
Of course, any SQL client would do just as well. 
This just happens to be the one installed on my laptop. 
I've already entered the URL I copied from the console screen into the connection window here. 
This connects me to the database. 
Now let's run some SQL commands. 
I'll use a set of commands taken from the self-guided demo on the website. 
Running these commands creates a set of table definitions for a dataset containing information about ticket sales for various shows and events. 
These are the CREATE TABLE statements. 
Continuing on and running these statements, you can see that all seven CREATE TABLE statements execute successfully. 
The next step is to run a set of COPY commands that copy publicly available data from a dataset residing in Amazon S3 into our tables. 
I just have to substitute my Redshift role as the credentials. 
Once that's done, I can run these COPY commands. 
In Redshift, the COPY command is effectively a load command. 
That's why COPY commands execute quite quickly. 
You can see that all the data loads in under 15 seconds. 
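
A sketch of the same kind of load from a SQL client connection; the endpoint, credentials, S3 path, and IAM role ARN below are placeholders, not the values from the demo:

```python
import psycopg2

# Redshift speaks the PostgreSQL wire protocol, so psycopg2 can connect.
conn = psycopg2.connect(
    host="my-cluster.abc123xyz.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="awsuser",
    password="change-me",
)
with conn, conn.cursor() as cur:
    # COPY is effectively Redshift's bulk-load command: it pulls files
    # from S3 in parallel across the cluster's nodes.
    cur.execute("""
        COPY sales
        FROM 's3://my-sample-bucket/tickit/sales_tab.txt'
        IAM_ROLE 'arn:aws:iam::123456789012:role/myRedshiftRole'
        DELIMITER '\\t' TIMEFORMAT 'MM/DD/YYYY HH:MI:SS';
    """)
```
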
The final step of the demo is to actually run queries against the dataset. 
Let's look at the set of queries. 
The first query is against a system table called PG_TABLE_DEF, and it displays the table definition. 
Next, we'll retrieve total sales on a given date. 
That date is January 5, 2008. 
After finding the top ten buyers by quantity, we'll find events in the 99.9th percentile of all-time gross sales. 
Let's go ahead and run those queries. 
Now we can see the results down here. 
Result 1 is what came back from the first query, the definition of the sales table. 
Result 2, 210, is the total of all tickets sold on that particular date. 
Result 3 shows the top ten buyers by quantity, and Result 4 shows the events in the 99.9th percentile of all-time gross sales. 
Naturally, that includes popular shows such as Phantom of the Opera. 
To summarize, Amazon Redshift is a fast, fully managed data warehouse service. 
I hope you've learned a little something and that you'll continue on with the other courses. 
This has been Mark Fei with AWS Training and Certification. 
Thanks for watching. 
 
 - Amazon Aurora
Hello, I'm Kirsten Dupart with AWS Training and Certification. 
Welcome to this introduction to Amazon Aurora. 
I've been with Amazon for about a year and a half, and I currently develop curriculum for the Training and Certification team. 
We'll start with a brief look at Amazon Aurora, some of its features, key benefits, and concepts. 
Then, through a short demo, I'll show how to set up an Amazon Aurora database in the AWS console. 
We'll wrap up with a use case from a well-known company and a summary of the benefits of using Amazon Aurora. 
Amazon Aurora is a MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. 
Let's take a quick look at Amazon Aurora and then dive deeper into some of the service's core concepts and features. 
First, some of Amazon Aurora's advantages. 
It's fast. 
It's highly available. 
It delivers up to five times the performance of MySQL, and you can achieve high availability with just a few clicks. 
Amazon Aurora is simple to set up and uses SQL queries you're probably already familiar with. 
Because it uses the InnoDB storage engine, it is also drop-in compatible with MySQL 5.6. 
Amazon Aurora is a pay-as-you-go service, so you only pay for the services and features you actually use. 
Finally, Amazon Aurora is a managed service. 
It integrates with features such as AWS Database Migration Service and the AWS Schema Conversion Tool, which help you move your dataset into Amazon Aurora smoothly and with agility. 
Let's take a moment to look at that last benefit in more depth. 
I just said Amazon Aurora is a managed service, but what exactly does that mean, and why does it matter? With a traditional on-premises database, the database administrator is responsible for everything from app and query optimization to hardware configuration, patching, networking settings, power, and HVAC. 
If you move to a database running on an Amazon EC2 instance, you no longer have to manage the underlying hardware or worry about data center operations. 
You would, of course, still be responsible for operating system patching and general software and backup operations. 
When you build your database on Amazon Relational Database Service (Amazon RDS) or Amazon Aurora, that heavy lifting is taken off your plate. 
Moving to the cloud means database scaling, high availability, backup management, and patching are handled for you, so you can focus on what really matters: optimizing your application. 
So why use Amazon Aurora instead of MySQL on Amazon RDS? That decision usually comes down to the high availability and resilient design that Aurora provides. 
Amazon Aurora is highly available: it stores six copies of your data across three Availability Zones and continuously backs up your data to Amazon S3. 
You can use up to 15 read replicas, which mitigates the risk of data loss. 
In addition, Amazon Aurora is designed for instant crash recovery if the health of your primary database becomes compromised. 
Unlike other databases, after a crash Amazon Aurora does not need to replay the redo log from the most recent database checkpoint. 
Instead, it performs this on every read operation. 
This reduces the database restart time after a crash to typically 60 seconds or less. 
Amazon Aurora also moves the buffer cache out of the database process, making it available immediately at restart. 
This avoids interruptions, since you don't have to throttle access until the cache is repopulated. 
Now let's switch gears for a moment and look at Amazon Aurora in action. 
What you're looking at is the Amazon RDS console. 
Let's launch an Aurora instance. 
I go to Instances and launch a new database instance. 
With RDS, you always choose a database engine. 
Here, we'll choose Amazon Aurora. 
If you want MySQL or PostgreSQL compatibility, you can select that here. 
On the next page, I need to choose the size of the database instance and how much CPU and RAM it needs. 
For now, I'll use the default settings. 
I could also deploy with a replica in a different Availability Zone. 
Next come the basic database settings. 
I set a database identifier. 
I'll call it "prod". 
Then I specify a master username and password. 
I'll keep it simple. 
On the next page, I choose where the database will live, namely the VPC and the subnets within that VPC. 
Because I can select the Multi-AZ option, multiple subnets are possible. 
Here I confirm whether it needs to be publicly accessible. 
Probably not. 
If you know which Availability Zone it should be in, you can specify one, or leave it unset. 
Then I can create a new security group (SG) or apply an existing one to the database. 
This restricts the range of ports available for connecting to the database. 
Since I don't have a security group yet, I'll create a new one. 
If I want, I can specify a cluster identifier and the name of the database on my database server. 
I'll name the database "Customers" and configure a few basic database-level settings, 
for example, the port to connect on and the parameters to apply to the database configuration. 
If I want this database to accept only SSL connections, I can set that here. 
I can encrypt the database using a key drawn from the key management service. 
I can also set defaults for failover, monitoring options, and so on. 
Amazon RDS backs up your database automatically. 
You can choose how long to retain backups. 
The backup retention period ranges from 1 to 35 days. 
I'll choose one week. 
Finally, I choose whether RDS should automatically apply new minor database upgrades. 
If I want them applied automatically, I can specify when. 
I'll choose Sunday at 2 a.m. 
It's that simple. 
In a moment, I'll click the button to launch the database instance, and the new database will be created and start running. 
Amazon RDS will display the connection string, and I can connect from anywhere, just like any other database. 
Before wrapping up this video, I'd like to briefly share a use case describing how one well-known company is using Amazon Aurora. 
Expedia used to struggle with issues related to its traditional databases. 
They ran a large, expensive system with hundreds of nodes that was prohibitively costly and didn't scale. 
By moving to Aurora, Expedia was able to scale its database without sacrificing performance. 
The economical operating cost was, of course, another big advantage. 
On average, Expedia runs about 25,000 inserts per second, with peaks of up to 70,000. 
While executing that many inserts, Expedia saw average response times of 30 milliseconds for writes and 17 milliseconds for reads, all while processing a month's worth of data. 
To summarize, Aurora is a highly available, easy-to-set-up, high-performance, cost-effective managed relational database. 
I hope you've learned a little today and that you'll continue with this course and other AWS service courses. 
This has been Kirsten Dupart with AWS Training and Certification. 
Thanks for watching. 
 
 - AWS Trusted Advisor
Hello. 
We'll begin this session with a short introductory video about AWS Trusted Advisor. 
I'm Tipu Qureshi, and I've been with Amazon Web Services for about six years. 
As part of the AWS Support team, I work with customers on ways to improve the customer experience. 
Also appearing in this video is Alex Buell, another member of the AWS Support team. 
He works as a software development engineer on the Trusted Advisor team. 
In this video, we'll look at Trusted Advisor and review a related case study to provide context. 
Then we'll dig a little deeper into how the service works and finish with a quick demo from Alex. 
We'll also look specifically at how you can use AWS Trusted Advisor to improve security, fault tolerance, and performance, and to reduce cost. 
When you start your AWS journey, you can keep track of your resources, but your needs grow quickly. 
As your needs grow, your AWS account can end up with too many resources to track. 
You may have orphaned resources, resources that are not cost-optimized and are simply wasting money (for example, an EIP not attached to an instance), or unused volumes or snapshots that just burn money. 
You may also have resources that aren't optimized for fault tolerance, performance, or security. 
All of this matters, but the complexity is hard to track. 
So these resources keep growing, and as they grow, you need something to keep track of them. 
That's where Trusted Advisor comes in. 
Trusted Advisor is a tool that codifies best practices and inspects all the resources in your account to verify that each one conforms to those best practices. 
Trusted Advisor does this across four categories: security, fault tolerance, performance, and cost optimization. 
This is the dashboard in the Trusted Advisor console. 
(Alex will show a demo later.) It shows at a glance how much money you could be saving right now if things were set up the right way. 
Each check also carries one of three statuses: red means immediate action is recommended, yellow means investigation is recommended, and green means you're using AWS well and everything is in order. 
To date, Trusted Advisor has delivered more than 500 million dollars in cost savings to customers and has made more than 15 million recommendations. 
Now that we've covered Trusted Advisor in a bit more detail, let's look at one concrete example, a quick customer case study, showing how the service has been used in the past. 
Hungama is one AWS customer that achieved real savings of more than 23% of its monthly costs. 
Hungama used several checks, in particular the Underutilized Amazon EC2 Instances check, which revealed that some of Hungama's development teams had over-provisioned their instance sizes. 
They needed to right-size their instances and eliminate the waste from unused instances. 
In addition to the Underutilized Amazon EC2 Instances check, they used the Amazon EC2 Reserved Instances and Underutilized Amazon EBS Volumes checks to confirm they were using resources optimally and saving money. 
So how does Trusted Advisor actually work? Trusted Advisor compares your account's resources against established best practices and delivers the data in the form of checks. 
Trusted Advisor presents these best practices not only in console form but also through an API. 
You can also receive notifications for specific checks so you can take action when a check fails. 
And you can bring in automation: Trusted Advisor integrates with Amazon CloudWatch Events, which can invoke services such as AWS Lambda, so you can take automated action and automate resource optimization. 
Now let's move on to the demo with Alex. 
Thank you. 
My name is Alex. 
I've been with the AWS Trusted Advisor team for three years. 
What follows is an overview and walkthrough of the AWS console experience for the Trusted Advisor product. 
The default landing page is the overall dashboard. 
As Tipu mentioned, you can see a breakdown of the different types of checks, including security, cost optimization, and performance. 
There's also a section highlighting recent changes in overall check status. 
And we've added announcements to make new checks and changes more visible as they're released. 
Now let's look at a specific check in detail. 
The Service Limits check is very useful to many customers because it lets them compare their usage of various AWS services against their actual service limits. 
It's broken down by Region, so as customers approach a service limit, they can proactively request an increase. 
This is very useful to AWS customers because it alerts them as they approach the limit for a particular service. 
Using this check can prevent potential service interruptions caused by upcoming launches for your product and customers. 
Another common category is security. 
It provides a range of alerts, from IAM-related issues (such as old IAM keys that haven't been changed or rotated recently) to current security issues, such as unintended access to resources. 
For example, in recent cases involving AWS customers, people could access Amazon EBS, Amazon S3, or Amazon RDS instances owned by you and your company. 
Those checks are among the recently added examples, and refreshes run automatically on a periodic basis. 
That way you receive more proactive notifications without having to manually refresh to get new updates. 
The fault tolerance category contains a variety of checks related to potential support cases where you can help yourself by taking action directly, without needing assistance from a service team. 
Examples include recent AWS Direct Connect best practices for avoiding service interruptions, as well as other warnings and alerts for managing and configuring resources in question (such as EC2 Windows with EC2Config) where a new version is recommended for your security or performance needs. 
There's a preferences page where you can set up email contacts to receive a weekly notification summarizing your account status across all checks. 
There are also features for downloading reports for all checks or for a specific check. 
The reports are files in CSV or Excel format. 
You can choose locally how to store the data and what to do with it yourself. 
Finally, a key callout is the refresh-based functionality. 
Most checks have a refresh indicator that pulls new data so you can get an update on the current state of all your resources. 
The timeline button right next to it shows roughly how long it has been since a refresh ran for the whole check. 
It's an indicator of how current this data is. 
Clicking the refresh button kicks off Trusted Advisor, which pulls all the relevant data for all the resources in your account and serves it back. 
When it's ready, the refresh status updates. 
CloudWatch Events rules were added as a new feature. 
This is an example of a rule set up to automatically listen whenever Trusted Advisor actually processes a refresh. 
In that case, you can set up a Lambda function or some other activity to notify you or take action on specific checks and specific overall status changes. 
You can even set up specific rules for specific resources and relate them to your own organization. 
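
A sketch of reading Trusted Advisor results programmatically via the AWS Support API; note this requires a Business or Enterprise support plan, and the Support API is served from us-east-1:

```python
import boto3

support = boto3.client("support", region_name="us-east-1")

# List every check, then fetch each check's current status.
checks = support.describe_trusted_advisor_checks(language="en")["checks"]
for check in checks:
    result = support.describe_trusted_advisor_check_result(checkId=check["id"])
    status = result["result"]["status"]  # e.g. 'ok', 'warning', 'error'
    print(f"{check['category']:20} {check['name']:50} {status}")
```
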
Finally, there's a tagging filter feature where you can enter specific tags associated with your resources, and all relevant check results are filtered by the presence or absence of those tags. 
That's a quick overview of the AWS console Trusted Advisor experience, along with some of its features and capabilities. 
Now I'll hand the mic back to Tipu. 
Thanks, Alex. 
To summarize, Trusted Advisor can help you optimize cost, improve performance, increase fault tolerance, and implement security. 
I hope you learned a little something in this session; please continue on to the other videos. 
This has been Tipu Qureshi and Alex Buell with the AWS Support team. 
Thanks for watching. 

 


Developer Associate



Recommended Path to Prepare for the AWS Certified Developer - Associate Exam


English - https://aws.amazon.com/certification/certification-prep/?nc1=h_ls 

Korean - https://aws.amazon.com/ko/certification/certification-prep/?nc1=h_ls



AWS Certification Frequently Asked Questions (FAQs)


Korean : https://aws.amazon.com/ko/certification/faqs/




AWS Certified Developer - Associate Level Exam Blueprint 

 

http://awstrainingandcertification.s3.amazonaws.com/production/AWS_certified_developer_associate_blueprint.pdf


Study tip: focus on the following whitepapers.

Architecting for the Cloud: AWS Best Practices || AWS Security Best Practices || Amazon Web Services: Overview of Security Processes || AWS Well-Architected Framework || Development and Test on AWS || Backup and Recovery Approaches Using AWS || Amazon Virtual Private Cloud Connectivity Options || How AWS Pricing Works

View all whitepapers


Study: focus on the following FAQs.

Amazon EC2 || Amazon S3 || Amazon VPC || Amazon Route 53 || Amazon RDS || Amazon SQS

View all FAQs



http://free-braindumps.com/amazon/free-aws-certified-developer-associate-braindumps.html?p=2


Register free membership : http://free-braindumps.com/login.html?ReturnURL=/amazon/free-aws-certified-developer-associate-braindumps.html 


QUESTION: 
A user is running a MySQL RDS instance. The user will not use the DB for the next 3 months. 
How can the user save costs? 

A. Pause the RDS activities from CLI until it is required in the future 
B. Stop the RDS instance 
C. Create a snapshot of RDS to launch in the future and terminate the instance now 
D. Change the instance size to micro 

Answer(s): C 
Explanation: 
RDS instances, unlike EBS-backed EC2 instances, cannot be stopped or paused. The 
user needs to take a final snapshot, terminate the instance now, and launch a new instance in the 
future from that snapshot. 
Reference: 
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.BackingUpAndRestoringAmazonRDSInstances.html 


QUESTION: 
In DynamoDB, if you create a table and request 10 units of write capacity and 200 units of read 
capacity of provisioned throughput, how much would you be charged in US East (Northern 
Virginia) Region? 

A. $0.05 per hour 
B. $0.10 per hour 
C. $0.03 per hour 
D. $0.15 per hour 

Answer(s): A 
Explanation: 
To understand pricing in DynamoDB, consider the following example. If you create a table and 
request 10 units of write capacity and 200 units of read capacity of provisioned throughput, you 
would be charged: 
$0.01 + (4 x $0.01) = $0.05 per hour 
Reference: 
http://aws.amazon.com/dynamodb/pricing/ 


QUESTION: 
You have been doing a lot of testing of your VPC network by deliberately failing EC2 instances 
to test whether instances are failing over properly. Your customer, who will be paying the AWS 
bill for all this, asks you if he is being charged for all these instances. You try to explain to him how 
billing works on EC2 instances to the best of your knowledge. What would be an appropriate 
response to give to the customer in regards to this? 

A. Billing commences when the Amazon EC2 AMI instance is completely up, and billing ends as 
soon as the instance starts to shut down. 
B. Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance, and 
billing ends when the instance shuts down. 
C. Billing only commences after 1 hour of uptime, and billing ends when the instance terminates. 
D. Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance, and 
billing ends as soon as the instance starts to shut down. 

Answer(s): B 
Explanation: 
Billing commences when Amazon EC2 initiates the boot sequence of an AMI instance. Billing 
ends when the instance shuts down, which could occur through a web services command, by 
running "shutdown -h", or through instance failure. 
Reference: 
http://aws.amazon.com/ec2/faqs/#Billing 


QUESTION: 
AWS Elastic Load Balancer supports SSL termination. 

A. True. For specific availability zones only. 
B. False 
C. True. For specific regions only 
D. True. For all regions 

Answer(s): D 
Explanation: 
You can configure your load balancer in ELB (Elastic Load Balancing) to use an SSL certificate in 
order to improve your system security. The load balancer uses the certificate to terminate and 
then decrypt requests before sending them to the back-end instances. Elastic Load Balancing 
uses AWS Identity and Access Management (IAM) to upload your certificate to your load 
balancer. 
Reference: 
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/US_SettingUpLoadBalancerHTTPS.html 


QUESTION: 
A user has launched five instances with ELB. How can the user add the sixth EC2 instance to 
ELB? 

A. The user can add the sixth instance on the fly. 
B. The user must stop the ELB and add the sixth instance. 
C. The user can add the instance and change the ELB config file. 
D. The ELB can only have a maximum of five instances. 

Answer(s): A 
Explanation: 
Elastic Load Balancing automatically distributes incoming traffic across multiple EC2 instances. 
You create a load balancer and register instances with the load balancer in one or more 
Availability Zones. The load balancer serves as a single point of contact for clients. This enables 
you to increase the availability of your application. You can add and remove EC2 instances from 
your load balancer as your needs change, without disrupting the overall flow of information. 
Reference: 
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/SvcIntro.html 

 

QUESTION: 
An organization has 500 employees. The organization wants to set up AWS access for each 
department. Which of the below mentioned options is a possible solution? 

A. Create IAM roles based on the permission and assign users to each role 
B. Create IAM users and provide individual permission to each 
C. Create IAM groups based on the permission and assign IAM users to the groups 
D. It is not possible to manage more than 100 IAM users with AWS  

Answer(s): C 
Explanation: 
An IAM group is a collection of IAM users. Groups let the user specify permissions for a 
collection of users, which can make it easier to manage the permissions for those users. 
Reference: 
http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_WorkingWithGroupsAndUsers.html 


QUESTION: 
How long can you keep your Amazon SQS messages in Amazon SQS queues? 

A. From 120 secs up to 4 weeks 
B. From 10 secs up to 7 days 
C. From 60 secs up to 2 weeks 
D. From 30 secs up to 1 week 

Answer(s): C 
Explanation: 
The SQS message retention period is configurable and can be set anywhere from 1 minute to 2 
weeks. The default is 4 days, and once the message retention limit is reached your messages 
will be automatically deleted. The option for longer message retention provides greater flexibility 
to allow for longer intervals between message production and consumption. 
Reference: 
https://aws.amazon.com/sqs/faqs/ 
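
As a quick illustration of this setting, the retention period can be adjusted with boto3; the queue name below is a placeholder, and the value is given in seconds (60 up to 1,209,600):

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="myqueue")["QueueUrl"]

# 1,209,600 seconds = 14 days, the maximum retention period.
sqs.set_queue_attributes(
    QueueUrl=queue_url,
    Attributes={"MessageRetentionPeriod": "1209600"},
)
```
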


QUESTION: 
In regard to DynamoDB, which of the following statements is correct? 

A. An Item should have at least two value sets, a primary key and another attribute. 
B. An Item can have more than one attribute. 
C. A primary key should be single-valued. 
D. An attribute can have one or several other attributes. 

Answer(s): B 
Explanation: 
In Amazon DynamoDB, a database is a collection of tables. A table is a collection of items and 
each item is a collection of attributes. 
Reference: 
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DataModel.html 

 

QUESTION: 
Which one of the following statements is NOT an advantage of DynamoDB being built on Solid 
State Drives: 

A. serve high-scale request workloads 
B. low request pricing 
C. high I/O performance of WebApp on EC2 instance 
D. low-latency response times 

Answer(s): C 
Explanation: 
In DynamoDB, SSDs help achieve the design goals of predictable low-latency response times for 
storing and accessing data at any scale. The high I/O performance of SSDs also enables 
DynamoDB to serve high-scale request workloads cost-efficiently, and to pass this efficiency along 
in low request pricing. 
Reference: 
http://aws.amazon.com/dynamodb/faqs/ 


QUESTION: 10 
An organization has hosted an application on the EC2 instances. There will be multiple users 
connecting to the instance for setup and configuration of application. The organization is 
planning to implement certain security best practices. Which of the below mentioned pointers 
will not help the organization achieve a better security arrangement? 

A. Apply the latest patch of OS and always keep it updated. 
B. Allow only IAM users to connect with the EC2 instances with their own secret access key. 
C. Disable the password-based login for all the users. All the users should use their own keys to 
connect with the instance securely. 
D. Create a procedure to revoke the access rights of the individual user when they are not 
required to connect to the EC2 instance anymore for the purpose of application configuration. 

Answer(s): B 
Explanation: 
Since AWS is a public cloud, any application hosted on EC2 is prone to hacker attacks. It 
becomes extremely important for a user to set up a proper security mechanism on the EC2 
instances. A few of the security measures are listed below: 
- Always keep the OS updated with the latest patch. 
- Always create separate users within the OS if they need to connect with the EC2 instances; create 
their keys and disable their passwords. 
- Create a procedure by which the admin can revoke the access of the user when the 
business work on the EC2 instance is completed. 
- Lock down unnecessary ports. 
- Audit any proprietary applications that the user may be running on the EC2 instance. 
- Provide temporary escalated privileges, such as sudo, for users who need to perform occasional 
privileged tasks. 
The IAM is useful when users are required to work with AWS resources and actions, such as 
launching an instance. It is not useful to connect (RDP / SSH) with an instance. 
Reference: 
http://aws.amazon.com/articles/1233/ 


QUESTION: 11 
A user is planning to make a mobile game, which can be played online or offline, and will be 
hosted on EC2. The user wants to ensure that if someone breaks the highest score or achieves 
some milestone, they can inform all their colleagues through email. Which of the below 
mentioned AWS services helps achieve this goal? 

A. AWS Simple Workflow Service. 
B. AWS Simple Queue Service. 
C. Amazon Cognito 
D. AWS Simple Email Service. 




Answer(s): D 
Explanation: 
Amazon Simple Email Service (Amazon SES) is a highly scalable and cost-effective 
email-sending service for businesses and developers. It integrates with other AWS services, making it 
easy to send emails from applications that are hosted on AWS. 
Reference: 
http://aws.amazon.com/ses/faqs/ 


QUESTION: 12 
Which one of the following operations is NOT a DynamoDB operation? 

A. BatchWriteItem 
B. DescribeTable 
C. BatchGetItem 
D. BatchDeleteItem 

Answer(s): D 
Explanation: 
In DynamoDB, DeleteItem deletes a single item in a table by primary key, but BatchDeleteItem 
doesn't exist. 
Reference: 
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/operationlist.html 


QUESTION: 13 
True or False: In DynamoDB, Scan operations are always eventually consistent. 

A. No, scan is like Query operation 
B. Yes 
C. No, scan is strongly consistent by default 
D. No, you can optionally request strongly consistent scan. 

Answer(s): B 
Explanation: 
In DynamoDB, Scan operations are always eventually consistent.  
Reference: 
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/APISummary.html 


QUESTION: 14 
Regarding Amazon SNS, when you want to subscribe to a topic and receive notifications to your 
email, in the Protocol drop-down box, you should select _______. 

A. Email 
B. Message 
C. SMTP 
D. IMAP 

Answer(s): A 
Explanation: 
In Amazon SNS, when you want to subscribe to a topic and receive notifications to your email, 
select Email in the Protocol drop-down box. Enter an email address you can use to receive the 
notification in the Endpoint field. 
Reference: 
http://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html 
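
For reference, the same email subscription can be created programmatically; the topic ARN and address below are placeholders:

```python
import boto3

sns = boto3.client("sns")

# SNS sends a confirmation email that the recipient must accept
# before notifications are delivered.
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:my-topic",
    Protocol="email",
    Endpoint="user@example.com",
)
```
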


QUESTION: 15 
In Amazon EC2, which of the following is the type of monitoring data for Amazon EBS volumes 
that is available automatically in 5-minute periods at no charge? 

A. Primary 
B. Basic 
C. Initial 
D. Detailed 

Answer(s): B 
Explanation: 
Basic is the type of monitoring data (for Amazon EBS volumes) that is available automatically 
in 5-minute periods at no charge. 
Reference: 
http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/monitoring-volume-status.html 


QUESTION: 16 
In DynamoDB, to get a detailed listing of secondary indexes on a table, you can use the _____ 
action. 

A. DescribeTable 
B. BatchGetItem 
C. GetItem 
D. TableName 

Answer(s): A 
Explanation: 
In DynamoDB, DescribeTable returns information about the table, including the current status of 
the table, when it was created, the primary key schema, and any indexes on the table. 

 

Reference: 
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html 


QUESTION: 17 
A user has launched an EC2 instance. However, due to some reason the instance was 
terminated. If the user wants to find out the reason for termination, where can he find the 
details? 

A. The user can get information from the AWS console, by checking the Instance description 
under the State transition reason label 
B. The user can get information from the AWS console, by checking the Instance description 
under the Instance Termination reason label 
C. The user can get information from the AWS console, by checking the Instance description 
under the Instance Status Change reason label 
D. It is not possible to find the details after the instance is terminated 

Answer(s): A 
Explanation: 
An EC2 instance, once terminated, may be available in the AWS console for a while after 
termination. The user can find the details about the termination from the description tab under 
the label State transition reason. If the instance is still running, there will be no reason listed. If 
the user has explicitly stopped or terminated the instance, the reason will be "User initiated 
shutdown". 
Reference: 
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_InstanceStraightToTerminated.html 


QUESTION: 18 
___________ is a task coordination and state management service for cloud applications. 

A. Amazon SES 
B. Amazon SWF 
C. Amazon FPS 
D. Amazon SNS 

Answer(s): B 
Explanation: 
Amazon Simple Workflow (Amazon SWF) is a task coordination and state management service 
for cloud applications. With Amazon SWF, you can stop writing complex glue-code and state 
machinery and invest more in the business logic that makes your applications unique. 
Reference: 
http://aws.amazon.com/swf/ 


QUESTION: 19 
When you create a table with a hash-and-range key, you must define one or more secondary 
indexes on that table. 

A. False, hash-range key is another name for secondary index 

 

B. False, it is optional 
C. True 
D. False, when you have Hash-Range key you cannot define Secondary index 

Answer(s): B 
Explanation: 
When you create a table with a hash-and-range key in DynamoDB, you can also define one or 
more secondary indexes on that table. 
Reference: 
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LSI.html 


QUESTION: 20 
A user is planning to create a structured database in the cloud. Which of the below mentioned 
AWS offerings help the user achieve the goal? 

A. AWS DynamoDB 
B. AWS RDS 
C. AWS SimpleDB 
D. AWS RSD 

Answer(s): B 
Explanation: 
AWS RDS is a managed database server offered by AWS, which makes it easy to set up, 
operate, and scale a relational database or structured data in the cloud.  
Reference: 
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html 


QUESTION: 21 
A user has created a MySQL RDS instance with PIOPS. Which of the below mentioned 
statements will help the user understand the advantage of PIOPS? 

A. The user can achieve additional dedicated capacity for the EBS I/O with an enhanced RDS 
option 
B. It uses optimized EBS volumes and optimized configuration stacks 
C. It provides a dedicated network bandwidth between EBS and RDS 
D. It uses a standard EBS volume with optimized configuration the stacks 

Answer(s): A 
Explanation: 
RDS DB instance storage comes in two types: standard and provisioned IOPS. Standard 
storage is allocated on the Amazon EBS volumes and connected to the user's DB instance. 
Provisioned IOPS uses optimized EBS volumes and an optimized configuration stack. It 
provides additional, dedicated capacity for the EBS I/O. 
Reference: 
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html 


QUESTION: 22 
A user is accessing an EC2 instance on the SSH port for IP 10.20.30.40. Which one is a secure 
way to configure that the instance can be accessed only from this IP? 

A. In the security group, open port 22 for IP 10.20.30.40/0 
B. In the security group, open port 22 for IP 10.20.30.40/32 
C. In the security group, open port 22 for IP 10.20.30.40/24  
D. In the security group, open port 22 for IP 10.20.30.40 

Answer(s): B 
Explanation: 
In AWS EC2, while configuring a security group, the user needs to specify the IP address in 
CIDR notation. The CIDR IP range 10.20.30.40/32 says it is for a single IP, 10.20.30.40. If the 
user specifies the IP as 10.20.30.40 only, the security group will not accept it and will ask for it in 
CIDR format.  
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html 
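
A sketch of that rule with boto3 (the security group ID is a placeholder); the /32 suffix restricts access to the single address:

```python
import boto3

ec2 = boto3.client("ec2")

# Open SSH (port 22) to exactly one address by using a /32 CIDR block.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "10.20.30.40/32"}],
    }],
)
```
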


QUESTION: 23 
When a user is detaching an EBS volume from a running instance and attaching it to a new 
instance, which of the below mentioned options should be followed to avoid file system 
damage? 

A. Unmount the volume first 
B. Stop all the I/O of the volume before processing 
C. Take a snapshot of the volume before detaching 
D. Force Detach the volume to ensure that all the data stays intact 

Answer(s): A 
Explanation: 
When a user is trying to detach an EBS volume, the user can either terminate the instance or 
explicitly remove the volume. It is a recommended practice to unmount the volume first to avoid 
any file system damage. 
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-detaching-volume.html 


QUESTION: 24 
A user is planning to host a scalable dynamic web application on AWS. Which of the services 
may not be required by the user to achieve automated scalability? 

A. CloudWatch 
B. S3 
C. AutoScaling 
D. AWS EC2 instances 

Answer(s): B 
Explanation: 
The user can achieve automated scaling by launching different EC2 instances and making them 
a part of an ELB. CloudWatch will be used to monitor the resources, and based on the scaling 
need it will trigger policies. Auto Scaling is then used to scale the instances up or down. 
Reference: 
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/WhatIsAutoScaling.html 


QUESTION: 25 
Which one of the following data types does Amazon DynamoDB not support? 

A. Arrays 
B. String 
C. Binary 
D. Number Set 

Answer(s): A 
Explanation: 
Amazon DynamoDB supports the following data types: 
Scalar data types (like Number, String, and Binary) 
Multi-valued types (like String Set, Number Set, and Binary Set). 
Reference: 
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DataModel.html#DataModel.DataTypes 


QUESTION: 26 
Regarding Amazon SNS, you can send notification messages to mobile devices through any of 
the following supported push notification services, EXCEPT: 

A. Google Cloud Messaging for Android (GCM) 
B. Apple Push Notification Service (APNS) 
C. Amazon Device Messaging (ADM) 
D. Microsoft Windows Mobile Messaging (MWMM) 

Answer(s): D 
Explanation: 
In Amazon SNS, you have the ability to send notification messages directly to apps on mobile 
devices. Notification messages sent to a mobile endpoint can appear in the mobile app as 
message alerts, badge updates, or even sound alerts. Microsoft Windows Mobile Messaging 
(MWMM) doesn't exist and is not supported by Amazon SNS. 
Reference: 
http://docs.aws.amazon.com/sns/latest/dg/SNSMobilePush.html 


QUESTION: 27 
A user plans to use RDS as a managed DB platform. Which of the below mentioned features is 
not supported by RDS? 

A. Automated backup 
B. Automated scaling to manage a higher load 
C. Automated failure detection and recovery 
D. Automated software patching 

Answer(s): B 
Explanation: 
AWS RDS provides a managed DB platform, which offers features such as automated backup, 
patch management, and automated failure detection and recovery. The scaling is not automated, and 
the user needs to plan it with a few clicks. 
Reference: 
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html 


QUESTION: 28 
A user has not enabled versioning on an S3 bucket. What will be the version ID of the object 
inside that bucket? 

A. 0 
B. There wil be no version attached 
C. Null 
D. Blank 

Answer(s): C 
Explanation: 
S3 objects stored in the bucket before the user has set the versioning state have a version ID of 
null. When the user enables versioning, the objects in the bucket do not change, and their version ID 
remains null. 
Reference: 
http://docs.aws.amazon.com/AmazonS3/latest/dev/AddingObjectstoVersionSuspendedBuckets.html 


QUESTION: 29 
A user has created a queue named "myqueue" with SQS. There are four messages published 
to the queue which have not been received by the consumer yet. If the user tries to delete the queue, what 
will happen? 

A. A user can never delete a queue manually. AWS deletes it after 30 days of inactivity on the 
queue. 
B. It will initiate the delete but wait for four days before deleting until all messages are deleted 
automatically. 
C. It will ask the user to delete the messages first. 
D. It will delete the queue. 

Answer(s): D 
Explanation: 
SQS allows the user to move data between distributed components of applications so they can 
perform different tasks without losing messages or requiring each component to be always 
available. The user can delete a queue at any time, whether it is empty or not. It is important to 
note that queues retain messages for a set period of time. By default, a queue retains 
messages for four days. 
Reference: 
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/SQSConcepts.html 


QUESTION: 30 

What happens if your application performs more reads or writes than your provisioned capacity? 

A. Nothing 
B. Requests above your provisioned capacity will be performed, but you will receive 400 error 
codes. 
C. Requests above your provisioned capacity will be performed, but you will receive 200 error 
codes. 
D. Requests above your provisioned capacity will be throttled, and you will receive 400 error 
codes. 

Answer(s): D 
Explanation: 
Speaking about DynamoDB, if your application performs more reads/second or writes/second 
than your table's provisioned throughput capacity allows, requests above your provisioned 
capacity will be throttled and you will receive 400 error codes. 
Reference: 
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ProvisionedThroughputIntro.html 
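
To make that behavior concrete, a sketch of catching the throttling error in boto3 (the table name is a placeholder); production code would normally rely on the SDK's built-in retries instead:

```python
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Books")

try:
    table.put_item(Item={"Author": "H.G. Wells", "Title": "The Invisible Man"})
except ClientError as err:
    # Requests beyond provisioned capacity fail with an HTTP 400 and this code.
    if err.response["Error"]["Code"] == "ProvisionedThroughputExceededException":
        print("Throttled: back off and retry")
    else:
        raise
```
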

QUESTION: 31 
In relation to Amazon SQS, how can you ensure that messages are delivered in order? 

A. Increase the size of your queue 
B. Send them with a timestamp 
C. Give each message a unique id. 
D. AWS cannot guarantee that you will receive messages in the exact order you sent them 

Answer(s): D 
Explanation: 
Amazon SQS makes a best effort to preserve order in messages, but due to the distributed 
nature of the queue, AWS cannot guarantee that you will receive messages in the exact order 
you sent them. You typically place sequencing information or timestamps in your messages so 
that you can reorder them upon receipt. 
Reference: 
https://aws.amazon.com/items/1343?externalID=1343 


QUESTION: 32 
An organization has launched two applications: one for blogging and one for ECM on the same 
AWS Linux EC2 instance running in the AWS VPC. The organization has attached two private 
IPs (primary and secondary) to the above mentioned instance. The organization wants the 
instance OS to recognize the secondary IP address. How can the organization configure this? 

A. Use the ec2-net-utility package which updates routing tables, uses DHCP to refresh the 
secondary IP and adds the network interface. 
B. Use the ec2-net-utils package which will configure an additional network interface and update 
the routing table 
C. Use the ec2-ip-update package which can configure the network interface as well as update 
the secondary IP with DHCP. 
D. Use the ec2-ip-utility package which can update the routing tables as well as refresh the secondary IP using DHCP. 

Answer(s): B 
Explanation: 
A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. It 
enables the user to launch AWS resources into a virtual network that the user has defined. With 
VPC the user can specify multiple private IP addresses for his instances. The number of 
network interfaces and private IP addresses that a user can specify for an instance depends on 
the instance type. This scenario helps when the user wants to host multiple websites on a single 
EC2 instance. After the user has assigned a secondary private IP address to his instance, he 
needs to configure the operating system on that instance to recognize the secondary private IP 
address. For AWS Linux, the ec2-net-utils package can take care of this step. It configures 
additional network interfaces that the user can attach while the instance is running, refreshes 
secondary IP addresses during DHCP lease renewal, and updates the related routing rules. 
Reference: 
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/MultipleIP.html 

QUESTION: 33 
What kind of service is provided by AWS DynamoDB? 

A. Relational Database 
B. NoSQL Database 
C. Dynamic Database 
D. Document Database 

Answer(s): B 
Explanation: 
DynamoDB is a fast, fully managed NoSQL database service. 
Reference: 
http://aws.amazon.com/dynamodb/ 

QUESTION: 34 
In relation to Amazon SQS, how many queues and messages can you have per queue for each 
user? 

A. Unlimited 
B. 10 
C. 256 
D. 500 

Answer(s): A 
Explanation: 
Amazon SQS supports an unlimited number of queues and an unlimited number of messages per 
queue for each user. Please be aware that Amazon SQS automatically deletes messages that 
have been in the queue for more than 4 days. 
Reference:  
https://aws.amazon.com/items/1343?externalID=1343 

 

QUESTION: 35 
Doug has created a VPC with CIDR 10.201.0.0/16 in his AWS account. In this VPC he has 
created a public subnet with CIDR block 10.201.31.0/24. While launching a new EC2 from the 
console, he is not able to assign the private IP address 10.201.31.6 to this instance. Which is 
the most likely reason for this issue? 

A. Private IP address 10.201.31.6 is not part of the associated subnet's IP address range. 
B. Private IP address 10.201.31.6 is blocked via ACLs in Amazon infrastructure as a part of 
platform security. 
C. Private address IP 10.201.31.6 is currently assigned to another interface. 
D. Private IP address 10.201.31.6 is reserved by Amazon for IP networking purposes. 

Answer(s): C 
Explanation: 
In Amazon VPC, you can assign any Private IP address to your instance as long as it is: 
Part of the associated subnet's IP address range 
Not reserved by Amazon for IP networking purposes 
Not currently assigned to another interface 
Reference:  
http://aws.amazon.com/vpc/faqs/ 

QUESTION: 36 
Regarding Amazon SQS, are there restrictions on the names of Amazon SQS queues? 

A. No 
B. Yes. Queue names must be unique within an AWS account and you cannot use hyphens (-) 
and underscores (_) 
C. Yes. Queue names are limited to 80 characters and queue names must be unique within an 
AWS account 
D. Yes. Queue names are limited to 80 characters but queue names do not need to be unique 
within an AWS account 

Answer(s): C 
Explanation: 
Queue names are limited to 80 characters. Alphanumeric characters plus hyphens (-) and 
underscores (_) are allowed. Queue names must be unique within an AWS account. After you 
delete a queue, you can reuse the queue name. 
Reference:  
https://aws.amazon.com/sqs/faqs/ 

QUESTION: 37 
In Amazon SNS, to send push notifications to mobile devices using Amazon SNS and ADM, you 
need to obtain the following, except: 

A. Client secret 
B. Client ID 
C. Device token 
D. Registration ID 

Answer(s): C 
Explanation: 
To send push notifications to mobile devices using Amazon SNS and ADM, you need to obtain 
the following: Registration ID and Client secret. 
Reference:  
http://docs.aws.amazon.com/sns/latest/dg/SNSMobilePushPrereq.html 

QUESTION: 38 
Regarding Amazon SNS, to begin using Amazon SNS mobile push notifications, you first need 
__________that uses one of the supported push notification services: APNS, GCM, or ADM. 

A. an access policy for the mobile endpoints 
B. to active push notification service of Amazon SNS 
C. to know the type of mobile device operating system 
D. an app for the mobile endpoints 

Answer(s): D 
Explanation: 
In Amazon SNS, to begin using Amazon SNS mobile push notifications, you first need an app 
for the mobile endpoints that uses one of the supported push notification services: APNS, GCM, 
or ADM. After you've registered and configured the app to use one of these services, you 
configure Amazon SNS to send push notifications to the mobile endpoints. 
Reference:  
http://docs.aws.amazon.com/sns/latest/dg/SNSMobilePush.html 

QUESTION: 39 
How many types of block devices does Amazon EC2 support? 

A. 5 
B. 1 
C. 2 
D. 4 

Answer(s): C 
Explanation: 
Amazon EC2 supports 2 types of block devices. 
Reference: 
http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/block-device-mapping-concepts.html 

QUESTION: 40 
ExamKiller (with AWS account ID 111122223333) has created 50 IAM users for its 
organization's employees. ExamKiller wants to make the AWS console login URL for all IAM 
users: https://examkiller.signin.aws.amazon.com/console/. How can this be configured? 

A. Create a bucket with the name ExamKiller and map it with the IAM alias 
B. It is not possible to have capital letters as a part of the alias name 
C. The user needs to use Route 53 to map the ExamKiller domain and IAM URL 
D. For the AWS account, create an alias ExamKiller for the IAM login 

Answer(s): D 
Explanation: 
If a user wants the URL of the AWS IAM sign-in page to have the company name instead of the 
AWS account ID, he can create an alias for his AWS account ID. The alias must be unique 
across all Amazon Web Services products and contain only digits, lowercase letters, and 
hyphens.  
Reference:  
http://docs.aws.amazon.com/IAM/latest/UserGuide/AccountAlias.html 
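
A sketch of creating such an alias with boto3; note the alias itself must be lowercase, which is what makes the sign-in URL https://examkiller.signin.aws.amazon.com/console/ possible:

```python
import boto3

iam = boto3.client("iam")

# Aliases may contain only digits, lowercase letters, and hyphens,
# and must be unique across all AWS accounts.
iam.create_account_alias(AccountAlias="examkiller")
```
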

QUESTION: 41 
Can a user get a notification of each instance start / terminate configured with Auto Scaling? 

A. Yes, always 
B. No 
C. Yes, if configured with the Auto Scaling group 
D. Yes, if configured with the Launch Config 

Answer(s): C 
Explanation: 
The user can get notifications using SNS if he has configured the notifications while creating the 
Auto Scaling group. 
Reference: 
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/GettingStartedTutorial.html 
QUESTION: 42 
AutoScaling is configured with 3 AZs. Each zone has 5 instances running. If AutoScaling wants 
to terminate an instance based on the policy action, which instance will it terminate first? 

A. Terminate the first launched instance 
B. Randomly select the instance for termination 
C. Terminate the instance from the AZ which does not have a high AWS load 
D. Terminate the instance from the AZ which has instances running near to the billing hour 

Answer(s): 
Explanation: 
Before Auto Scaling selects an instance to terminate, it first identifies the Availability Zone that 
has more instances than the other Availability Zones used by the group. If all the Availability 
Zones have the same number of instances, it identifies a random Availability Zone.  
Reference:  
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/us-termination-policy.html 

QUESTION: 43 
In regard to DynamoDB, can I delete local secondary indexes? 

A. Yes, if it is a primary hash key index 

B. No 
C. Yes, if it is a local secondary index 
D. Yes, if it is a global secondary index 

Answer(s): B
Explanation: 
In DynamoDB, an index cannot be modified once it is created.  
Reference:  
http://aws.amazon.com/dynamodb/faqs/#security_anchor 

QUESTION: 44 
You need to develop and run some new applications on AWS and you know that Elastic 
Beanstalk and CloudFormation can both help as a deployment mechanism for a broad range of 
AWS resources. Which of the following statements best describes the differences between 
Elastic Beanstalk and CloudFormation? 

A. Elastic Beanstalk uses Elastic load balancing and CloudFormation doesn't. 
B. CloudFormation is faster in deploying applications than Elastic Beanstalk. 
C. CloudFormation is much more powerful than Elastic Beanstalk, because you can actually 
design and script custom resources 
D. Elastic Beanstalk is faster in deploying applications than CloudFormation. 

Answer(s): C
Explanation: 
These services are designed to complement each other. AWS Elastic Beanstalk provides an 
environment to easily develop and run applications in the cloud. It is integrated with developer 
tools and provides a one-stop experience for you to manage the lifecycle of your applications. 
AWS CloudFormation is a convenient deployment mechanism for a broad range of AWS 
resources. It supports the infrastructure needs of many different types of applications such as 
existing enterprise applications, legacy applications, applications built using a variety of AWS 
resources and container-based solutions (including those built using AWS Elastic Beanstalk). 
AWS CloudFormation introduces two new concepts: The template, a JSON-format, text-based 
file that describes all the AWS resources you need to deploy to run your application and the 
stack, the set of AWS resources that are created and managed as a single unit when AWS 
CloudFormation instantiates a template. 
Reference:  
http://aws.amazon.com/cloudformation/faqs/ 



QUESTION: 45 
Can you SSH to your private machines that reside in a VPC from outside without elastic IP? 

A. Yes, but only if you have direct connect or vpn 
B. Only if you are using a non-US region 
C. Only if you are using a US region 
D. No 

Answer(s): A
Explanation: 
The instances that reside in the private subnets of your VPC are not reachable from the Internet, meaning that it is not possible to SSH into them. To interact with them you can use a 
bastion server, located in a public subnet, that will act as a proxy for them. 
You can also connect if you have direct connect or vpn. 
Reference:  
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html 

QUESTION: 46 
Does AWS CloudFormation support Amazon EC2 tagging? 

A. It depends if the Amazon EC2 tagging has been defined in the template. 
B. No, it doesn't support Amazon EC2 tagging. 
C. No, CloudFormation doesn't support any tagging 
D. Yes, AWS CloudFormation supports Amazon EC2 tagging 

Answer(s): D
Explanation: 
In AWS CloudFormation, Amazon EC2 resources that support the tagging feature can also be 
tagged in an AWS template. The tag values can refer to template parameters, other resource 
names, resource attribute values (e.g. addresses), or values computed by simple functions 
(e.g., a concatenated list of strings). 
Reference:  
http://aws.amazon.com/cloudformation/faqs/ 
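
As a sketch of that tagging feature, here is a tiny template (held as a Python dict and launched with boto3) whose tag value references a template parameter; the AMI ID, names, and stack name are all illustrative:

import json
import boto3

# A tiny template whose EC2 tag value references a template parameter.
template = {
    "Parameters": {
        "Env": {"Type": "String", "Default": "dev"}
    },
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",  # placeholder AMI ID
                "InstanceType": "t2.micro",
                "Tags": [
                    {"Key": "Environment", "Value": {"Ref": "Env"}},
                    {"Key": "Name", "Value": "web-1"}
                ]
            }
        }
    }
}

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(
    StackName="tagging-demo",
    TemplateBody=json.dumps(template),
)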

QUESTION: 47 
A user has created a MySQL RDS instance. Which of the below mentioned options is 
mandatory to configure while creating an instance? 

A. Multi AZ deployment setup 
B. Automated backup window 
C. Availability Zone 
D. Maintenance window 

Answer(s): A
Explanation: 
When creating an RDS instance, the user needs to specify whether it is Multi AZ or not. If the 
user does not provide the value for the zone, the maintenance window or automated backup 
window, RDS will automatically select the value. 
Reference:  
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html 

QUESTION: 48 
A user has enabled the automated backup, but not specified the backup window. What will RDS 
do in this case? 

A. Will throw an error on instance launch 
B. RDS will take 3 AM - 3:30 AM as the default window 
C. RDS assigns a random time period based on the region 
D. Will not allow to launch a DB instance 

Answer(s): C
Explanation: 
If the user does not specify a preferred backup window while enabling an automated backup, 
Amazon RDS assigns a default 30-minute backup window which is selected at random from an 
8-hour block of time per region. 
Reference: 
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.BackingUpAndRestoring
AmazonR DSInstances.html 

QUESTION: 49 
A user is planning to host a web server as well as an app server on a single EC2 instance which 
is a part of the public subnet of a VPC. How can the user setup to have two separate public IPs 
and separate security groups for both the application as well as the web server? 

A. Launch a VPC instance with two network interfaces. Assign a separate security group to 
each and AWS will assign a separate public IP to them. 
B. Launch VPC with two separate subnets and make the instance a part of both the subnets. 
C. Launch a VPC instance with two network interfaces. Assign a separate security group and 
elastic IP to them. 
D. Launch a VPC with ELB such that it redirects requests to separate VPC instances of the 
public subnet. 

Answer(s): C
Explanation: 
If you need to host multiple websites (with different IPs) on a single EC2 instance, the following 
is the suggested method from AWS: 
- Launch a VPC instance with two network interfaces. 
- Assign elastic IPs from the VPC EIP pool to those interfaces (because, when the user has 
attached more than one network interface to an instance, AWS cannot assign public IPs to them). 
- Assign separate security groups if separate security groups are needed. 
This scenario also helps for operating network appliances, such as firewalls or load balancers 
that have multiple private IP addresses for each network interface.  
Reference:  
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/MultipleIP.html 

QUESTION: 50 
An online gaming site asked you if you can deploy a database that is a fast, highly scalable 
NoSQL database service in AWS for a new site it wants to build. Which database should 
you recommend? 

A. Amazon Redshift 
B. Amazon SimpleDB 
C. Amazon DynamoDB 
D. Amazon RDS 

Answer(s): C
Explanation: 
Amazon DynamoDB is ideal for database applications that require very low latency and predictable performance at any scale but don't need complex querying capabilities like joins or 
transactions. Amazon DynamoDB is a fully-managed NoSQL database service that offers high 
performance, predictable throughput and low cost. It is easy to set up, operate, and scale. With 
Amazon DynamoDB, you can start small, specify the throughput and storage you need, and 
easily scale your capacity requirements on the fly. Amazon DynamoDB automatically partitions 
data over a number of servers to meet your request capacity. In addition, DynamoDB 
automatically replicates your data synchronously across multiple Availability Zones within an 
AWS Region to ensure high-availability and data durability. 
Reference:  
https://aws.amazon.com/running_databases/#dynamodb_anchor 

QUESTION: 51 
How long are the messages kept on an SQS queue by default? 

A. If a message is not read, it is never deleted 
B. 2 weeks 
C. 1 day 
D. 4 days 

Answer(s): D
Explanation: 
The SQS message retention period is configurable and can be set anywhere from 1 minute to 2 
weeks. The default is 4 days and once the message retention limit is reached your messages 
will be automatically deleted. The option for longer message retention provides greater flexibility 
to allow for longer intervals between message production and consumption. 
Reference:  
https://aws.amazon.com/sqs/faqs/ 
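
A minimal boto3 sketch (queue name is a placeholder); MessageRetentionPeriod is given in seconds, from 60 up to 1209600 (14 days):

import boto3

sqs = boto3.client("sqs")

# Retention is set in seconds: 60 (1 minute) up to 1209600 (14 days).
# Omit the attribute and the queue keeps the 4-day (345600 s) default.
sqs.create_queue(
    QueueName="demo-queue",
    Attributes={"MessageRetentionPeriod": "1209600"},
)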

QUESTION: 52 
Regarding Amazon SWF, the coordination logic in a workflow is contained in a software 
program called a ________. 

A. Handler 
B. Decider 
C. Coordinator 
D. Worker 

Answer(s): B
Explanation: 
In Amazon SWF, the coordination logic in a workflow is contained in a software program called 
a decider. The decider schedules activity tasks, provides input data to the activity workers, 
processes events that arrive while the workflow is in progress, and ultimately ends (or closes) 
the workflow when the objective has been completed. 
Reference:  
http://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dg-intro-to-swf.html 

QUESTION: 53 
A user has attached one RDS security group with 5 RDS instances. The user has changed the ingress rule for the security group. What will be the initial status of the ingress rule? 

A. Approving 
B. Implementing 
C. Authorizing 
D. It is not possible to assign a single group to multiple DB instances 

Answer(s): C
Explanation: 
When the user makes any changes to the RDS security group the rule status will be authorizing 
for some time until the changes are applied to all instances that the group is connected with. 
Once the changes are propagated the rule status will change to authorized. 
Reference: 
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithSecurityGroup
s.html 

QUESTION: 54 
A user has attached an EBS volume to a running Linux instance as a "/dev/sdf" device. The 
user is unable to see the attached device when he runs the command "df -h". What is the 
possible reason for this? 

A. The volume is not in the same AZ of the instance 
B. The volume is not formatted 
C. The volume is not attached as a root device 
D. The volume is not mounted 

Answer(s): D
Explanation: 
When a user creates an EBS volume and attaches it as a device, it is required to mount the 
device. If the device/volume is not mounted it will not be available in the listing.  
Reference:  
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonEBS.html 

QUESTION: 55 
A user has setup an application on EC2 which uses the IAM user access key and secret access 
key to make secure calls to S3. The user wants to temporarily stop the access to S3 for that 
IAM user. What should the root owner do? 

A. Delete the IAM user 
B. Change the access key and secret access key for the users 
C. Disable the access keys for the IAM user 
D. Stop the instance 

Answer(s): C
Explanation: 
If the user wants to temporarily stop the access to S3 the best solution is to disable the keys. 
Deleting the user will result in a loss of all the credentials and the app will not be useful in the 
future. If the user stops the instance, IAM users can still access S3. Changing the keys does 
not help either, as they are still active. The best possible solution is to disable the keys. 
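
A minimal boto3 sketch of the disable step (user name and key ID are placeholders):

import boto3

iam = boto3.client("iam")

# User name and key ID are placeholders; list_access_keys returns
# the real key IDs for a user.
iam.update_access_key(
    UserName="app-user",
    AccessKeyId="AKIAIOSFODNN7EXAMPLE",
    Status="Inactive",  # set back to "Active" to restore access
)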





AWS Certified developer associate exam samples 



These are sample exam questions I found through Googling.

Feel free to use them if they help.

If you have any other study materials, please share them with me. ( solkit70@gmail.com )




https://blog.cloudthat.com/sample-questions-for-amazon-web-services-certified-developer-associate-certification/

 

AWS Fundamentals

1. What is a worker with respect to SWF?

a. Workers are programs that interact with Amazon SWF to get tasks, process the received task, and return the results
b. Workers are ec2 instances which can create s3 buckets and process SQS messages
c. Workers are the people in the warehouse processing orders for Amazon
d. Workers are the component of IIS which run on windows platform under the w3wp.exe process

2. Which of the below statements about DynamoDB are true? (Select any 2)

a. DynamoDB uses a Transaction-Level Read Consistency
b. DynamoDB uses optimistic concurrency control
c. DynamoDB uses conditional writes for consistency
d. DynamoDB restricts an item access during reads
e. DynamoDB restricts item access during writes

Designing and Developing

1. A security system monitors 600 cameras, saving image metadata every 1 minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time.

How much write throughput is required for the target table?

a. 6000
b. 10
c. 3600
d. 60
e. 600

2. Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?

a. Eventual Consistent Reads
b. Conditional reads for consistency
c. Strongly Consistent Reads
d. Not possible

3. You run a Query operation which returned all the data attributes for the selected items. You are only interested in seeing a few attributes. How do you achieve this in DynamoDB?

a. This is not possible
b. Use ProjectExpression
c. Use ExpressionAttribute
d. Use ProjectionExpression

Deployment and Security

1.     AWS Elastic Beanstalk currently supports which of the following platforms? (select any 2)

a. Java with Apache
b. IBM with Websphere
c. .Net
d. Perl

 2. Which of the following features allow organizations to leverage a commercial federation server as an identity bridge, providing secure single sign-on into the AWS console without storing user keys and without additional passwords or sign-on?

a. Web Identification Services
b. Web Identity Federation
c. Active Directory Authentication Services
d. SAML federation

3. Your web service is burning expensive CPU cycles by constantly polling SQS queues for messages. How can you avoid this?

a. Use Elasticache to cache the messages, rather than SQS.
b. Enable SQS Long Polling
c. Modify web service code to only poll a few minutes
d. SQS automatically pushes messages to the web service, so this should not be a problem

Debugging

1.     The output named BackupLoadBalancerDNSName returns the DNS name of the resource with the logical ID of BackupLoadBalancer.

Which of the following represents a valid AWS CloudFormation Template?

a. "Outputs" : {
"BackupLoadBalancerDNSName" : {
"Description" : "The DNSName of the backup load balancer",
"Value" : { "Ls::GetAtt" : [ "BackupLoadBalancer", "DNS" ] },
}

b. "Outputs" : {
"BackupLoadBalancerDNSName" : {
"Description" : "The DNSName of the backup load balancer",
"Value" : { "Fn::GetAtt" : [ "BackupLoadBalancer", "DNSName" ] },
}

c. "Outputs" : {
"BackupLoadBalancerDNSName" : {
"Description" : "The DNSName of the backup load balancer",
"Value" : { "Fn::PostAtt" : [ "BackupLoadBalancer", "Name" ] },
}

d. "Outputs" : {
"BackupLoadBalancerDNSName" : {
"Description" : "The DNSName of the backup load balancer",
"Value" : { "Fn::GetAtt" : [ "BackupLoadBalancer", ] },
}

2. According to the IAM policy below, which is the most appropriate possibility?

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1459162621000",
      "Effect": "Allow",
      "Action": ["sns:CreateTopic", "sns:Subscribe", "sns:DeleteTopic"],
      "Resource": ["*"]
    },
    {
      "Effect": "Deny",
      "Action": ["sns:DeleteTopic"],
      "Resource": ["*"]
    }
  ]
}

a. User can perform CreateTopic, Subscribe and DeleteTopic
b. User is denied to perform only DeleteTopic
c. User can perform CreateTopic and Subscribe but is denied the DeleteTopic operation
d. The above policy is invalid

Answers:

AWS Fundamentals

1.     a

2.     b,c

Designing and Developing

1.     b (worked out in the note after the answers)

2.     c

3.     d

Deployment and Security

1.     a,c

2.     d

3.     b (see the long-polling sketch after the answers)

Debugging

1.      b

2.      c
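
A quick check on the Designing and Developing throughput answer: 600 cameras each writing one 1 KB item per minute is 600 writes per 60 seconds, i.e. 10 writes per second, and a 1 KB item costs one write capacity unit per write, so 10 units of write throughput suffice.

For the Deployment and Security long-polling answer, here is a minimal boto3 sketch (the queue URL is a placeholder). Setting WaitTimeSeconds up to 20 makes ReceiveMessage hold the connection until a message arrives instead of returning immediately with an empty response:

import boto3

sqs = boto3.client("sqs")

# Long poll: the call blocks for up to 20 seconds waiting for a message,
# which avoids burning CPU on empty receives in a tight loop.
response = sqs.receive_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/demo-queue",
    WaitTimeSeconds=20,
    MaxNumberOfMessages=10,
)
for message in response.get("Messages", []):
    print(message["Body"])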

 

https://blog.cloudthat.com/preparing-for-aws-certified-developer-certification-exam/

 

http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html

 

http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryAndScanGuidelines.html

 

http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-sticky-sessions.html

 




Question NO 18

A startup's photo-sharing site is deployed in a VPC. An ELB distributes web traffic across two subnets. ELB session stickiness is configured to use the AWS-generated session cookie, with a session TTL of 5 minutes. The web server Auto Scaling Group is configured as: min-size=4, max-size=4. The startup is preparing for a public launch by running load-testing software installed on a single EC2 instance running in us-west-2a. After 60 minutes of load-testing, the web server logs show: Which recommendations can help ensure load-testing HTTP requests are evenly distributed across the four web servers? Choose 2 answers.

Options

A. Launch and run the load-tester EC2 instance from us-east-1 instead.
B. Re-configure the load-testing software to re-resolve DNS for each web request.
C. Use a 3rd-party load-testing service which offers globally-distributed test clients.
D. Configure ELB and Autoscaling to distribute across us-west-2a and us-west-2f.
E. Configure ELB session stickiness to use the app-specific session cookie.

 

 

Answer: B,E 

 

 

Which statements about DynamoDB are true? Choose 2 answers

A. DynamoDB uses optimistic concurrency control
B. DynamoDB uses a pessimistic locking model
C. DynamoDB restricts item access during reads
D. DynamoDB restricts item access during writes
E. DynamoDB uses conditional writes for consistency


Answer: A,E

 

 

AWS-Certified-Developer-Associate Real Questions

NO.1 EBS Snapshots occur _____

A.  Synchronously
B.  Asynchronously
C.  Weekly

Answer: B

 

 

While creating the snapshots using the API, which Action should I be using?


A.  DeploySnapshot

B.  CreateSnapshot

C.  MakeSnapShot

D.  Fresh Snapshot

 

Answer: B

 

 

https://tutorialsnation.com/aws-certification-dumps

 

255 Questions




 

http://m8010-241-dumps-pdf.blogspot.com/2016/03/amazon-aws-certified-developer.html

 

NO.1 Which features can be used to restrict access to data in S3? Choose 2 answers
A. Set an S3 ACL on the bucket or the object.
B. Set an S3 Bucket policy.
C. Use S3 Virtual Hosting
D. Create a CloudFront distribution for the bucket
E. Enable IAM Identity Federation.
Answer: A,B


NO.2 Which of the following services are included at no additional cost with the use of the AWS
platform? Choose 2 answers
A. Auto Scaling
B. Elastic Load Balancing
C. Simple Workflow Service
D. CloudFormation
E. Elastic Compute Cloud
F. Simple Storage Service
Answer: A,D


NO.3 What is one key difference between an Amazon EBS-backed and an instance-store backed
instance?
A. Virtual Private Cloud requires EBS backed instances
B. Instance-store backed instances can be stopped and restarted.
C. Auto scaling requires using Amazon EBS-backed instances.
D. Amazon EBS-backed instances can be stopped and restarted
Answer: D

NO.4 What item operation allows the retrieval of multiple items from a DynamoDB table in a single
API call?
A. BatchGetItem
B. GetItemRange
C. GetMultipleItems
D. GetItem
Answer: A


NO.5 How can software determine the public and private IP addresses of the Amazon EC2 instance
that it is running on?
A. Query the local instance metadata.
B. Use ipconfig or ifconfig command.
C. Query the local instance userdata.
D. Query the appropriate Amazon CloudWatch metric.
Answer: A


NO.6 What is the maximum number of S3 Buckets available per AWS account?
A. 500 per account
B. there is no limit
C. 100 per account
D. 100 per IAM user
E. 100 per region
Answer: C

 

 

http://www.certificationking.com/download/Amazon-AWS.htm

 

 

AWS_certified_developer_associate_examsample

 

Which of the following statements about SQS is true?

A. Messages will be delivered exactly once and messages will be delivered in First in, First out order

B. Messages will be delivered exactly once and message delivery order is indeterminate

C. Messages will be delivered one or more times and messages will be delivered in First in, First out order

D. Messages will be delivered one or more times and message delivery order is indeterminate

EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:

A. can be used to launch EC2 instances in any AWS region

B. can only be used to launch EC2 instances in the same country as the AMI is stored

C. can only be used to launch EC2 instances in the same AWS region as the AMI is stored

D. can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored

Company B provides an online image recognition service and utilizes SQS to decouple system components for scalability. The SQS consumers poll the imaging queue as often as possible to keep end- to-end throughput as high as possible. However, Company B is realizing that polling in tight loops is burning CPU cycles and increasing costs with empty responses. How can Company B reduce the number of empty responses?

A. Set the imaging queue VisibilityTimeout attribute to 20 seconds

B. Set the imaging queue ReceiveMessageWaitTimeSeconds attribute to 20 seconds

C. Set the imaging queue MessageRetentionPeriod attribute to 20 seconds

D. Set the DelaySeconds parameter of a message to 20 seconds

You attempt to store an object in the US-STANDARD region in Amazon S3, and receive a confirmation that it has been successfully stored. You then immediately make another API call and attempt to read this object. S3 tells you that the object does not exist. What could explain this behavior?

A. US-STANDARD uses eventual consistency and it can take time for an object to be readable in a bucket.

B. Objects in Amazon S3 do not become visible until they are replicated to a second region.

C. US-STANDARD imposes a 1 second delay before new objects are readable

D. You exceeded the bucket object limit, and once this limit is raised the object will be visible.

You have reached your account limit for the number of CloudFormation stacks in a region. How do you increase your limit?

A. Make an API call

B. Contact AWS

C. Use the console

D. You cannot increase your limit

Which statements about DynamoDB are true? (Pick 2 correct answers)

A. DynamoDB uses a pessimistic locking model

B. DynamoDB uses optimistic concurrency control

C. DynamoDB uses conditional writes for consistency

D. DynamoDB restricts item access during reads

E.  DynamoDB restricts item access during writes

 

 

 

 

 

1) Your CloudFormation template launches a two-tier web application in us-east-1. When you attempt to create a development stack in us-west-1, the process fails.

What could be the problem?

A) The AMIs referenced in the template are not available in us-west-1.

B) The IAM roles referenced in the template are not valid in us-west-1.

C) Two ELB Classic Load Balancers cannot have the same Name tag.

D) CloudFormation templates can be launched only in a single region.

 

 

2) Your application reads commands from an SQS queue and sends them to web services hosted by your partners. When a partner's endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost.

How can you accommodate the partners' broken web services without wasting your resources?

A) Create a delay queue and set DelaySeconds to 30 seconds.

B) Requeue the message with a VisibilityTimeout of 30 seconds.

C) Create a dead letter queue and set the Maximum Receives to 3.

D) Requeue the message with a DelaySeconds of 30 seconds.

 

 

3) Your application must write to an SQS queue. Your corporate security policies require that AWS credentials are always encrypted and are rotated at least once a week.

How can you securely provide credentials that allow your application to write to the queue?

A) Have the application fetch an access key from an Amazon S3 bucket at run time.

B) Launch the application's Amazon EC2 instance with an IAM role.

C) Encrypt an access key in the application source code.

D) Enroll the instance in an Active Directory domain and use AD authentication.

 

 

4) Which operation could return temporarily inconsistent results?

A) Getting an object from Amazon S3 after it was initially created

B) Selecting a row from an Amazon RDS database after it was inserted

C) Selecting a row from an Amazon RDS database after it was deleted

D) Getting an object from Amazon S3 after it was deleted

 

 

5) You are creating a DynamoDB table with the following attributes:

- PurchaseOrderNumber (partition key)
- CustomerID
- PurchaseDate
- TotalPurchaseValue

 

One of your applications must retrieve items from the table to calculate the total value of purchases for a particular customer over a date range.

 

What secondary index do you need to add to the table?

A) Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute

B) Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute

C) Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute

D) Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute

 

 

6) Your CloudFormation template has the following Mappings section:

"Mappings" : {

  "RegionMap" : {

    "us-east-1"

    "us-west-1"

: { "32" : "ami-6411e20d"},

: { "32" : "ami-c9c7978c"}

} }

 

 

Which JSON snippet will result in the value "ami-6411e20d" when a stack is launched in us-east-1?

A) { "Fn::FindInMap" : [ "Mappings", { "RegionMap" : ["us-east-1", "us-west-1"] }, "32"]}

B) { "Fn::FindInMap" : [ "Mappings", { "Ref" : "AWS::Region" }, "32"]}

C) { "Fn::FindInMap" : [ "RegionMap", { "Ref" : "AWS::Region" }, "32"]}

D) { "Fn::FindInMap" : [ "RegionMap", { "RegionMap" : "AWS::Region" }, "32"]}

 

 

7) Your web application reads an item from your DynamoDB table, changes an attribute, and then writes the item back to the table. You need to ensure that one process doesn't overwrite a simultaneous change from another process.

How can you ensure concurrency?

A) Implement optimistic concurrency by using a conditional write.

B) Implement pessimistic concurrency by using a conditional write.

C) Implement optimistic concurrency by locking the item upon read.

D) Implement pessimistic concurrency by locking the item upon read.

 

 

8) Your application triggers events that must be delivered to all your partners. The exact partner list is constantly changing: some partners run a highly available endpoint, and other partners’ endpoints are online only a few hours each night. Your application is mission-critical, and communication with your partners must not introduce delay in its operation. A delay in delivering the event to one partner cannot delay delivery to other partners.

What is an appropriate way to code this?

A) Implement an Amazon SWF task to deliver the message to each partner. Initiate an Amazon SWF workflow execution.

B) Send the event as an Amazon SNS message. Instruct your partners to create an HTTP endpoint and subscribe that endpoint to the Amazon SNS topic.

C) Create one SQS queue per partner. Iterate through the queues and write the event to each one. Partners retrieve messages from their queue.

D) Send the event as an Amazon SNS message. Create one SQS queue per partner that subscribes to the Amazon SNS topic. Partners retrieve messages from their queue.

 

 

9) You have reached your account limit for the number of CloudFormation stacks in a region.

How do you increase your limit?

A) Use the AWS Command Line Interface.

B) Send an email to limits@amazon.com with the subject “CloudFormation.”

C) Use the Support Center in the AWS Management Console.

D) All service limits are fixed and cannot be increased.

 

 

10) You have a three-tier web application (web, app, and data) in a single Amazon VPC. The web and app tiers each span two Availability Zones, are in separate subnets, and sit behind ELB Classic Load Balancers. The data tier is a Multi-AZ Amazon RDS MySQL database instance in database subnets. When you call the database tier from your app tier instances, you receive a timeout error.

What could be causing this?

A) The IAM role associated with the app tier instances does not have rights to the MySQL database.

B) The security group for the Amazon RDS instance does not allow traffic on port 3306 from the app instances.

C) The Amazon RDS database instance does not have a public IP address.

D) There is no route defined between the app tier and the database tier in the Amazon VPC.

 



 

Answers

 

1) A – AMIs are stored in a region and cannot be accessed in other regions. To use the AMI in another region, you must copy it to that region. IAM roles are valid across the entire account.

 

2) C – After a message is taken from the queue and returned for the maximum number of retries, it is automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.

 

3) B – IAM roles are based on temporary security tokens, so they are rotated automatically. Keys in the source code cannot be rotated (and are a very bad idea). It’s impossible to retrieve credentials from an S3 bucket if you don’t already have credentials for that bucket. Active Directory authorization will not grant access to AWS resources.

 

4) D – S3 has eventual consistency for overwrite PUTS and DELETES.

 

5) C – The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the TotalPurchaseValue into the index provides all the data needed to satisfy the use case.
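
A sketch of that index at table-creation time with boto3 (table and index names are illustrative):

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="PurchaseOrders",
    AttributeDefinitions=[
        {"AttributeName": "PurchaseOrderNumber", "AttributeType": "S"},
        {"AttributeName": "CustomerID", "AttributeType": "S"},
        {"AttributeName": "PurchaseDate", "AttributeType": "S"},
    ],
    KeySchema=[{"AttributeName": "PurchaseOrderNumber", "KeyType": "HASH"}],
    GlobalSecondaryIndexes=[{
        "IndexName": "CustomerID-PurchaseDate-index",
        "KeySchema": [
            {"AttributeName": "CustomerID", "KeyType": "HASH"},
            {"AttributeName": "PurchaseDate", "KeyType": "RANGE"},
        ],
        # Project only what the query needs, per the explanation above.
        "Projection": {
            "ProjectionType": "INCLUDE",
            "NonKeyAttributes": ["TotalPurchaseValue"],
        },
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)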

 

6) C – Learn how to create and reference mappings in the AWS CloudFormation documentation.

 

7) A – Optimistic concurrency depends on checking a value upon save to ensure that it has not changed. Pessimistic concurrency prevents a value from changing by locking the item or row in the database. DynamoDB does not support item locking, and conditional writes are perfect for implementing optimistic concurrency.
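
A minimal optimistic-concurrency sketch with boto3, assuming a hypothetical table whose items carry a numeric Version attribute:

import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Items")  # hypothetical table

# Read the item and remember the version we saw.
item = table.get_item(Key={"Id": "item-1"})["Item"]

try:
    # Write back only if no one else bumped the version in the meantime.
    table.put_item(
        Item={**item, "Price": 25, "Version": item["Version"] + 1},
        ConditionExpression="Version = :seen",
        ExpressionAttributeValues={":seen": item["Version"]},
    )
except ClientError as err:
    if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
        pass  # lost the race: re-read the item and retry
    else:
        raise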

 

8) D – There are two challenges here: the command must be “fanned out” to a variable pool of partners, and your app must be decoupled from the partners because they are not highly available. Sending the command as an SNS message achieves the fan-out via its publication/subscribe model, and using an SQS queue for each partner decouples your app from the partners. Writing the message to each queue directly would cause more latency for your app and would require your app to monitor which partners were active. It would be difficult to write an Amazon SWF workflow for a rapidly changing set of partners.
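
A rough boto3 sketch of that fan-out wiring (ARNs are placeholders, and in practice each queue's access policy must also allow the topic to send to it):

import boto3

sns = boto3.client("sns")

topic_arn = sns.create_topic(Name="partner-events")["TopicArn"]

# One SQS queue per partner (placeholder ARNs). Each queue's access
# policy must additionally allow this topic to send messages to it.
partner_queue_arns = [
    "arn:aws:sqs:us-east-1:111122223333:partner-a",
    "arn:aws:sqs:us-east-1:111122223333:partner-b",
]
for queue_arn in partner_queue_arns:
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# One publish fans out to every subscribed queue independently.
sns.publish(TopicArn=topic_arn, Message="order-created")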

 

9) C – The Support Center in the AWS Management Console allows customers to request limit increases by creating a case.

10) B – Security groups block all network traffic by default, so if a group is not correctly configured, it can lead to a timeout error. MySQL's own user permissions, not IAM, control access to the database. All subnets in an Amazon VPC have routes to all other subnets. Internal traffic within an Amazon VPC does not require public IP addresses.

 


 

http://free-braindumps.com/amazon/free-aws-certified-developer-associate-braindumps.html?p=2

 


 


[AWS Certificate] Developer - VPC memo

2017. 11. 29. 10:56 | Posted by 솔웅




VPC (*****) Overview (Architect, Developer and Sysop)



Think of a VPC as a virtual data center in the cloud.


Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the Amazon Web Services (AWS) Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.


You can easily customize the network configuration for your Amazon Virtual Private Cloud. For example, you can create a public-facing subnet for your webservers that has access to the Internet, and place your backend systems such as databases or application servers in a private-facing subnet with no Internet access. You can leverage multiple layers of security, including security groups and network access control lists, to help control access to Amazon EC2 instances in each subnet.


Additionally, you can create a Hardware Virtual Private Network (VPN) connection between your corporate datacenter and your VPC and leverage the AWS cloud as an extension of your corporate datacenter.





What can you do with a VPC?


- Launch instances into a subnet of your choosing

- Assign custom IP address ranges in each subnet

- Configure route tables between subnets

- Create internet gateway and attach it to our VPC

- Much better security control over your AWS resources

- Instance security groups

- Subnet network access control list (ACLS)



Default VPC vs. Custom VPC


- Default VPC is user friendly, allowing you to immediately deploy instances.

- All Subnets in default VPC have a route out to the internet

- Each EC2 instance has both a public and private IP address



VPC Peering

- Allows you to connect one VPC with another via a direct network route using private IP addresses

- Instances behave as if they were on the same private network

- You can peer VPCs with other AWS accounts as well as with other VPCs in the same account.

- Peering is in a star configuration : i.e. 1 central VPC peers with 4 others. NO TRANSITIVE PEERING!!!




Exam Tips


- Think of a VPC as a logical datacenter in AWS.

- Consists of IGWs (or Virtual Private Gateways), Route Tables, Network Access Control Lists, Subnets, and Security Groups

- 1 Subnet = 1 Availability Zone

- Security Groups are Stateful; Network Access Control Lists are Stateless

- NO TRANSITIVE PEERING


===================================


* Create VPC





Automatically created Route Tables, Network ACLs and Security Groups


Create 1st Subnet - 10.0.1.0-us-east-1a


VPCs and Subnet  - http://docs.aws.amazon.com/ko_kr/AmazonVPC/latest/UserGuide/VPC_Subnets.html 


Create 2nd Subnet - 10.0.2.0-us-east-1b



* Internet Gateway

Create Internet Gateway - Attach the VPC

1 VPC can be assigned to 1 Internet Gateway (*****)



* Route Table

Create new route table with the VPC

-> Navigate to Routes tab in Route Table -> Edit -> Add another route 0.0.0.0/0 - Target = above internet gateway -> Save

Add another route ::/0 - Target = above gateway - Save


-> Navigate to Subnet Associations tab -> Edit -> select first one as main


Go to Subnets - Set Auto-assign Public IP to Yes for first one

-> Subnet Actions -> Modify auto-assign IP settings -> Check Enable auto-assign public IPv4 address
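
The console walkthrough above can also be scripted. A minimal boto3 sketch, assuming a 10.0.0.0/16 VPC CIDR (the notes only show the subnet CIDRs) and us-east-1a for the first subnet; the second subnet is omitted for brevity:

import boto3

ec2 = boto3.client("ec2")

# Create the VPC and the first (public) subnet.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]

# Create and attach the internet gateway (1 IGW per VPC).
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Route table with a default route out to the internet, associated
# with the public subnet.
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)

# Auto-assign a public IPv4 address to instances launched in this subnet.
ec2.modify_subnet_attribute(SubnetId=subnet_id, MapPublicIpOnLaunch={"Value": True})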



* Create New EC2 Instance


Select the VPC for Network, Select Subnet (first one), 


Create 2nd EC2 instance - Select the VPC for Network, Select Subnet (2nd one), 


1st Instance has public IP address

2nd Instance has no public IP address


* Open a Terminal


1178578-C02NW6G1G3QD:AWS_SSH changsoopark$ ssh ec2-user@34.228.40.70 -i EC2KeyPair.pem.txt 

The authenticity of host '34.228.40.70 (34.228.40.70)' can't be established.

ECDSA key fingerprint is SHA256:CNhUvY2BVwpZrGXQOE/SWocZS17IKYP8xKWKApE6P9c.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '34.228.40.70' (ECDSA) to the list of known hosts.


       __|  __|_  )

       _|  (     /   Amazon Linux AMI

      ___|\___|___|


https://aws.amazon.com/amazon-linux-ami/2017.09-release-notes/

[ec2-user@ip-10-0-1-232 ~]$ sudo su

[root@ip-10-0-1-232 ec2-user]# yum update -y





=========================================================


Network Address Translation (NAT)



NAT Instances & NAT Gateways



http://docs.aws.amazon.com/ko_kr/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html



Exam Tips - NAT instances


- When creating a NAT instance, Disable Source/Destination Check on the Instance

- NAT instances must be in a public subnet

- There must be a route out of the private subnet to the NAT instance, in order for this to work.

- The amount of traffic that NAT instances can support depends on the instance size. If you are bottlenecking, increase the instance size.

- You can create high availability using Autoscaling Groups, multiple subnets in different AZs, and a script to automate failover

- Behind a security group





Exam Tips - NAT Gateways


- Preferred by the enterprise

- Scale automatically up to 10Gbps

- No need to patch

- Not associated with security groups

- Automatically assigned a public ip address

- Remember to update your route tables

- No need to disable Source/Destination Checks

- More secure than a NAT instance




=========================================


Network Access Control Lists vs. Security Groups


can block specific IP address


Ephemeral Port


Exam Tips - Network ACLs


- Your VPC automatically comes with a default network ACL, and by default it allows all outbound and inbound traffic

- You can create custom network ACLs. By default, each custom network ACL denies all inbound and outbound traffic until you add rules

- Each subnet in your VPC must be associated with a network ACL. If you don't explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL.

- You can associate a network ACL with multiple subnets; however, a subnet can be associated with only one network ACL at a time. When you associate a network ACL with a subnet, the previous association is removed

- Network ACLs contain a numbered list of rules that is evaluated in order, starting with the lowest numbered rule.

- Network ACLs have separate inbound and outbound rules, and each rule can either allow or deny traffic

- Network ACLs are stateless; responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).

- Block IP addresses using network ACLs, not security groups
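
For that last tip, a minimal boto3 sketch of a deny rule (the ACL ID and CIDR are placeholders); the deny at rule number 90 is evaluated before a typical allow at rule 100:

import boto3

ec2 = boto3.client("ec2")

# Deny all inbound traffic from one CIDR; lower rule numbers win,
# so 90 is checked before the usual 100 "allow" rule.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0abc1234",
    RuleNumber=90,
    Protocol="-1",          # all protocols
    RuleAction="deny",
    Egress=False,           # inbound rule
    CidrBlock="203.0.113.0/24",
)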


========================================


Custom VPC's and ELB


=========================================


VPC Flow Logs



VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.


Flow logs can be created at 3 levels

- VPC

- Subnet

- Network Interface Level





Create Flow Log 


Create Log Group in CloudWatch - Create Flow log
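
Scripted, the same step looks roughly like this with boto3 (the VPC ID, log group, and IAM role are placeholders, and the role must already allow the flow-logs service to write to CloudWatch Logs):

import boto3

ec2 = boto3.client("ec2")

ec2.create_flow_logs(
    ResourceIds=["vpc-0abc1234"],
    ResourceType="VPC",            # could also be Subnet or NetworkInterface
    TrafficType="ALL",             # ACCEPT, REJECT, or ALL
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-role",
)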


VPC Flow Logs Exam Tips


- You cannot enable flow logs for VPCs that are peered with your VPC unless the peer VPC is in your account

- You cannot tag a flow log

- After You've created a flow log, you cannot change its configuration; for example, you can't associate a different IAM role with the flow log.


Not all IP Traffic is monitored


- Traffic generated by instances when they contact the Amazon DNS server. If you use your own DNS server, then all traffic to that DNS server is logged

- Traffic generated by a Windows instance for Amazon Windows license activation

- Traffic to and from 169.254.169.254 for instance metadata

- DHCP traffic

- Traffic to the reserved IP address for the default VPC router.


=================================================


NAT vs. Bastion


Exam Tips - NAT vs Bastions


- A NAT is used to provide internet traffic to EC2 instances in private subnets

- A Bastion is used to securely administer EC2 instances (using SSH or RDP) in private subnets. In Australia we call them jump boxes.


==================================================


VPC End Points


Create Endpoint 
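
A minimal boto3 sketch for a gateway endpoint to S3 (the IDs and region are placeholders); the route table entry to the service is added for you:

import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.s3",  # gateway endpoint for S3
    RouteTableIds=["rtb-0def5678"],
)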



===================================================


VPC Clean up



===================================================


VPC Summary


NAT instances


- When creating a NAT instance, Disable Source/Destination Check on the Instance.

- NAT instances must be in a public subnet

- There must be a route out of the private subnet to the NAT instance, in order for this to work.

- The amount of traffic that NAT instances can support depends on the instance size. If you are bottlenecking, increase the instance size.

- You can create high availability using Autoscaling Groups, multiple subnets in different AZs, and a script to automate failover.

- Behind a security group



NAT Gateways


- Preferred by the enterprise

- Scale automatically up to 10Gbps

- No need to patch

- Not associated with security groups

- Automatically assigned a public ip address

- Remember to update your route tables

- No need to disable Source/Destination Checks

- More secure than a NAT instance



Network ACLs


- Your VPC automatically comes with a default network ACL, and by default it allows all outbound and inbound traffic.

- You can create custom network ACLs. By default, each custom network ACL denies all inbound and outbound traffic until you add rules.

- Each subnet in your VPC must be associated with a network ACL. If you don't explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL.

- You can associate a network ACL with multiple subnets; however, a subnet can be associated with only one network ACL at a time. When you associate a network ACL with a subnet, the previous association is removed

- Network ACLs contain a numbered list of rules that is evaluated in order, starting with the lowest numbered rule.

- Network ACLs have separate inbound and outbound rules, and each rule can either allow or deny traffic

- Network ACLs are stateless; responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa.)

- Block IP addresses using network ACLs, not security groups



ALB's


- You will need at least 2 public subnets in order to deploy an application load balancer



VPC Flow Logs Exam Tips


- You cannot enable flow logs for VPCs that are peered with your VPC unless the peer VPC is in your account

- You cannot tag a flow log.

- After you've created a flow log, you cannot change its configuration; for example, you can't associate a different IAM role with the flow log.



Not all IP Traffic is monitored;


- Traffic generated by instances when they contact the Amazon DNS server. If you use your own DNS server, then all traffic to that DNS server is logged.

- Traffic generated by a Windows instance for Amazon Windows license activation

- Traffic to and from 169.254.169.254 for instance metadata

- DHCP traffic

- Traffic to the reserved IP address for the default VPC router.



=================================



VPC Quiz


- VPC stands for Virtual Private Cloud : True

- Security groups act like a firewall at the instance level whereas ______ are an additional layer of security that act at the subnet level.

  : Network ACL's

- Select the incorrect statement

  1. In Amazon VPC, an instance retains its private IP

  2. It is possible to have private subnets in VPC

  3. A subnet can be associated with multiple Access Control Lists

  4. You may only have 1 internet gateway per VPC

==> Answer is 3

- How many VPC's am I allowed in each AWS Region by default?  : 5

- How many internet gateways can I attach to my custom VPC?  : 1
