This blog is where I organize the new technologies and information I come across while working as a developer in the field. I have been fortunate to work as a consultant on projects for large companies in the US, so I get many opportunities to encounter new technologies. I would like to share information about the tools used on US IT projects with as many people as possible.


AWS Certified Developer Associate Exam Samples



These are sample exam questions I found by googling.

I hope you find them helpful.

If you have any other materials besides these, please share them with me ( solkit70@gmail.com ).




https://blog.cloudthat.com/sample-questions-for-amazon-web-services-certified-developer-associate-certification/

 

AWS Fundamentals

1. What is a worker with respect to SWF?

a. Workers are programs that interact with Amazon SWF to get tasks, process the received task, and return the results
b. Workers are ec2 instances which can create s3 buckets and process SQS messages
c. Workers are the people in the warehouse processing orders for Amazon
d. Workers are components of IIS which run on the Windows platform under the w3wp.exe process

2. Which of the below statements about DynamoDB are true? (Select any 2)

a. DynamoDB uses a Transaction-Level Read Consistency
b. DynamoDB uses optimistic concurrency control
c. DynamoDB uses conditional writes for consistency
d. DynamoDB restricts an item access during reads
e. DynamoDB restricts item access during writes

Designing and Developing

1. A security system monitors 600 cameras, saving image metadata every 1 minute to an Amazon DynamoDB table. Each sample involves 1 KB of data, and the data writes are evenly distributed over time.

How much write throughput is required for the target table?

a. 6000
b. 10
c. 3600
d. 60
e. 600
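
A quick back-of-the-envelope check for the required write throughput, using the standard DynamoDB sizing rule that one write capacity unit covers one write of up to 1 KB per second (a minimal sketch in Python):

import math

cameras = 600
writes_per_minute_per_camera = 1
item_size_kb = 1

writes_per_second = cameras * writes_per_minute_per_camera / 60   # 10 writes/sec
wcu_per_write = math.ceil(item_size_kb / 1)                        # a 1 KB item needs 1 WCU
print(writes_per_second * wcu_per_write)                           # 10.0 -> option b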

2. Which read request in DynamoDB returns a response with the most up-to-date data, reflecting the updates from all prior write operations that were successful?

a. Eventual Consistent Reads
b. Conditional reads for consistency
c. Strongly Consistent Reads
d. Not possible
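
For reference, strong consistency in DynamoDB is requested per read call; a minimal boto3 sketch, with a hypothetical table and key name:

import boto3

table = boto3.resource("dynamodb").Table("CameraMetadata")  # hypothetical table

response = table.get_item(
    Key={"CameraId": "cam-0001"},   # hypothetical key
    ConsistentRead=True,            # strongly consistent read
)
item = response.get("Item")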

3. You run a Query operation which returned all the data attributes for the selected items. You are only interested in seeing a few attributes. How do you achieve this in DynamoDB?

a. This is not possible
b. Use ProjectExpression
c. Use ExpressionAttribute
d. Use ProjectionExpression
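
A minimal boto3 sketch of limiting a Query to a few attributes with ProjectionExpression; the table, key, and attribute names are made up for illustration:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Orders")  # hypothetical table

response = table.query(
    KeyConditionExpression=Key("CustomerID").eq("C-42"),
    ProjectionExpression="OrderId, OrderDate, #st",
    ExpressionAttributeNames={"#st": "Status"},  # alias because Status is a reserved word
)
items = response["Items"]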

Deployment and Security

1.     AWS Elastic Beanstalk currently supports which of the following platforms? (select any 2)

a. Java with Apache
b. IBM with Websphere
c. .Net
d. Perl

 2. Which of the following features allow organizations to leverage a commercial federation server as an identity bridge, providing secure single sign-on into the AWS console without storing user keys and without additional passwords or sign-on?

a. Web Identification Services
b. Web Identity Federation
c. Active Directory Authentication Services
d. SAML federation

3. Your web service is burning expensive CPU cycles by constantly polling SQS queues for messages. How can you avoid this?

a. Use Elasticache to cache the messages, rather than SQS.
b. Enable SQS Long Polling
c. Modify web service code to only poll a few minutes
d. SQS automatically pushes messages to the web service, so this should not be a problem
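
A minimal boto3 sketch of SQS long polling: the WaitTimeSeconds parameter makes ReceiveMessage wait up to 20 seconds for work instead of returning immediately with an empty response. The queue URL is a placeholder.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"  # placeholder

response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,   # long polling
)
for message in response.get("Messages", []):
    print(message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])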

Debugging

1.     The output named BackupLoadBalancerDNSName returns the DNS name of the resource with the logical ID of BackupLoadBalancer.

Which of the following represents a valid AWS CloudFormation Template?

a. "Outputs" : {
     "BackupLoadBalancerDNSName" : {
       "Description" : "The DNSName of the backup load balancer",
       "Value" : { "Ls::GetAtt" : [ "BackupLoadBalancer", "DNS" ] }
     }
   }

b. "Outputs" : {
     "BackupLoadBalancerDNSName" : {
       "Description" : "The DNSName of the backup load balancer",
       "Value" : { "Fn::GetAtt" : [ "BackupLoadBalancer", "DNSName" ] }
     }
   }

c. "Outputs" : {
     "BackupLoadBalancerDNSName" : {
       "Description" : "The DNSName of the backup load balancer",
       "Value" : { "Fn::PostAtt" : [ "BackupLoadBalancer", "Name" ] }
     }
   }

d. "Outputs" : {
     "BackupLoadBalancerDNSName" : {
       "Description" : "The DNSName of the backup load balancer",
       "Value" : { "Fn::GetAtt" : [ "BackupLoadBalancer", ] }
     }
   }

2. According to the IAM policy below, which is the most appropriate possibility?


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1459162621000",
      "Effect": "Allow",
      "Action": ["sns:CreateTopic", "sns:Subscribe", "sns:DeleteTopic"],
      "Resource": ["*"]
    },
    {
      "Effect": "Deny",
      "Action": ["sns:DeleteTopic"],
      "Resource": ["*"]
    }
  ]
}

a. User can perform CreateTopic, Subscribe, and DeleteTopic
b. User is denied only the DeleteTopic operation

c. User can perform CreateTopic and Subscribe but is denied the DeleteTopic operation
d. The above policy is invalid

Answers:

AWS Fundamentals

1.     a

2.     b,c

Designing and Developing

1.     b

2.     c

3.     d

Deployment and Security

1.     a,c

2.     d

3.     b

Debugging

1.      b

2.      c

 

https://blog.cloudthat.com/preparing-for-aws-certified-developer-certification-exam/

 

http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html

 

http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/QueryAndScanGuidelines.html

 

http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-sticky-sessions.html

 




Question NO 18

A startup's photo-sharing site is deployed in a VPC. An ELB distributes web traffic across two subnets. ELB session stickiness is configured to use the AWS-generated session cookie, with a session TTL of 5 minutes. The web server Auto Scaling Group is configured as: min-size=4, max-size=4. The startup is preparing for a public launch by running load-testing software installed on a single EC2 instance running in us-west-2a. After 60 minutes of load-testing, the web server logs show: Which recommendations can help ensure load-testing HTTP requests are evenly distributed across the four web servers? Choose 2 answers.

Options

A. Launch and run the load-tester EC2 instance from us-east-1 instead.
B. Re-configure the load-testing software to re-resolve DNS for each web request.
C. Use a 3rd-party load-testing service which offers globally-distributed test clients.
D. Configure ELB and Autoscaling to distribute across us-west-2a and us-west-2f.
E. Configure ELB session stickiness to use the app-specific session cookie.

 

 

Answer: B,E 

 

 

Which statements about DynamoDB are true? Choose 2 answers

A. DynamoDB uses optimistic concurrency control
B. DynamoDB uses a pessimistic locking model
C. DynamoDB restricts item access during reads
D. DynamoDB restricts item access during writes
E. DynamoDB uses conditional writes for consistency


Answer: A,E

 

 


NO.1 EBS Snapshots occur _____

A.  Synchronously
B.  Asynchronously
C.  Weekly

Answer: B

 

 

While creating the snapshots using the API, which Action should I be using?


A.  DeploySnapshot

B.  CreateSnapshot

C.  MakeSnapShot

D.  Fresh Snapshot

 

Answer: B

 

 

https://tutorialsnation.com/aws-certification-dumps

 

255 Questions




 

http://m8010-241-dumps-pdf.blogspot.com/2016/03/amazon-aws-certified-developer.html

 

NO.1 Which features can be used to restrict access to data in S3? Choose 2 answers
A. Set an S3 ACL on the bucket or the object.
B. Set an S3 Bucket policy.
C. Use S3 Virtual Hosting
D. Create a CloudFront distribution for the bucket
E. Enable IAM Identity Federation.
Answer: A,B


NO.2 Which of the following services are included at no additional cost with the use of the AWS
platform? Choose 2 answers
A. Auto Scaling
B. Elastic Load Balancing
C. Simple Workflow Service
D. CloudFormation
E. Elastic Compute Cloud
F. Simple Storage Service
Answer: A,D


NO.3 What is one key difference between an Amazon EBS-backed and an instance-store backed
instance?
A. Virtual Private Cloud requires EBS backed instances
B. Instance-store backed instances can be stopped and restarted.
C. Auto scaling requires using Amazon EBS-backed instances.
D. Amazon EBS-backed instances can be stopped and restarted
Answer: D

NO.4 What item operation allows the retrieval of multiple items from a DynamoDB table in a single
API call?
A. BatchGetItem
B. GetItemRange
C. GetMultipleItems
D. GetItem
Answer: A
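
A minimal boto3 sketch of BatchGetItem, fetching several items in one call; the table name and keys are made up for illustration:

import boto3

dynamodb = boto3.resource("dynamodb")

response = dynamodb.batch_get_item(
    RequestItems={
        "Orders": {  # hypothetical table
            "Keys": [
                {"OrderId": "o-1001"},
                {"OrderId": "o-1002"},
                {"OrderId": "o-1003"},
            ]
        }
    }
)
items = response["Responses"]["Orders"]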


NO.5 How can software determine the public and private IP addresses of the Amazon EC2 instance
that it is running on?
A. Query the local instance metadata.
B. Use ipconfig or ifconfig command.
C. Query the local instance userdata.
D. Query the appropriate Amazon CloudWatch metric.
Answer: A
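
A minimal sketch of querying the instance metadata service for the instance's own addresses (IMDSv1 style for brevity; instances that enforce IMDSv2 need a session token first):

import urllib.request

BASE = "http://169.254.169.254/latest/meta-data/"

def metadata(path):
    with urllib.request.urlopen(BASE + path, timeout=2) as resp:
        return resp.read().decode()

private_ip = metadata("local-ipv4")
public_ip = metadata("public-ipv4")   # present only if the instance has a public IP
print(private_ip, public_ip)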


NO.6 What is the maximum number of S3 Buckets available per AWS account?
A. 500 per account
B. there is no limit
C. 100 per account
D. 100 per IAM user
E. 100 per region
Answer: E

 

 

http://www.certificationking.com/download/Amazon-AWS.htm

 

 

AWS_certified_developer_associate_examsample

 

Which of the following statements about SQS is true?

A. Messages will be delivered exactly once and messages will be delivered in First in, First out order

B. Messages will be delivered exactly once and message delivery order is indeterminate

C. Messages will be delivered one or more times and messages will be delivered in First in, First out order

D. Messages will be delivered one or more times and message delivery order is indeterminate

EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI:

A. can be used to launch EC2 instances in any AWS region

B. can only be used to launch EC2 instances in the same country as the AMI is stored

C. can only be used to launch EC2 instances in the same AWS region as the AMI is stored

D. can only be used to launch EC2 instances in the same AWS availability zone as the AMI is stored

Company B provides an online image recognition service and utilizes SQS to decouple system components for scalability. The SQS consumers poll the imaging queue as often as possible to keep end-to-end throughput as high as possible. However, Company B is realizing that polling in tight loops is burning CPU cycles and increasing costs with empty responses. How can Company B reduce the number of empty responses?

A. Set the imaging queue VisibilityTimeout attribute to 20 seconds

B. Set the imaging queue ReceiveMessageWaitTimeSeconds attribute to 20 seconds

C. Set the imaging queue MessageRetentionPeriod attribute to 20 seconds

D. Set the DelaySeconds parameter of a message to 20 seconds
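
For reference, the queue-level long-polling attribute named in option B can be set like this (a minimal boto3 sketch; the queue URL is a placeholder):

import boto3

sqs = boto3.client("sqs")
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/imaging-queue",  # placeholder
    Attributes={"ReceiveMessageWaitTimeSeconds": "20"},  # every ReceiveMessage long-polls by default
)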

You attempt to store an object in the US-STANDARD region in Amazon S3, and receive a confirmation that it has been successfully stored. You then immediately make another API call and attempt to read this object. S3 tells you that the object does not exist. What could explain this behavior?

A. US-STANDARD uses eventual consistency and it can take time for an object to be readable in a bucket.

B. Objects in Amazon S3 do not become visible until they are replicated to a second region.

C. US-STANDARD imposes a 1 second delay before new objects are readable

D. You exceeded the bucket object limit, and once this limit is raised the object will be visible.

You have reached your account limit for the number of CloudFormation stacks in a region. How do you increase your limit?

A. Make an API call

B. Contact AWS

C. Use the console

D. You cannot increase your limit

Which statements about DynamoDB are true? (Pick 2 correct answers)

A. DynamoDB uses a pessimistic locking model

B. DynamoDB uses optimistic concurrency control

C. DynamoDB uses conditional writes for consistency

D. DynamoDB restricts item access during reads

E.  DynamoDB restricts item access during writes

 

 

 

 

 

1) Your CloudFormation template launches a two-tier web application in us-east-1. When you attempt to create a development stack in us-west-1, the process fails.

What could be the problem?

A) The AMIs referenced in the template are not available in us-west-1.

B) The IAM roles referenced in the template are not valid in us-west-1.

C) Two ELB Classic Load Balancers cannot have the same Name tag.

D) CloudFormation templates can be launched only in a single region.

 

 

2) Your application reads commands from an SQS queue and sends them to web services hosted by your partners. When a partner's endpoint goes down, your application continually returns their commands to the queue. The repeated attempts to deliver these commands use up resources. Commands that can’t be delivered must not be lost.

How can you accommodate the partners' broken web services without wasting your resources?

A) Create a delay queue and set DelaySeconds to 30 seconds.

B) Requeue the message with a VisibilityTimeout of 30 seconds.

C) Create a dead letter queue and set the Maximum Receives to 3.

D) Requeue the message with a DelaySeconds of 30 seconds.

 

 

3) Your application must write to an SQS queue. Your corporate security policies require that AWS credentials are always encrypted and are rotated at least once a week.

How can you securely provide credentials that allow your application to write to the queue?

A) Have the application fetch an access key from an Amazon S3 bucket at run time.

B) Launch the application's Amazon EC2 instance with an IAM role.

C) Encrypt an access key in the application source code.

D) Enroll the instance in an Active Directory domain and use AD authentication.

 

 

4) Which operation could return temporarily inconsistent results?

A) Getting an object from Amazon S3 after it was initially created

B) Selecting a row from an Amazon RDS database after it was inserted

C) Selecting a row from an Amazon RDS database after it was deleted

D) Getting an object from Amazon S3 after it was deleted

 

 

5) You are creating a DynamoDB table with the following attributes:

- PurchaseOrderNumber (partition key)
- CustomerID
- PurchaseDate
- TotalPurchaseValue

 

One of your applications must retrieve items from the table to calculate the total value of purchases for a particular customer over a date range.

 

What secondary index do you need to add to the table?

A) Local secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute

B) Local secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute

C) Global secondary index with a partition key of CustomerID and sort key of PurchaseDate; project the TotalPurchaseValue attribute

D) Global secondary index with a partition key of PurchaseDate and sort key of CustomerID; project the TotalPurchaseValue attribute

 

 

6) Your CloudFormation template has the following Mappings section:

"Mappings" : {

  "RegionMap" : {

    "us-east-1"

    "us-west-1"

: { "32" : "ami-6411e20d"},

: { "32" : "ami-c9c7978c"}

} }

 

 

Which JSON snippet will result in the value "ami-6411e20d" when a stack is launched in us-east-1?

A) { "Fn::FindInMap" : [ "Mappings", { "RegionMap" : ["us-east-1", "us-west-1"] }, "32"]}

B) { "Fn::FindInMap" : [ "Mappings", { "Ref" : "AWS::Region" }, "32"]}

C) { "Fn::FindInMap" : [ "RegionMap", { "Ref" : "AWS::Region" }, "32"]}

D) { "Fn::FindInMap" : [ "RegionMap", { "RegionMap" : "AWS::Region" }, "32"]}

 

 

7) Your web application reads an item from your DynamoDB table, changes an attribute, and then writes the item back to the table. You need to ensure that one process doesn't overwrite a simultaneous change from another process.

How can you ensure concurrency?

A) Implement optimistic concurrency by using a conditional write.

B) Implement pessimistic concurrency by using a conditional write.

C) Implement optimistic concurrency by locking the item upon read.

D) Implement pessimistic concurrency by locking the item upon read.

 

 

8) Your application triggers events that must be delivered to all your partners. The exact partner list is constantly changing: some partners run a highly available endpoint, and other partners’ endpoints are online only a few hours each night. Your application is mission-critical, and communication with your partners must not introduce delay in its operation. A delay in delivering the event to one partner cannot delay delivery to other partners.

What is an appropriate way to code this?

A) Implement an Amazon SWF task to deliver the message to each partner. Initiate an Amazon SWF workflow execution.

B) Send the event as an Amazon SNS message. Instruct your partners to create an HTTP endpoint. Subscribe their HTTP endpoint to the Amazon SNS topic.

C) Create one SQS queue per partner. Iterate through the queues and write the event to each one. Partners retrieve messages from their queue.

D) Send the event as an Amazon SNS message. Create one SQS queue per partner that subscribes to the Amazon SNS topic. Partners retrieve messages from their queue.

 

 

9) You have reached your account limit for the number of CloudFormation stacks in a region.

How do you increase your limit?

A) Use the AWS Command Line Interface.

B) Send an email to limits@amazon.com with the subject “CloudFormation.”

C) Use the Support Center in the AWS Management Console.

D) All service limits are fixed and cannot be increased.

 

 

10) You have a three-tier web application (web, app, and data) in a single Amazon VPC. The web and app tiers each span two Availability Zones, are in separate subnets, and sit behind ELB Classic Load Balancers. The data tier is a Multi-AZ Amazon RDS MySQL database instance in database subnets. When you call the database tier from your app tier instances, you receive a timeout error.

What could be causing this?

A) The IAM role associated with the app tier instances does not have rights to the MySQL database.

B) The security group for the Amazon RDS instance does not allow traffic on port 3306 from the app

instances.

C) The Amazon RDS database instance does not have a public IP address.

D) There is no route defined between the app tier and the database tier in the Amazon VPC.

 



 

Answers

 

1) A – AMIs are stored in a region and cannot be accessed in other regions. To use the AMI in another region, you must copy it to that region. IAM roles are valid across the entire account.

 

2) C – After a message is taken from the queue and returned for the maximum number of retries, it is automatically sent to a dead letter queue, if one has been configured. It stays there until you retrieve it for forensic purposes.
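
A minimal boto3 sketch of the dead letter queue setup described above: after three failed receives a command is moved to the DLQ instead of being retried forever. The queue URL and ARN are placeholders.

import json
import boto3

sqs = boto3.client("sqs")

dlq_arn = "arn:aws:sqs:us-east-1:123456789012:partner-commands-dlq"  # placeholder DLQ
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/partner-commands",  # placeholder
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": dlq_arn,
            "maxReceiveCount": "3",
        })
    },
)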

 

3) B – IAM roles are based on temporary security tokens, so they are rotated automatically. Keys in the source code cannot be rotated (and are a very bad idea). It’s impossible to retrieve credentials from an S3 bucket if you don’t already have credentials for that bucket. Active Directory authorization will not grant access to AWS resources.

 

4) D – S3 has eventual consistency for overwrite PUTS and DELETES.

 

5) C – The query is for a particular CustomerID, so a Global Secondary Index is needed for a different partition key. To retrieve only the desired date range, the PurchaseDate must be the sort key. Projecting the TotalPurchaseValue into the index provides all the data needed to satisfy the use case.
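
A minimal boto3 sketch of querying the global secondary index described above (partition key CustomerID, sort key PurchaseDate, TotalPurchaseValue projected); the table and index names are made up for illustration:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("PurchaseOrders")  # hypothetical table

response = table.query(
    IndexName="CustomerID-PurchaseDate-index",  # hypothetical GSI name
    KeyConditionExpression=Key("CustomerID").eq("C-42")
        & Key("PurchaseDate").between("2017-01-01", "2017-03-31"),
    ProjectionExpression="TotalPurchaseValue",
)
total = sum(item["TotalPurchaseValue"] for item in response["Items"])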

 

6) C – Learn how to create and reference mappings here.

 

7) A – Optimistic concurrency depends on checking a value upon save to ensure that it has not changed. Pessimistic concurrency prevents a value from changing by locking the item or row in the database. DynamoDB does not support item locking, and conditional writes are perfect for implementing optimistic concurrency.
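
A minimal boto3 sketch of optimistic concurrency with a conditional write, as described above: the update succeeds only if the version number read earlier is still current. Table, key, and attribute names are made up for illustration.

import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("Items")  # hypothetical table

def save(item_id, new_payload, expected_version):
    try:
        table.update_item(
            Key={"ItemId": item_id},
            UpdateExpression="SET #p = :p, #v = :new_version",
            ConditionExpression="#v = :expected_version",
            ExpressionAttributeNames={"#p": "Payload", "#v": "Version"},
            ExpressionAttributeValues={
                ":p": new_payload,
                ":new_version": expected_version + 1,
                ":expected_version": expected_version,
            },
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # another writer got there first; re-read and retry
        raise
    return True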

 

8) D – There are two challenges here: the command must be “fanned out” to a variable pool of partners, and your app must be decoupled from the partners because they are not highly available. Sending the command as an SNS message achieves the fan-out via its publication/subscribe model, and using an SQS queue for each partner decouples your app from the partners. Writing the message to each queue directly would cause more latency for your app and would require your app to monitor which partners were active. It would be difficult to write an Amazon SWF workflow for a rapidly changing set of partners.
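
A minimal boto3 sketch of the SNS fan-out pattern described above: one topic, one SQS queue subscribed per partner, a single publish delivering a copy to every queue. ARNs are placeholders, and the SQS queue policy that lets SNS deliver to each queue is omitted for brevity.

import boto3

sns = boto3.client("sns")
topic_arn = sns.create_topic(Name="partner-events")["TopicArn"]

partner_queue_arns = [
    "arn:aws:sqs:us-east-1:123456789012:partner-a-events",  # placeholder
    "arn:aws:sqs:us-east-1:123456789012:partner-b-events",  # placeholder
]
for queue_arn in partner_queue_arns:
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Publishing once delivers a copy to every subscribed partner queue.
sns.publish(TopicArn=topic_arn, Message='{"event": "order-created"}')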

 

9) C – The Support Center in the AWS Management Console allows customers to request limit increases by creating a case.

10) B – Security groups block all network traffic by default, so if a group is not correctly configured, it can lead to a timeout error. Access to the MySQL database is controlled by MySQL's own security, not IAM. All subnets in an Amazon VPC have routes to all other subnets. Internal traffic within an Amazon VPC does not require public IP addresses.

 


 

http://free-braindumps.com/amazon/free-aws-certified-developer-associate-braindumps.html?p=2

 


 


[AWS Certificate] Developer - VPC memo

2017. 11. 29. 10:56 | Posted by 솔웅




VPC (*****) Overview (Architect, Developer and Sysop)



Think of a VPC as a virtual data center in the cloud.


Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the Amazon Web Services (AWS) Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.


You can easily customize the network configuration for your Amazon Virtual Private Cloud. For example, you can create a public-facing subnet for your webservers that has access to the Internet, and place your backend systems such as databases or application servers in a private-facing subnet with no Internet access. You can leverage multiple layers of security, including security groups and network access control lists, to help control access to Amazon EC2 instances in each subnet.


Additionally, you can create a Hardware Virtual Private Network (VPN) connection between your corporate datacenter and your VPC and leverage the AWS cloud as an extension of your corporate datacenter.





What can you do with a VPC?


- Launch instances into a subnet of your choosing

- Assign custom IP address ranges in each subnet

- Configure route tables between subnets

- Create internet gateway and attach it to our VPC

- Much better security control over your AWS resources

- Instance security groups

- Subnet network access control list (ACLS)



Default VPC vs. Custom VPC


- Default VPC is user friendly, allowing you to immediately deploy instances.

- All Subnets in default VPC have a route out to the internet

- Each EC2 instance has both a public and private IP address



VPC Peering

- Allows you to connect one VPC with another via a direct network route using private IP addresses

- Instances behave as if they were on the same private network

- You can peer VPC's with other AWS accounts as well as with other VPCs in the same account.

- Peering is in a star configuration : i.e. 1 central VPC peers with 4 others. NO TRANSITIVE PEERING!!!




Exam Tips


- Think of a VPC as a logical datacenter in AWS.

- Consists of IGWs (or Virtual Private Gateways), Route Tables, Network Access Control Lists, Subnets, and Security Groups

- 1 Subnet = 1 Availability Zone

- Security Groups are Stateful; Network Access Control Lists are Stateless

- NO TRANSITIVE PEERING


===================================


* Create VPC





Automatically created Route Tables, Network ACLs and Security Groups


Create 1st Subnet - 10.0.1.0-us-east-1a


VPCs and Subnet  - http://docs.aws.amazon.com/ko_kr/AmazonVPC/latest/UserGuide/VPC_Subnets.html 


Create 2nd Subnet - 10.0.2.0-us-east-1b



* Internet Gateway

Create Internet Gateway - Attach the VPC

1 VPC can be assigned to 1 Internet Gateway (*****)



* Route Table

Create new route table with the VPC

-> Navigate to Routes tab in Route Table -> Edit -> Add another route 0.0.0.0/0 - Target = above internet gateway -> Save

Add another route ::/0 - Target = above gateway - Save


-> Navigate to Subnet Associations tab -> Edit -> select first one as main


Go to Subnets - Set Auto-assign Public IP to Yes for first one

-> Subnet Actions -> Modify auto-assign IP settings -> Check Enable auto-assign public IPv4 address
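
A rough boto3 equivalent of the console walkthrough above (custom VPC, first public subnet, internet gateway, route table with a default route, auto-assign public IPs); the 10.0.0.0/16 VPC CIDR is assumed and error handling is omitted:

import boto3

ec2 = boto3.client("ec2")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]

igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)

# Instances launched into this subnet get a public IPv4 address automatically.
ec2.modify_subnet_attribute(SubnetId=subnet_id, MapPublicIpOnLaunch={"Value": True})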



* Create New EC2 Instance


Select the VPC for Network, Select Subnet (first one), 


Create 2nd EC2 instance - Select the VPC for Network, Select Subnet (2nd one), 


1st Instance has public IP address

2nd Instance has no public IP address


* Open a Terminal


1178578-C02NW6G1G3QD:AWS_SSH changsoopark$ ssh ec2-user@34.228.40.70 -i EC2KeyPair.pem.txt 

The authenticity of host '34.228.40.70 (34.228.40.70)' can't be established.

ECDSA key fingerprint is SHA256:CNhUvY2BVwpZrGXQOE/SWocZS17IKYP8xKWKApE6P9c.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '34.228.40.70' (ECDSA) to the list of known hosts.


       __|  __|_  )

       _|  (     /   Amazon Linux AMI

      ___|\___|___|


https://aws.amazon.com/amazon-linux-ami/2017.09-release-notes/

[ec2-user@ip-10-0-1-232 ~]$ sudo su

[root@ip-10-0-1-232 ec2-user]# yum update -y





=========================================================


Network Address Translation (NAT)



NAT Instances & NAT Gateways



http://docs.aws.amazon.com/ko_kr/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html



Exam Tips - NAT instances


- When creating a NAT instance, Disable Source/Destination Check on the Instance

- NAT instances must be in a public subnet

- There must be a route out of the private subnet to the NAT instance, in order for this to work.

- The amount of traffic that NAT instances can support depends on the instance size. If you are bottlenecking, increase the instance size.

- You can create high availability using Autoscaling Groups, multiple subnets in different AZs, and a script to automate failover

- Behind a security group





Exam Tips - NAT Gateways


- Preferred by the enterprise

- Scale automatically up to 10Gbps

- No need to patch

- Not associated with security groups

- Automatically assigned a public ip address

- Remember to update your route tables

- No need to disable Source/Destination Checks

- More secure than a NAT instance




=========================================


Network Access Control Lists vs. Security Groups


can block specific IP address


Ephemeral Port


Exam Tips - Network ACLs


- Your VPC automatically comes with a default network ACL, and by default it allows all outbound and inbound traffic

- You can create custom network ACLs. By default, each custom network ACL denies all inbound and outbound traffic until you add rules

- Each subnet in your VPC must be associated with a network ACL. If you don't explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL.

- You can associate a network ACL with multiple subnets; however, a subnet can be associated with only one network ACL at a time. When you associate a network ACL with a subnet, the previous association is removed

- Network ACLs contain a numbered list of rules that is evaluated in order, starting with the lowest numbered rule.

- Network ACLs have separate inbound and outbound rules, and each rule can either allow or deny traffic

- Network ACLs are stateless; responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).

- Block IP addresses using network ACLs not security Groups


========================================


Custom VPC's and ELB


=========================================


VPC Flow Logs



VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.


Flow logs can be created at 3 levels

- VPC

- Subnet

- Network Interface Level





Create Flow Log 


Create Log Group in CloudWatch - Create Flow log
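
A minimal boto3 sketch of the same flow log creation; the VPC ID, log group name, and IAM role ARN are placeholders:

import boto3

ec2 = boto3.client("ec2")
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],   # placeholder VPC ID
    ResourceType="VPC",                      # could also be Subnet or NetworkInterface
    TrafficType="ALL",
    LogGroupName="my-vpc-flow-logs",         # placeholder CloudWatch Logs group
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",  # placeholder
)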


VPC Flow Logs Exam Tips


- You cannot enable flow logs for VPCs that are peered with your VPC unless the peer VPC is in your account

- You cannot tag a flow log

- After You've created a flow log, you cannot change its configuration; for example, you can't associate a different IAM role with the flow log.


Not all IP Traffic is monitored


- Traffic generated by instances when they contact the Amazon DNS server. If you use your own DNS server, then all traffic to that DNS server is logged

- Traffic generated by a Windows instance for Amazon Windows license activation

- Traffic to and from 169.254.169.254 for instance metadata

- DHCP traffic

- Traffic to the reserved IP address for the default VPC router.


=================================================


NAT vs. Bastion


Exam Tips - NAT vs Bastions


- A NAT is used to provide internet traffic to EC2 instances in private subnets

- A Bastion is used to securely administer EC2 instances (using SSH or RDP) in private subnets. In Australia we call them jump boxes.


==================================================


VPC End Points


Create Endpoint 



===================================================


VPC Clean up



===================================================


VPC Summary


NAT instances


- When creating a NAT instance, Disable Source/Destination Check on the Instance.

- NAT instances must be in a public subnet

- There must be a route out of the private subnet to the NAT instance, in order for this to work.

- The amount of traffic that NAT instances can support depends on the instance size. If you are bottlenecking, increase the instance size.

- You can create high availability using Autoscaling Groups, multiple subnets in different AZs, and a script to automate failover.

- Behind a security group



NAT Gateways


- Preferred by the enterprise

- Scale automatically up to 10Gbps

- No need to patch

- Not associated with security groups

- Automatically assigned a public ip address

- Remember to update your route tables

- No need to disable Source/Destination Checks

- More secure than a NAT instance



Network ACLs


- Your VPC automatically comes with a default network ACL, and by default it allows all outbound and inbound traffic.

- You can create custom network ACLs. By default, each custom network ACL denies all inbound and outbound traffic until you add rules.

- Each subnet in your VPC must be associated with a network ACL. If you don't explicitly associate a subnet with a network ACL, the subnet is automatically associated with the default network ACL.

- You can associate a network ACL with multiple subnets; however, a subnet can be associated with only one network ACL at a time. When you associate a network ACL with a subnet, the previous association is removed

- Network ACLs contain a numbered list of rules that is evaluated in order, starting with the lowest numbered rule.

- Network ACLs have separate inbound and outbound rules, and each rule can either allow or deny traffic

- Network ACLs are stateless; responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa.)

- Block IP Addresses using network ACLs not Security Groups



ALB's


- You will need at least 2 public subnets in order to deploy an application load balancer



VPC Flow Logs Exam Tips


- You cannot enable flow logs for VPCs that are peered with your VPC unless the peer VPC is in your account

- You cannot tag a flow log.

- After you've created a flow log, you cannot change its configuration; for example, you can't associate a different IAM role with the flow log.



Not all IP Traffic is monitored;


- Traffic generated by instances when they contact the Amazon DNS server. If you use your own DNS server, then all traffic to that DNS server is logged.

- Traffic generated by a Windows instance for Amazon Windows license activation

- Traffic to and from 169.254.169.254 for instance metadata

- DHCP traffic

- Traffic to the reserved IP address for the default VPC router.



=================================



VPC Quiz


- VPC stands for Virtual Private Cloud : True

- Security groups act like a firewall at the instance level whereas ______ are an additional layer of security that act at the subnet level.

  : Network ACL's

- Select the incorrect statement

  1. In Amazon VPC, an instance retains its private IP

  2. It is possible to have private subnets in VPC

  3. A subnet can be associated with multiple Access Control Lists

  4. You may only have 1 internet gateway per VPC

==> Answer is 3

- How many VPC's am I allowed in each AWS Region by default?  : 5

- How many internet gateways can I attach to my custom VPC?  : 1


[AWS Certificate] Developer - Route53 memo

2017. 11. 25. 08:37 | Posted by 솔웅



Route53 & DNS



What is DNS?


If you've used the internet, you've used DNS. DNS is used to convert human friendly domain names (such as http://acloud.guru) into an Internet Protocol (IP) address (such as http://82.124.53.1).


IP addresses are used by computers to identify each other on the network. IP addresses commonly come in 2 different forms, IPv4 and IPv6.



IPv4 vs. IPv6


The IPv4 space is a 32 bit field and has over 4 billion different addresses (4,294,967,296 to be precise).


IPv6 was created to solve this depletion issue and has an address space of 128 bits which in theory is

340,282,366,920,938,463,463,374,607,431,768,211,456 addresses or 340 undecillion addresses





Top Level Domains


If we look at common domain names such as google.com, bbc.co.uk, acloud.guru, etc., you will notice a string of characters separated by dots (periods). The last word in a domain name represents the "top level domain". The second word in a domain name is known as a second level domain name (this is optional though and depends on the domain name).

.com, .edu, .gov, .co.uk, .gov.uk, .com.au


These top level domain names are controlled by the Internet Assigned Numbers Authority (IANA) in a root zone database which is essentially a database of all available top level domains. You can view this database by visiting

http://www.iana.org/domains/root/db







Domain Registrars



Because all of the names in a given domain name have to be unique there needs to be a way to organize this all so that domain names aren't duplicated. This is where domain registrars come in. A registrar is an authority that can assign domain names directly under one or more top-level domains. These domains are registered with InterNIC, a service of ICANN, which enforces uniqueness of domain names across the Internet. Each domain name becomes registered in a central database known as the WhoIS database.


Popular domain registrars include GoDaddy.com, 123-reg.co.uk etc.



SOA Records


The SOA record stores information about


- The name of the server that supplied the data for the zone.

- The administrator of the zone.

- The current version of the data file.

- The number of seconds a secondary name server should wait before checking for updates

- The number of seconds a secondary name server should wait before retrying a failed zone transfer

- The maximum number of seconds that a secondary name server can use data before it must either be refreshed or expire.

- The default number of seconds for the time-to-live file on resource records.



NS Records


NS stands for Name Server records and are used by Top Level Domain servers to direct traffic to the Content DNS server which contains the authoritative DNS records.



A Records


An 'A' record is the fundamental type of DNS record and the 'A' in A record stands for 'Address'. The A record is used by a computer to translate the name of the domain to the IP address. For example http://www.acloud.guru might point to http://123.10.10.80



TTL


The length that a DNS record is cached on either the Resolving Server or the user's own local PC is equal to the value of the "Time To Live" (TTL) in seconds. The lower the time to live, the faster changes to DNS records take to propagate throughout the internet.



CNAMES


A Canonical Name (CName) can be used to resolve one domain name to another. For example, you may have a mobile website with the domain name http://m.acloud.guru that is used for when users browse to your domain name on their mobile devices. You may also want the name http://mobile.acloud.guru to resolve to this same address.



Alias Records


Alias records are used to map resource record sets in your hosted zone to Elastic Load Balancers, CloudFront distributions, or S3 buckets that are configured as websites.


Alias records work like a CNAME record in that you can map one DNS name (www.example.com) to another 'target' DNS name (elb1234.elb.amazonaws.com).


Key difference - A CNAME can't be used for naked domain names (zone apex record). You can't have a CNAME for http://acloud.guru, it must be either an A record or an Alias.


Alias resource record sets can save you time because Amazon Route 53 automatically recognizes changes in the record sets that the alias resource record set refers to.


For example, suppose an alias resource record set for example.com points to an ELB load balancer at lb1-1234.us-east-1.elb.amazonaws.com. If the IP address of the load balancer changes, Amazon Route 53 will automatically reflect those changes in DNS answers for example.com without any changes to the hosted zone that contains resource record sets for example.com.





Exam Tips


- ELB's do not have pre-defined IPv4 addresses, you resolve to them using a DNS name.

- Understand the difference between an Alias Record and a CNAME.

- Given the choice, always choose an Alias Record over a CNAME


==================================


Route 53 - Register A Domain Name 


AWS Console - Networking - Route 53 - Registered Domains - Register New Domain - 



=====================================


Set up EC2 Instances


Set up 2 Instances - create html files

Set up LoadBalancer - DNS name -> will display html file in 1 of 2 Instances


Change Region

Setup an Instance - create html files - Create new security group - Create new key - Launch

Create new Region ELB


DNS name - display html file in new Region instance


=================================


Simple Routing Policy Lab


- Simple

This is the default routing policy when you create a new record set. This is most commonly used when you have a single resource that performs a given function for your domain, for example, one web server that serves content for the http://acloud.guru website.





AWS Console - Route53 - Create Hosted Zone - click on DNS link - Create Record Set

-> Alias Target - ELB


=========================


- Weighted Routing Policy


Weighted Routing Policies let you split your traffic based on different weights assigned. For example you can set 10% of your traffic to go to US-EAST-1 and 90% to go to EU-WEST-1.


AWS Console - Route 53 - Create Record Set - Alias - Select ELB - Routing Policy : Weighted - Enter Weight (90%) and Set ID - Click on Create Button


Create Record Set - Select other ELB - Enter Weight (10%)
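
A minimal boto3 sketch of the weighted record sets created above: two alias A records with the same name, split 90/10 between two ELBs. The hosted zone ID, domain name, and ELB values are placeholders.

import boto3

route53 = boto3.client("route53")

def weighted_alias(set_id, weight, elb_hosted_zone_id, elb_dns_name):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "example.com.",          # placeholder domain
            "Type": "A",
            "SetIdentifier": set_id,
            "Weight": weight,
            "AliasTarget": {
                "HostedZoneId": elb_hosted_zone_id,   # the ELB's own hosted zone ID
                "DNSName": elb_dns_name,
                "EvaluateTargetHealth": False,
            },
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="ZEXAMPLE12345",  # placeholder hosted zone for example.com
    ChangeBatch={"Changes": [
        weighted_alias("us-east-1", 90, "ZELBZONE1", "elb-use1.example.elb.amazonaws.com"),
        weighted_alias("eu-west-1", 10, "ZELBZONE2", "elb-euw1.example.elb.amazonaws.com"),
    ]},
)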





==========================


Latency Routing Policy


Latency based routing allows you to route your traffic based on the lowest network latency for your end user (i.e. which region will give them the fastest response time).


To use latency-based routing you create a latency resource record set for the Amazon EC2 (or ELB) resource in each region that hosts your website. When Amazon Route 53 receives a query for your site, it selects the latency resource record set for the region that gives the user the lowest latency. Route 53 then responds with the value associated with that resource record set.


AWS Console - Route 53 - Create Record Set - Alias Target (ELB) - Routing Policy (Latency) - Set ID - Select Region 1


AWS Console - Route 53 - Create Record Set - Alias Target (ELB) - Routing Policy (Latency) - Set ID - Select Region 2




==========================


Failover Routing Policy



Failover routing policies are used when you want to create an active/passive set up. For example you may want your primary site to be in EU-WEST-2 and your secondary DR Site in AP-SOUTHEAST-2.


Route 53 will monitor the health of your primary site using a health check.


A health check monitors the health of your end points.


AWS Console - ELB : Copy DNS name - Route 53 - Health check - Name 1, Domain Name, enter advanced configuration - Create health check


AWS Console - ELB : Copy DNS name - Route 53 - Health check - Name 2, Domain Name, enter advanced configuration - Set Alarm : Set SNS Topic - Create health check


AWS Console - Route 53 - Create Record Set - Alias Target (ELB) - Routing Policy : Failover, Set Primary or Secondary, Set Associate with Health Check 


AWS Console - Route 53 - Create Record Set - Alias Target (ELB) - Routing Policy : Failover, Set Primary or Secondary



==========================



Geolocation Routing Policy



Geolocation routing lets you choose where your traffic will be sent based on the geographic location of our users (i.e. the location from which DNS queries originate). For example, you might want all queries from Europe to be routed to a fleet of EC2 instances that are specifically configured for your European customers. These servers may have the local language of your European customers and all prices are displayed in Euros.


AWS Console - Route 53 - Create Record Set - Alias (ELB) - Routing Policy : Geolocation - US or Europe etc. , Set ID


AWS Console - Route 53 - Create Record Set - Alias (ELB) - Routing Policy : Geolocation - US or Europe etc. , Set ID




===========================


DNS Summary


DNS Exam Tips


Delete all load balancers. They are a paid service.


ELB has no IP address - only DNS name


- ELB's do not have pre-defined IPv4 addresses, you resolve to them using a DNS name.

- Understand the difference between an Alias Record and a CNAME.

- Given the choice, always choose an Alias Record over a CNAME.

- Remember the different routing policies and their use cases.

: Simple

: Weighted

: Latency

: Failover

: Geolocation



http://realmojo.tistory.com/179






CloudFormation



What is CloudFormation?




One of the most powerful parts of AWS, CloudFormation allows you to take what was once traditional hardware infrastructure and convert it into code.


CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.


You don't need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work. CloudFormation takes care of this for you.


After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software.




CloudFormation Stack vs. Template


A CloudFormation Template is essentially an architectural diagram and a CloudFormation Stack is the end result of that diagram (i.e. what is actually provisioned).


You create, update, and delete a collection of resources by creating, updating, and deleting stacks using CloudFormation templates.


CloudFormation templates are in the JSON format or YAML.



Elements of A Template


Mandatory Elements

- List of AWS Resources and their associated configuration values


Optional Elements

- The template's file format & version number

- Template Parameters

  : The input values that are supplied at stack creation time. Limit of 60

- Output Values

  : The output values required once a stack has finished building (such as the public IP address, ELB address, etc.) Limit of 60.

- List of data tables

  : Used to look up static configuration values such as AMIs, etc.

  


Outputting Data


- You can use Fn::GetAtt to output data
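
A minimal boto3 sketch, assuming a trivial template with one S3 bucket: the Outputs section uses Fn::GetAtt, and describe_stacks reads the output value back after the stack is created.

import json
import boto3

template = {
    "Resources": {
        "MyBucket": {"Type": "AWS::S3::Bucket"}
    },
    "Outputs": {
        "BucketDomainName": {
            "Description": "Domain name of the bucket",
            "Value": {"Fn::GetAtt": ["MyBucket", "DomainName"]},
        }
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="demo-stack", TemplateBody=json.dumps(template))
cfn.get_waiter("stack_create_complete").wait(StackName="demo-stack")

stack = cfn.describe_stacks(StackName="demo-stack")["Stacks"][0]
print(stack["Outputs"])  # [{'OutputKey': 'BucketDomainName', 'OutputValue': '...'}]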



Exam Tips


- By default, the "automatic rollback on error" feature is enabled

- You are charged for errors

- CloudFormation is free

- Stacks can wait for applications to be provisioned using the "WaitCondition"

- You can use Fn::GetAtt to output data

- Route53 is completely supported. This includes creating new hosted zones or updating existing ones.

- You can create A Records, Aliases etc.

- IAM Role Creation and Assignment is also supported.


1~2 questions in Exam


===========================



Cloud Formation Quiz


- The default scripting language for CloudFormation is : JSON

- Cloud Formation itself is free, however the resources it provisions will be charged at the usual rates. : True

- What happens if Cloud Formation encounters an error by default?

  : It will terminate and rollback all resources created on failure

- You are creating a virtual data center using cloud formation and you need to output the DNS name of your load balancer. What command would you use to achieve this?

  : Fn::GetAtt

- What language are cloud formation templates written in? : JSON



======================================


Shared Responsibility Model



===========================


Shared Responsibility Model Quiz


- You are required to patch OS and Applications in RDS? : False

- In the shared responsibility model, what is AWS's responsibility?

  : Restricting access to the data centers, proper destruction of decommissioned disks, patching of firmware for the hardware on which your AWS resources reside



================================






SNS (Simple Notification Service)







Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud.


It provides developers with a highly scalable, flexible, and cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications.


Amazon SNS follows the "publish-subscribe" (pub-sub) messaging paradigm, with notifications being delivered to clients using a "push" mechanism that eliminates the need to periodically check or "poll" for new information and updates.


With simple APIs requiring minimal up-front development effort, no maintenance or management overhead and pay-as-you-go pricing, Amazon SNS gives developers an easy mechanism to incorporate a powerful notification system with their applications.


Push notifications to Apple, Google, Fire OS, and Windows devices, as well as Android devices in China with Baidu Cloud Push.


Besides pushing cloud notifications directly to mobile devices, Amazon SNS can also deliver notifications by SMS text message or email, to Amazon Simple Queue Service (SQS) queues, or to any HTTP endpoint.


To prevent messages from being lost, all messages published to Amazon SNS are stored redundantly across multiple availability zones.





SNS - Topics



SNS allows you to group multiple recipients using topics. A topic is an "access point" for allowing recipients to dynamically subscribe for identical copies of the same notification.


One topic can support deliveries to multiple endpoint types -- for example, you can group together iOS, Android and SMS recipients. When you publish once to a topic, SNS delivers appropriately formatted copies of your message to each subscriber.
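
A minimal boto3 sketch of a topic with subscribers on two different protocols; publishing once sends each subscriber its own copy. The email address and phone number are placeholders (email subscriptions must be confirmed by the recipient before they receive messages).

import boto3

sns = boto3.client("sns")
topic_arn = sns.create_topic(Name="order-alerts")["TopicArn"]

sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="ops@example.com")  # placeholder
sns.subscribe(TopicArn=topic_arn, Protocol="sms", Endpoint="+15555550123")       # placeholder

sns.publish(
    TopicArn=topic_arn,
    Subject="Order received",                  # used by the email delivery
    Message="A new order has been received.",
)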






SNS Benefits


- Instantaneous, push-based delivery (no polling)

- Simple APIs and easy integration with applications

- Flexible message delivery over multiple transport protocols

- Inexpensive, pay-as-you-go model with no up-front costs

- Web-based AWS Management Console offers the simplicity of a point-and-click interface




SNS vs. SQS


- Both Messaging Services in AWS

- SNS - Push

- SQS - Polls (Pulls)









SNS Pricing


- Users pay $0.50 per 1 million Amazon SNS Requests

- $0.06 per 100,000 Notification deliveries over HTTP

- $0.75 per 100 Notification deliveries over SMS

- $2.00 per 100,000 Notification deliveries over Email



SNS FAQ



==============


Creating SNS Topic





================



SNS Summary


- Instantaneous, push-based delivery (no polling)

- Protocols include

  : HTTP

  : HTTPS

  : Email

  : Email-JSON

  : Amazon SQS

  : Application

- Messages can be customized for each protocol



====================


SNS Quiz


- SNS is pull based rather than push based? : False

- Which of these is a protocol NOT supported by SNS

  HTTP, HTTPS, Email, Email-JSON, FTP, SQS, Application

  ==> The answer is FTP

- Messages cannot be customized for each protocol used in SNS? : False

- You have a list of subscribers' email addresses that you need to push emails out to on a periodic basis. What do you subscribe them to? : A Topic

- You can use SNS in conjunction with SQS to fan a single message out to multiple SQS queues. : True





======================




AWS SWF (Simple Workflow Service)



Amazon Simple Workflow Service (Amazon SWF) is a web service that makes it easy to coordinate work across distributed application components. Amazon SWF enables applications for a range of use cases, including media processing, web application back-ends, business process workflows, and analytics pipelines, to be designed as a coordination of tasks.


Tasks represent invocations of various processing steps in an application which can be performed by executable code, web service calls, human actions, and scripts.



SWF Workers


Workers are programs that interact with Amazon SWF to get tasks, process received tasks, and return the results.



SWF Decider


The decider is a program that controls the coordination of tasks, i.e. their ordering, concurrency, and scheduling according to the application logic.






SWF Workers & Deciders


The workers and the decider can run on cloud infrastructure, such as Amazon EC2, or on machines behind firewalls. Amazon SWF brokers the interactions between workers and the decider. It allows the decider to get consistent views into the progress of tasks and to initiate new tasks in an ongoing manner.


At the same time, Amazon SWF stores tasks, assigns them to workers when they are ready, and monitors their progress. It ensures that a task is assigned only once and is never duplicated. Since Amazon SWF maintains the application's state durably, workers and deciders don't have to keep track of execution state. They can run independently, and scale quickly.
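
A minimal sketch of an activity worker, assuming a domain and task list that were registered separately: it long-polls SWF for a task, does the work, and returns the result.

import boto3

swf = boto3.client("swf")

while True:
    task = swf.poll_for_activity_task(
        domain="image-processing",           # hypothetical domain
        taskList={"name": "resize-tasks"},   # hypothetical task list
    )
    if not task.get("taskToken"):
        continue  # the long poll timed out with no work; poll again

    result = task.get("input", "").upper()   # stand-in for real processing
    swf.respond_activity_task_completed(taskToken=task["taskToken"], result=result)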




SWF Domains





Your workflow and activity types and the workflow execution itself are all scoped to a domain. Domains isolate a set of types, executions, and task lists from others within the same account.


You can register a domain by using the AWS Management Console or by using the RegisterDomain action in the Amazon SWF API.



The parameters are specified in JavaScript Object Notation (JSON) format.



How long can a workflow run?


The maximum workflow execution time is 1 year, and the value is always measured in seconds.



SWF FAQ



SWF vs SQS


- Amazon SWF presents a task-oriented API, whereas 

  Amazon SQS offers a message-oriented API.

- Amazon SWF ensures that a task is assigned only once and is never duplicated. With Amazon SQS, you need to handle duplicated messages and may also need to ensure that a message is processed only once.

- Amazon SWF keeps track of all the tasks and events in an application. With Amazon SQS, you need to implement your own application-level tracking, especially if your application uses multiple queues.





===========================


SWF Quiz


- SWF consists of a domain, workers and deciders? : True

- Maintaining your application's execution state (e.g. which steps have completed, which ones are running, etc.) is a perfect use case for SWF. : True

- Amazon SWF is useful for automating workflows that include long-running human tasks (e.g. approvals, reviews, investigations, etc.). Amazon SWF reliably tracks the status of processing steps that run up to several days or months. : True

- In Amazon SWF what is a worker? 

  : Workers are programs that interact with Amazon SWF to get tasks, process received tasks, and return the results

- In Amazon SWF what is a decider

  : The decider is a program that controls the coordination of tasks, i.e. their ordering, concurrency, and scheduling according to the application logic.

  




  

============




Elastic Beanstalk (*** 4~5 questions in the Exam)







- With Elastic Beanstalk, you can deploy, monitor, and scale an application quickly

- It provides developers or end users with the ability to provision application infrastructure in an almost transparent way.

- It has a highly abstract focus towards infrastructure, focusing on components and performance - not configuration and specifications

- It attempts to remove, or significantly simplify, infrastructure management, allowing applications to be deployed into infrastructure environments easily.





Beanstalk key architecture components


- Applications are the high-level structure in Beanstalk

- Either your entire application is one EB application, or

- Each logical component of your application can be an EB application or an EB environment within an application


- Applications can have multiple environments (Prod, Staging, Dev, V1, V2, V1.1) or environments per functional type (front-end, back-end)

- Environments are either single instance or scalable

- Environments are either web server environments or worker environments


- Application Versions are unique packages which represent versions of apps.

- An application is uploaded to Elastic Beanstalk as an application bundle (.zip)

- Each application can have many versions (a 1:M relationship)

- Application versions can be deployed to environments within an Application (see the deployment sketch below)
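

A minimal deployment sketch with boto3, assuming the .zip bundle has already been uploaded to S3; the application, environment, bucket, and key names are placeholders:

import boto3

eb = boto3.client('elasticbeanstalk', region_name='us-east-1')

# Register a new application version from a bundle already sitting in S3
eb.create_application_version(
    ApplicationName='my-eb-app',
    VersionLabel='v2',
    SourceBundle={'S3Bucket': 'my-deploy-bucket', 'S3Key': 'my-eb-app/v2.zip'},
    Process=True,  # validate the bundle before it can be deployed
)

# Point an existing environment at the new version to deploy it
eb.update_environment(EnvironmentName='my-eb-app-prod', VersionLabel='v2')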




Elastic Beanstalk Exam Tips


- You can have multiple versions of your applications

- Your applications can be split into tiers (Web Tier/Application Tier/Database Tier)

- You can update your application

- You can update your configuration

- Updates can be 1 instance at a time, a % of instances or an immutable update

- You pay for the resources that you use, but Elastic Beanstalk is free

- If Elastic Beanstalk creates your RDS database, then it will delete it when you delete your application. If not, the RDS instance stays

- Know what languages are supported


- Apache Tomcat for Java applications

- Apache HTTP Server for PHP applications

- Apache HTTP Server for Python applications

- Nginx or Apache HTTP Server for Node.js applications

- Passenger or Puma for Ruby applications 

- Microsoft IIS 7.5, 8.0, and 8.5 for .NET applications

- JAVA SE

- Docker

- Go




==============================


Elastic Beanstalk Quiz


- Elastic Beanstalk is object based storage. : False

- Which languages and development stacks are NOT supported by AWS Elastic Beanstalk?

  : Jetty for JBoss applications

- Unlike Cloud Formation, Elastic Beanstalk itself is not free AND you must also pay for the resources it provisions. : False




Elastic Beanstalk FAQ



=====================================




Simple Queue Service (SQS) ***






Amazon SQS is a web service that gives you access to a message queue that can be used to store messages while waiting for a computer to process them.

Amazon SQS is a distributed queue system that enables web service applications to quickly and reliably queue messages that one component in the application generates to be consumed by another component. A queue is a temporary repository for messages that are awaiting processing.



Using Amazon SQS, you can decouple the components of an application so they run independently, with Amazon SQS easing message management between components. Any component of a distributed application can store messages in a fail-safe queue.



Messages can contain up to 256 KB (***) of text in any format. Any component can later retrieve the messages programmatically using the Amazon SQS API.



The queue acts as a buffer between the component producing and saving data, and the component receiving the data for processing.




This means the queue resolves issues that arise if the producer is producing work faster than the consumer can process it, or if the producer or consumer are only intermittently connected to the network.



Amazon SQS ensures delivery of each message at least once, and supports multiple readers and writers interacting with the same queue.



A single queue can be used simultaneously by many distributed application components, with no need for those components to coordinate with each other to share the queue.



Amazon SQS is engineered to always be available and deliver messages. One of the resulting tradeoffs is that SQS does not guarantee first in, first out delivery of messages. For many distributed applications, each message can stand on its own, and as long as all messages are delivered, the order is not important.



If your system requires that order be preserved, you can place sequencing information in each message, so that you can reorder the messages when the queue returns them.



To illustrate, suppose you have a number of image files to encode. In an Amazon SQS worker queue, you create an Amazon SQS message for each file specifying the command (jpeg-encode) and the location of the file in Amazon S3.



A pool of Amazon EC2 instances running the needed image processing software does the following (a minimal worker-loop sketch follows the numbered list):


1. Asynchronously pulls the task messages from the queue

2. Retrieves the named file

3. Processes the conversion

4. Writes the image back to Amazon S3

5. Writes a "task complete" message to another queue

6. Deletes the original task message

7. Checks for more messages in the worker queue
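

A minimal worker-loop sketch of those steps with boto3; the queue URLs are placeholders and the encode step is stubbed out:

import boto3

sqs = boto3.client('sqs', region_name='us-east-1')

WORK_QUEUE = 'https://sqs.us-east-1.amazonaws.com/123456789012/encode-tasks'    # placeholder
DONE_QUEUE = 'https://sqs.us-east-1.amazonaws.com/123456789012/encode-results'  # placeholder

def encode_image(task_body):
    pass  # download the named file from S3, jpeg-encode it, write the result back to S3

while True:
    # 1. pull the next task message (long polling, up to 20 seconds)
    resp = sqs.receive_message(QueueUrl=WORK_QUEUE, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for msg in resp.get('Messages', []):
        encode_image(msg['Body'])                          # 2-4. retrieve, convert, write back to S3
        sqs.send_message(QueueUrl=DONE_QUEUE,              # 5. report completion on another queue
                         MessageBody='task complete: ' + msg['Body'])
        sqs.delete_message(QueueUrl=WORK_QUEUE,            # 6. delete the original task message
                           ReceiptHandle=msg['ReceiptHandle'])
    # 7. loop back and check the worker queue for more messages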




SQS Exam Tips




- Autoscaling (SQS can be used in conjunction with Auto Scaling)

- Does not offer FIFO

- 12 hour maximum visibility time out

- Amazon SQS is engineered to provide "at least once" delivery of all messages in its queues. Although most of the time each message will be delivered to your application exactly once, you should design your system so that processing a message more than once does not create any errors or inconsistencies.

- 256 KB message size now available

- Billed in 64 KB "chunks"

- A 256 KB message will be 4 X 64 KB "chunks"




SQS Pricing


- First 1 million Amazon SQS Requests per month are free

- $0.50 per 1 million Amazon SQS Requests per month thereafter ($0.00000050 per SQS Request)

- A single request can have from 1 to 10 messages, up to a maximum total payload of 256KB.

- Each 64KB 'chunk' of payload is billed as 1 request. For example, a single API call with a 256KB payload will be billed as four requests.





=========================================


SQS Developer Exam Tips


SQS - Delivery


  SQS Messages can be delivered multiple times and in any order.



SQS - Default Visibility Time Out


  Default Visibility Time Out is 30 seconds


  Maximum Time Out is 12 Hours



When you receive a message from a queue and begin processing it, you may find the visibility timeout for the queue is insufficient to fully process and delete that message. To give yourself more time to process the message, you can extend its visibility timeout by using the ChangeMessageVisibility action to specify a new timeout value. Amazon SQS restarts the timeout period using the new value.
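

A hedged example of that call with boto3; the queue URL is a placeholder and 600 seconds is just an illustrative new timeout:

import boto3

sqs = boto3.client('sqs', region_name='us-east-1')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/encode-tasks'  # placeholder

resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in resp.get('Messages', []):
    # processing will take longer than the queue's visibility timeout,
    # so ask SQS to keep this message hidden for another 10 minutes
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=msg['ReceiptHandle'],
        VisibilityTimeout=600,  # seconds; the timeout restarts from now
    )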





SQS Long Polling


SQS long polling is a way to retrieve messages from your SQS queues. While the traditional SQS short polling returns immediately, even if the queue being polled is empty, SQS long polling doesn't return a response until a message arrives in the queue, or the long poll times out. SQS long polling makes it easy and inexpensive to retrieve messages from your SQS queue as soon as they are available.


Maximum Long Poll Time Out = 20 seconds
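

A minimal long-polling sketch with boto3 (the queue URL is a placeholder); setting WaitTimeSeconds on the ReceiveMessage call is what turns the default short poll into a long poll:

import boto3

sqs = boto3.client('sqs', region_name='us-east-1')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'  # placeholder

# Returns as soon as a message arrives, or after 20 seconds (the maximum) if the queue stays empty
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get('Messages', []):
    print(msg['Body'])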





Example Questions


Polling in a tight loop is burning CPU cycles and costing the company money. How would you fix this? : Enable SQS long polling



SQS - Fanning Out


Create an SNS topic first using SNS. Then create and subscribe multiple SQS queues to the SNS topic.


Now whenever a message is sent to the SNS topic, the message will be fanned out to the SQS queues, i.e. SNS will deliver the message to all the SQS queues that are subscribed to the topic.
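

A rough fan-out sketch with boto3; the topic and queue names are placeholders, and note that in a real setup each queue also needs a queue policy that allows the topic to deliver to it (omitted here):

import boto3

sns = boto3.client('sns', region_name='us-east-1')
sqs = boto3.client('sqs', region_name='us-east-1')

topic_arn = sns.create_topic(Name='order-events')['TopicArn']  # placeholder topic name

for name in ('payments-queue', 'emails-queue'):                # placeholder queue names
    queue_url = sqs.create_queue(QueueName=name)['QueueUrl']
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=['QueueArn'])['Attributes']['QueueArn']
    # subscribe each queue so it receives a copy of every message published to the topic
    sns.subscribe(TopicArn=topic_arn, Protocol='sqs', Endpoint=queue_arn)

# a single publish is now fanned out to both queues
sns.publish(TopicArn=topic_arn, Message='order 2120121 placed')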




==========================




SQS Quiz


- SQS was the first service on the AWS platform? - true

- How large can an SQS message be? - 256kb

- What is the default visibility time out setting? - 30 seconds

- An SQS message can be delivered multiple times - True

- You are designing a new application which involves processing payments and delivering promotional emails to customers. You plan to use SQS to help facilitate this. You need to ensure that the payment process takes priority over the creation and delivery of emails. What is the best way to achieve this?

  : Use 2 SQS queues for the platform. Have the EC2 fleet poll the payment SQS queue first. If this queue is empty, then poll the promotional emails queue.

- Your EC2 instances download jobs from the SQS queue; however, they are taking too long to process them. What API call can you use to extend the length of time to process the jobs? : ChangeMessageVisibility

- What is the default visibility time out? : 30 seconds

- You have a fleet of EC2 instances that are constantly polling empty SQS queues which is burning CPU compute cycles and costing your company money. What should you do?

  : Enable SQS Long Polling

- What is the maximum long poll time out : 20 seconds

- What amazon service can you use in conjunction with SQS to 'fan out' SQS messages to multiple queues : SNS



========================================






DynamoDB Summary



What is DynamoDB?


Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed database and supports both document and key-value data models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad-tech, IoT, and many other applications.



Quick facts about DynamoDB


- Stored on SSD storage

- Spread Across 3 geographically distinct data centers


- Eventual Consistent Reads (Default)

  : Consistency across all copies of data is usually reached within a second. Repeating a read after a short time should return the updated data. (Best Read Performance)

  

- Strongly Consistent Reads

  : A strongly consistent read returns a result that reflects all writes that received a successful response prior to the read

  

  

The Basics


- Tables

- Items (Think a row of data in table)

- Attributes (Think of a column of data in a table)



DynamoDB - Primary Keys





Two Types of Primary Keys Available


- Single Attribute (think unique ID)

  : Partition Key (Hash Key) composed of one attribute

  

- Composite (think unique ID and a date range)

  : Partition Key & Sort Key (Hash & Range) composed of two attributes.



  

- Partition Key

  : DynamoDB uses the partition key's value as input to an internal hash function. The output from the hash function determines the partition (this is simply the physical location in which the data is stored).

  : No two items in a table can have the same partition key value!

  

- Partition Key and Sort key

  : DynamoDB uses partition key's value as input to an internal hash function. The output from the hash function determines the partition (this is simply the physical location in which the data is stored)

  : Two items can have the same partition key, but they must have a different sort key.

  : All items with the same partition key are stored together, in sorted order by sort key value.

  


DynamoDB - Indexes


- Local Secondary Index

  : Has the SAME partition key, different sort key.

  : Can ONLY be created when creating a table. They cannot be removed or modified later.



  

- Global Secondary Index

  : Has DIFFERENT Partition key and different sort key

  : Can be created at table creation or added LATER



  

  

DynamoDB - Streams


- Used to capture any kind of modification of the DynamoDB tables.

  : If a new item is added to the table, the stream captures an image of the entire item, including all of its attributes

  : If an item is updated, the stream captures the "before" and "after" image of any attributes that were modified in the item

  : If an item is deleted from the table, the stream captures an image of the entire item before it was deleted



  

  

Query & Scans Exam Tips


- A Query operation finds items in a table using only primary key attribute values. You must provide a partition key attribute name and a distinct value to search for.

- A Scan operation examines every item in the table. By default, a Scan returns all of the data attributes for every item, however, you can use the ProjectionExpression parameter so that the Scan only returns some of the attributes, rather than all of them.

- Try to use a query operation over a Scan operation as it is more efficient







Example 1


You have an application that requires to read 5 items of 10 KB per second using eventual consistency. What should you set the read throughput to?


- First we  calculate how many read units per item we need

- 10 KB rounded up to nearest increment of 4 KB is 12 KB

- 12 KB / 4 KB = 3 read units per item


- 3 X 5 read items = 15

- Using eventual consistency we get 15/2 = 7.5


- 8 units of read throughput



Example 2 - Write Throughput


You have an application that requires to write 12 items of 100 KB per item each second. What should you set the write throughput to?


- Each write unit consists of 1 KB of data. You need to write 12 items per second with each item having 100 KB of data

- 12 X 100 KB = 1200 write units

- Write throughput of 1200 units



Error Codes


400 HTTP Status Code - ProvisionedThroughputExceededException


You exceeded your maximum allowed provisioned throughput for a table or for one or more global secondary indexes.


 

Steps taken to authenticate


1. User authenticates with an ID provider (such as Facebook)

2. They are passed a Token by their ID provider

3. Your code calls the AssumeRoleWithWebIdentity API, provides the provider's token, and specifies the ARN for the IAM Role

4. The app can now access DynamoDB for between 15 minutes and 1 hour (default is 1 hour)



Conditional Writes.


If item = $10 then update to $12


Note that conditional writes are idempotent. This means that you can send the same conditional write request multiple times, but it will have no further effect on the item after the first time DynamoDB performs the specified update. For example, suppose you issue a request to update the price of a book item by 10%, with the expectation that the price is currently $20.

However, before you get a response, a network error occurs and you don't know whether your request was successful or not. Because a conditional update is an idempotent operation, you can send the same request again, and DynamoDB will update the price only if the current price is still $20.



Atomic Counters


DynamoDB supports atomic counters, where you use the UpdateItem operation to increment or decrement the value of an existing attribute without interfering with other write requests. (All write requests are applied in the order in which they were received.) For example, a web application might want to maintain a counter per visitor to their site. In this case, the application would need to increment this counter regardless of its current value.


Atomic counter updates are not idempotent. This means that the counter will increment each time you call UpdateItem. If you suspect that a previous request was unsuccessful, your application could retry the UpdateItem operation; however, this would risk updating the counter twice. This might be acceptable for a web site counter, because you can tolerate slightly over- or under-counting the visitors. However, in a banking application, it would be safer to use a conditional update rather than an atomic counter.



Batch Operations


If your application needs to read multiple items, you can use the BatchGetItem API. A single BatchGetItem request can retrieve up to 1 MB of data, which can contain as many as 100 items. In addition, a single BatchGetItem request can retrieve items from multiple tables.



****** READ THE DYNAMODB FAQ ******


If you read one FAQ in preparing for this course, make sure it's the DynamoDB FAQ!!!!!





======================================


DynamoDB Quiz


- DynamoDB is a No-SQL database provided by AWS. - True

- You have a motion sensor which writes 600 items of data every minute. Each item consists of 5kb. Your application uses eventually consistent reads. What should you set the read throughput to ?

  : 600/60 = 10 items per second. 5 kb rounded up = 8 kb. 8/4 = 2.

  : 2 reads per item. 2 X 10 = 20 reads per second.

  : As the reads are Eventually consistent, 20/2 = 10

  ==> The answer is 10

- A scan is more efficient than a query in terms of performance - False

- What does the error "ProvisionedThroughputExceededException" mean in DynamoDB?

  : You exceeded your maximum allowed provisioned throughput for a table or for one or more global secondary indexes.

- You have a motion sensor which writes 600 items of data every minute. Each item consists of 5 kb. what should you set the write throughput to? : 50

- What is the API call to retrieve multiple items from a DynamoDB table?

  : BatchGetItem

- You have a motion sensor which writes 600 items of data every minute. Each item consists of 5kb. Your application uses strongly consistent read. What should you set read throughput to? 

  : 600/60 = 10 items per second

  : 5kb rounded to nearest 4 kb chunk is 8kb. 8/4 = 2 reads per item

  : 2 X 10 = 20 reads per second.

  ==> The answer is 20

- Using the AWS portal, you are trying to scale DynamoDB past its preconfigured maximums. Which limit can you increase by raising a ticket to AWS support?

  : http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html

  ==> Provisioned throughput limits

 - You have an application that needs to read 25 items of 13 kb in size per second. Your application uses eventually consistent reads. What should you set the read throughput to?

   : 13 kb - round up to the nearest 4 kb = 16 kb. 16/4 = 4 reads per item

   : 25 items X 4 = 100

   : 100 / 2 = 50 (eventually consistent reads)

   ==> The answer is 50

- You have an application that needs to read 25 items of 13 kb in size per second. Your application uses strongly consistent reads. What should you set the read throughput to?

   : 13 kb - round up to the nearest 4 kb = 16 kb. 16/4 = 4 reads per item

   : 25 items X 4 = 100 (Strongly consistent reads)

   ==> The answer is 100

   


=======================================





[AWS Certificate] Developer - DynamoDB memo

2017. 11. 14. 09:57 | Posted by 솔웅



DynamoDB from CloudGuru lectures



=====================================================

============= DynamoDB ====================

=====================================================



What is DynamoDB? (***********)






Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed database and supports both document and key-value data models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad-tech, IoT, and many other applications.







Quick facts about DynamoDB


- Stored on SSD storage

- Spread across 3 geographically distinct data centers


- Eventual Consistent Reads (Default)

  : Consistency across all copies of data is usually reached within a second. Repeating a read after a short time should return the updated data. (Best Read Performance)

  

- Strongly Consistent Reads

  : A strongly consistent read returns a result that reflects all writes that received a successful response prior to the read.

  


The Basics


- Tables

- Items (Think a row of data in table)

- Attributes (Think of a column of data in a table)



Pricing


- Provisioned Throughput Capacity

  : Write Throughput $0.0065 per hour for every 10 units

  : Read Throughput $0.0065 per hour for every 50 units

  

- First 25 GB stored per month is free

- Storage costs $0.25 per GB per month thereafter.


Pricing Example


Let's assume that your application needs to perform 1 million writes and 1 million reads per day, while storing 28 GB of data.


First, you need to calculate how many writes and reads per second you need. 1 million evenly spread writes per day is equivalent to 1,000,000 (writes) / 24 (hours) / 60 (minutes) / 60 (seconds) = 11.6 writes per second.


A DynamoDB write capacity unit can handle 1 write per second, so you need 12 write capacity units. For write throughput, you are charged $0.0065 for every 10 units.


So ($0.0065/10) * 12 * 24 = $0.1872 per day.


Similarly, to handle 1 million strongly consistent reads per day, you need 12 read capacity units. For read throughput you are charged $0.0065 for every 50 units.


So ($0.0065/50) * 12 * 24 = $0.0374 per day.


Storage cost is $0.25 per GB per month. Let's assume our database is 28 GB. We get the first 25 GB for free, so we only pay for 3 GB of storage, which is $0.75 per month.


Total Cost = $0.1872 per day + $0.0374 per day, plus storage of $0.75 per month


(30 X ($0.1872 + $0.0374)) + $0.75 = $7.488
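

The same arithmetic as a quick Python check (my own sketch; intermediate figures are rounded the same way as the lecture):

import math

writes_per_day = reads_per_day = 1000000
wcu = math.ceil(writes_per_day / 86400.0)          # 11.6 -> 12 write capacity units
rcu = math.ceil(reads_per_day / 86400.0)           # 11.6 -> 12 read capacity units

write_cost = round((0.0065 / 10) * wcu * 24, 4)    # $0.1872 per day
read_cost = round((0.0065 / 50) * rcu * 24, 4)     # $0.0374 per day
storage_cost = max(0, 28 - 25) * 0.25              # first 25 GB free, then $0.25 per GB-month

total = round(30 * (write_cost + read_cost) + storage_cost, 3)
print(write_cost, read_cost, total)                # 0.1872 0.0374 7.488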


With free tier you get

25 read capacity units

25 write capacity units


Easiest way to learn DynamoDB?


- Let's start our first Lab


======================================================


Creating a DynamoDB Table


Create a Role - Dynamo full access

Create an instance - Assign the Role to the instance


#!/bin/bash
# EC2 user data: install Apache, PHP 5.6 and git, then pull the lab code
yum update -y
yum install httpd24 php56 git -y
# start Apache and keep it running across reboots
service httpd start
chkconfig httpd on
# drop a PHP info page and clone the acloud.guru DynamoDB lab scripts
cd /var/www/html
echo "<?php phpinfo();?>" > test.php
git clone https://github.com/acloudguru/dynamodb



1178578-C02NW6G1G3QD:AWS_SSH changsoopark$ ssh ec2-user@52.91.230.105 -i EC2KeyPair.pem.txt 

The authenticity of host '52.91.230.105 (52.91.230.105)' can't be established.

ECDSA key fingerprint is SHA256:Zo4LcW4QASmSaf4H4kg5ioPGeqLicxV8TsJ+/JTQVj0.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '52.91.230.105' (ECDSA) to the list of known hosts.


       __|  __|_  )

       _|  (     /   Amazon Linux AMI

      ___|\___|___|


https://aws.amazon.com/amazon-linux-ami/2017.09-release-notes/

[ec2-user@ip-172-31-85-82 ~]$ sudo su

[root@ip-172-31-85-82 ec2-user]# cd /var/www/html

[root@ip-172-31-85-82 html]# ls

dynamodb  test.php

[root@ip-172-31-85-82 html]# curl -sS https://getcomposer.org/installer | php

All settings correct for using Composer

Downloading...


Composer (version 1.5.2) successfully installed to: /var/www/html/composer.phar

Use it: php composer.phar


[root@ip-172-31-85-82 html]# php composer.phar require aws/aws-sdk-php

Do not run Composer as root/super user! See https://getcomposer.org/root for details

Using version ^3.38 for aws/aws-sdk-php

./composer.json has been created

Loading composer repositories with package information

Updating dependencies (including require-dev)

Package operations: 6 installs, 0 updates, 0 removals

  - Installing mtdowling/jmespath.php (2.4.0): Downloading (100%)         

  - Installing psr/http-message (1.0.1): Downloading (100%)         

  - Installing guzzlehttp/psr7 (1.4.2): Downloading (100%)         

  - Installing guzzlehttp/promises (v1.3.1): Downloading (100%)         

  - Installing guzzlehttp/guzzle (6.3.0): Downloading (100%)         

  - Installing aws/aws-sdk-php (3.38.0): Downloading (100%)         

guzzlehttp/guzzle suggests installing psr/log (Required for using the Log middleware)

aws/aws-sdk-php suggests installing aws/aws-php-sns-message-validator (To validate incoming SNS notifications)

aws/aws-sdk-php suggests installing doctrine/cache (To use the DoctrineCacheAdapter)

Writing lock file

Generating autoload files

[root@ip-172-31-85-82 html]# cd dynamodb

[root@ip-172-31-85-82 dynamodb]# ls -l

total 24

-rw-r--r-- 1 root root  4933 Nov  9 00:32 createtables.php

-rw-r--r-- 1 root root    11 Nov  9 00:32 README.md

-rw-r--r-- 1 root root 11472 Nov  9 00:32 uploaddata.php

[root@ip-172-31-85-82 dynamodb]# nano createtables.php

==> update the Region info in createtables.php and uploaddata.php




http://52.91.230.105/dynamodb/createtables.php


==> will create 4 dynamoDB tables


==> 

Creating table ProductCatalog... Creating table Forum... Creating table Thread... Creating table Reply... Waiting for table ProductCatalog to be created. Table ProductCatalog has been created. Waiting for table Forum to be created. Table Forum has been created. Waiting for table Thread to be created. Table Thread has been created. Waiting for table Reply to be created. Table Reply has been created.


Picture : DynamoDBCreated


http://52.91.230.105/dynamodb/uploaddata.php





===============================================


DynamoDB Indexes & Streams


* Primary Keys


Two Types of Primary Keys available

- Single Attribute (think unique ID)

  : Partition Key (Hash Key) composed of one attribute


- Composite (think unique ID and a date range)

  : Partition Key & Sort Key (Hash & Range) composed of two attributes

  

Partition Key

- DynamoDB uses the partition key's value as input to an internal hash function. The output from the hash function determines the partition (this is simply the physical location in which the data is stored)

- No two items in a table can have the same partition key value (*****)


Partition Key and Sort key

- DynamoDB uses the partition key's value as input to an internal hash function. the output from the hash function determines the partition (this is simply the physical location in which the data is stored)

- Two items can have the same partition key, but they must have a different sort key

- All items with the same partition key are stored together, in sorted order by sort key value


* Indexes (***)


Local Secondary Index

- Has the SAME Partition key, different sort key

- Can ONLY be created when creating a table. They cannot be removed or modified later.


Global Secondary Index

- Has DIFFERENT Partition key and different sort key

- Can be created at table creation or added LATER (see the table-definition sketch below)
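

A sketch of a table definition that declares both index types with boto3; the table, attribute, and index names are purely illustrative:

import boto3

dynamodb = boto3.client('dynamodb', region_name='us-east-1')

dynamodb.create_table(
    TableName='ThreadDemo',
    AttributeDefinitions=[
        {'AttributeName': 'ForumName', 'AttributeType': 'S'},
        {'AttributeName': 'Subject', 'AttributeType': 'S'},
        {'AttributeName': 'LastPostedBy', 'AttributeType': 'S'},
    ],
    KeySchema=[
        {'AttributeName': 'ForumName', 'KeyType': 'HASH'},   # partition key
        {'AttributeName': 'Subject', 'KeyType': 'RANGE'},    # sort key
    ],
    # Local secondary index: same partition key, different sort key - only at table creation
    LocalSecondaryIndexes=[{
        'IndexName': 'LastPostedByIndex',
        'KeySchema': [
            {'AttributeName': 'ForumName', 'KeyType': 'HASH'},
            {'AttributeName': 'LastPostedBy', 'KeyType': 'RANGE'},
        ],
        'Projection': {'ProjectionType': 'KEYS_ONLY'},
    }],
    # Global secondary index: different partition (and sort) key - can also be added later
    GlobalSecondaryIndexes=[{
        'IndexName': 'LastPostedByGlobalIndex',
        'KeySchema': [
            {'AttributeName': 'LastPostedBy', 'KeyType': 'HASH'},
            {'AttributeName': 'Subject', 'KeyType': 'RANGE'},
        ],
        'Projection': {'ProjectionType': 'ALL'},
        'ProvisionedThroughput': {'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5},
    }],
    ProvisionedThroughput={'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5},
)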


DynamoDB Streams


Used to capture any kind of modification of the DynamoDB tables

- If a new item is added to the table, the stream captures an image of the entire item, including all of its attributes

- If an item is updated, the stream captures the "before" and "after" image of any attributes that were modified in the item

- If an item is deleted from the table, the stream captures an image of the entire item before it was deleted








Practice - Tabs

Overview, Items, Metrics, Alarms, Capacity, Indexes, Triggers, Access control, Tags


=========================================


Scan vs. Query API Calls



What is a Query?


- A Query operation finds items in a table using only primary key attribute values. You must provide a partition key attribute name and a distinct value to search for.


- You can optionally provide a sort key attribute name and value, and use a comparison operator to refine the search results.


- By default, a Query returns all of the data attributes for items with the specified primary key(s); however, you can use the ProjectionExpression parameter so that the Query only returns some of the attributes, rather than all of them


- Query results are always sorted by the sort key. If the data type of the sort key is a number, the results are returned in numeric order; otherwise, the results are returned in order of ASCII character code values. By default, the sort order is ascending. To reverse the order, set the ScanIndexForward parameter to false.


- By default a Query is eventually consistent, but it can be changed to be strongly consistent.



What is a Scan?


- A Scan operation examines every item in the table. By default, a Scan returns all of the data attributes for every item; however, you can use the ProjectionExpression parameter so that the Scan only returns some of the attributes, rather than all of them


What should I use? Query vs. Scan?


Generally, a Query operation is more efficient than a Scan operation.


A Scan operation always scans the entire table, then filters out values to provide the desired result, essentially adding the extra step of removing data from the result set. Avoid using a Scan operation on a large table with a filter that removes many results, if possible. Also, as a table grows, the Scan operation slows. The Scan operation examines every item for the requested values, and can use up the provisioned throughput for a large table in a single operation


For quicker response times, design your tables in a way that can use the Query, Get, or BatchGetItem APIs, instead. Alternatively, design your application to use Scan operations in a way that minimizes the impact on your table's request rate.
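

A small boto3 sketch contrasting the two calls; the table and attribute names are assumptions loosely based on the lab's Thread table:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb', region_name='us-east-1').Table('Thread')  # placeholder

# Query: needs the partition key value, optionally narrows on the sort key,
# and returns only the requested attributes thanks to ProjectionExpression
threads = table.query(
    KeyConditionExpression=Key('ForumName').eq('Amazon DynamoDB') & Key('Subject').begins_with('How'),
    ProjectionExpression='#s, LastPostedBy',
    ExpressionAttributeNames={'#s': 'Subject'},  # alias in case the attribute name is reserved
    ScanIndexForward=False,                      # newest first: reverse the ascending sort-key order
    ConsistentRead=True,                         # opt in to strongly consistent reads
)['Items']

# Scan: reads every item in the table and filters afterwards - far less efficient on large tables
everything = table.scan(ProjectionExpression='LastPostedBy')['Items']

print(len(threads), len(everything))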





Query & Scans Exam Tips


- A Query operation finds items in a table using only primary key attribute values. You must provide a partition key attribute name and a distinct value to search for


- A Scan operation examines every item in the table. By default, a Scan returns all of the data attributes for every item; however, you can use the ProjectionExpression parameter so that the Scan only returns some of the attributes, rather than all of them


- Query results are always sorted by the sort key in ascending order. Set ScanIndexForward parameter to false to reverse it.


- Try to use a query operation over a Scan operation as it is more efficient


=======================================


DynamoDB Provisioned Throughput Calculations (***)


- Unit of Read provisioned throughput

  : All reads are rounded up to increments of 4KB

  : Eventually Consistent Reads (default) consist of 2 reads per second

  : Strongly Consistent Reads consist of 1 read per second

  

- Unit of Write provisioned throughput

  : All writes are 1 KB

  : All writes consist of 1 write per second

  

The Magic Formula


Question 1 - You have an application that requires to read 10 items of 1 KB per second using eventual consistency. What should you set the read throughput to?


(Size of read rounded up to the nearest 4 KB chunk / 4 KB) X number of items = read throughput


Divide by 2 if eventually consistent


- First we calculate how many read units per item we need


- 1 KB rounded to the nearest 4 KB increment = 4 KB

- 4 KB / 4KB = 1 read unit per item


- 1 X 10 read items = 10

- Using eventual consistency we get 10 / 2 = 5

- 5 units of read throughput
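

The same formula as a small helper function (my own sketch, not from the course):

import math

def read_throughput(items_per_second, item_size_kb, eventually_consistent=True):
    units_per_item = math.ceil(item_size_kb / 4.0)   # round each item up to 4 KB chunks
    units = units_per_item * items_per_second
    if eventually_consistent:
        units = units / 2.0                          # eventual consistency: divide by 2
    return math.ceil(units)                          # round the final answer up

def write_throughput(items_per_second, item_size_kb):
    return items_per_second * math.ceil(item_size_kb)  # each write unit covers 1 KB

print(read_throughput(10, 1))          # 5  (this question)
print(read_throughput(5, 10))          # 8  (10 KB -> 12 KB -> 3 units, x 5 = 15, / 2 = 7.5 -> 8)
print(read_throughput(5, 10, False))   # 15 (strong consistency: no division by 2)
print(write_throughput(12, 100))       # 1200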



Question 2

You have an application that requires to read 10 items of 6 KB per second using eventual consistency. What should you set the read throughput to?


- First we calculate how many read units per item we need

- 6 KB rounded up to nearest increment of 4 KB is 8 KB

- 8 KB / 4 KB = 2 read units per item


- 2 X 10 read items = 20

- Using eventual consistency we get 20 / 2 = 10


- 10 units of read throughput



Question 3


You have an application that requires to read 5 items of 10 KB per second using eventual consistency. What should you set the read throughput to?


- First we calculate how many read units per item we need

- 10 KB rounded up to nearest increment of 4 KB is 12 KB

- 12 KB / 4 KB = 3 read units per item.


- 3 X 5 read items = 15

- Using eventual consistency we get 15 / 2 = 7.5


- 8 units of read throughput



Question 4 - STRONG CONSISTENCY


You have an application that requires to read 5 items of 10 KB per second using strong consistency. What should you set the read throughput to?


- First we calculate how many read units per item we need 

- 10 KB rounded up to nearest increment of 4 KB is 12 KB

- 12 KB / 4 KB = 3 read units per item


- 3 X 5 read items = 15

- Using strong consistency we Don't divide by 2


- 15 units of read throughput



Question 5 - WRITE THROUGHPUT


You have an application that requires to write 5 items, with each item being 10 KB in size per second. What should you set the write throughput to?


- Each write unit consists of 1 KB of data. You need to write 5 items per second with each item using 10 KB of data


- 5 X 10 KB = 50 write units


- Write throughput of 50 Units



Question 6 - WRITE THROUGHPUT


You have an application that requires to write 12 items of 100 KB per item each second. What should you set the write throughput to?


- Each write unit consists of 1 KB of data. You need to write 12 items per second with each item having 100 KB of data.


- 12 X 100 KB = 1200 write units


- Write throughput of 1200 Units



Error Code


400 HTTP Status Code - ProvisionedThroughputExceededException


You exceeded your maximum allowed provisioned throughput for a table or for one or more global secondary indexes.
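

A hedged example of catching that error with boto3 and backing off before retrying (the SDK already retries it a few times on its own); the table name and key are placeholders:

import time
import boto3
from botocore.exceptions import ClientError

table = boto3.resource('dynamodb', region_name='us-east-1').Table('ProductCatalog')  # placeholder

for attempt in range(5):
    try:
        item = table.get_item(Key={'Id': 205}).get('Item')
        break
    except ClientError as err:
        if err.response['Error']['Code'] != 'ProvisionedThroughputExceededException':
            raise
        time.sleep((2 ** attempt) * 0.1)  # simple exponential backoff before retrying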





========================================





Using Web Identity Providers with DynamoDB


Web Identity Providers


You can authenticate users using Web Identity providers (such as Facebook, Google, Amazon or any other OpenID Connect-compatible identity provider). This is done using the AssumeRoleWithWebIdentity API.


You will need to create a role first.


Calling AssumeRoleWithWebIdentity, you pass in:

1. Web Identity Token

2. App ID of provider

3. ARN of Role


and you get back:

a. AccessKeyID, SecretAccessKey, SessionToken

b. Expiration (time limit)

c. AssumeRoleID

d. SubjectFromWebIdentityToken (the unique ID that appears in an IAM policy variable for this particular identity provider)



Steps taken to authenticate


1. User authenticates with an ID provider (such as Facebook)

2. They are passed a Token by their ID provider

3. Your code calls the AssumeRoleWithWebIdentity API, provides the provider's token, and specifies the ARN for the IAM Role

4. The app can now access DynamoDB for between 15 minutes and 1 hour (default is 1 hour) - see the sketch below
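

A minimal sketch of those four steps with boto3; the role ARN and the token are placeholders (the token would come from the identity provider after the user logs in):

import boto3

# AssumeRoleWithWebIdentity can be called without AWS credentials -
# only the identity provider's token and the role ARN are needed.
sts = boto3.client('sts')
creds = sts.assume_role_with_web_identity(
    RoleArn='arn:aws:iam::123456789012:role/WebAppDynamoRole',    # placeholder role ARN
    RoleSessionName='web-user-session',
    WebIdentityToken='<token returned by the identity provider>', # placeholder token
    DurationSeconds=3600,   # 15 minutes to 1 hour; 1 hour is the default
)['Credentials']

# use the temporary AccessKeyId / SecretAccessKey / SessionToken to reach DynamoDB
dynamodb = boto3.resource(
    'dynamodb',
    region_name='us-east-1',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
print(dynamodb.Table('ProductCatalog').item_count)   # placeholder table name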


========================================


Other important aspects of DynamoDB


Conditional Writes




If item = $10 then update to $12


Note that conditional writes are idempotent. This means that you can send the same conditional write request multiple times, but it will have no further effect on the item after the first time DynamoDB performs the specified update. For example, suppose you issue a request to update the price of a book item by 10%, with the expectation that the price is currently $20. However, before you get a response, a network error occurs and you don't know whether your request was successful or not. Because a conditional update is an idempotent operation, you can send the same request again, and DynamoDB will update the price only if the current price is still $20.
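

A sketch of the "$10 then update to $12" example with boto3; the table name, key, and attribute are placeholders:

import boto3
from botocore.exceptions import ClientError

table = boto3.resource('dynamodb', region_name='us-east-1').Table('ProductCatalog')  # placeholder

try:
    # the write only happens if the price is still $10 when DynamoDB applies it
    table.update_item(
        Key={'Id': 205},
        UpdateExpression='SET Price = :new',
        ConditionExpression='Price = :old',
        ExpressionAttributeValues={':new': 12, ':old': 10},
    )
except ClientError as err:
    if err.response['Error']['Code'] == 'ConditionalCheckFailedException':
        print('Price was no longer $10 - nothing was written')
    else:
        raise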



Atomic Counters


DynamoDB supports atomic counters, where you use the UpdateItem operation to increment or decrement the value of an existing attribute without interfering with other write requests. (All write requests are applied in the order in which they were received.) For example, a web application might want to maintain a counter per visitor to their site. In this case, the application would need to increment this counter regardless of its current value.
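

A minimal atomic counter sketch with boto3 (table, key, and attribute names are placeholders); ADD increments in place without a read-modify-write cycle:

import boto3

table = boto3.resource('dynamodb', region_name='us-east-1').Table('SiteStats')  # placeholder

# increments Visits by 1 regardless of its current value; note this is NOT idempotent,
# so retrying an uncertain request could count the same visit twice
table.update_item(
    Key={'Page': 'home'},
    UpdateExpression='ADD Visits :inc',
    ExpressionAttributeValues={':inc': 1},
)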



Batch Operations


If your application needs to read multiple items, you can use the BatchGetItem API. A single BatchGetItem request can retrieve up to 1 MB of data, which can contain as many as 100 items. In addition, a single BatchGetItem request can retrieve items from multiple tables.
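

A small BatchGetItem sketch with boto3; the table names and keys loosely follow the lab tables and should be adjusted to your own schema:

import boto3

dynamodb = boto3.resource('dynamodb', region_name='us-east-1')

response = dynamodb.batch_get_item(
    RequestItems={
        'ProductCatalog': {'Keys': [{'Id': 101}, {'Id': 102}, {'Id': 103}]},
        'Forum': {'Keys': [{'Name': 'Amazon DynamoDB'}]},
    }
)
for table_name, items in response['Responses'].items():
    print(table_name, len(items))
# anything DynamoDB could not return this time is listed under response['UnprocessedKeys']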



===============================================









Join us from the comfort of your home or office. Register for the AWS re:Invent live streams.

Live Stream Agenda** 
Tuesday Night Live | Tuesday, Nov. 28 | 8:00 PM – 9:30 PM PT
Peter DeSantis, VP, AWS Global Infrastructure

Keynote | Wednesday, Nov. 29 | 8:00 AM – 10:30 AM PT 
Andy Jassy, CEO, Amazon Web Services 

Keynote | Thursday, Nov. 30 | 8:30 AM – 10:30 AM PT 
Werner Vogels, Chief Technology Officer, Amazon.com 

Additional Coverage on Twitch 
For additional coverage and to join in the conversation, tune in to www.twitch.tv/aws, where we will be live streaming keynote recaps, interviews with AWS experts and community leaders, and demos of new product launches. Share your thoughts and get your questions answered during these interactive live streams. For more information on the additional live stream coverage visit https://aws.amazon.com/twitch.

AWS re:Invent Live Stream Sponsored by Intel
Intel invents technology making amazing experiences possible. Powering devices & the cloud you depend on. Visit aws.amazon.com/intel for more information. 




Sincerely,

The Amazon Web Services Team

**Please note that the live stream will be in English only.












CloudGuru (Udemy lecture)


AWS Certified Developer - Associate 2017



================================================================

============= Databases Overview & Concepts ====================

================================================================


Database 101



This section does not feature much in the Exam. It is just fundamental background knowledge on databases.

(DynamoDB is mostly in the Exam and we will learn it from next article)



What is Relational database?



Relational databases are what most of us are all used to. They have been around since the 70's. Think of a traditional spreadsheet

- Database

- Table

- Row

- Fields (Columns)


Relational Database Types

- SQL Server

- Oracle

- MySQL Server

- PostgreSQL

- Aurora

- MariaDB



Non Relational Databases


- Database

  : Collection ==> Table

  : Document ==> Row

  : Key Value Pairs ==> Fields



JSON/NoSQL


Sample






What is Data Warehousing?


Used for business intelligence. Tools like Cognos, Jaspersoft, SQL Server Reporting Services, Oracle Hyperion, and SAP NetWeaver.


Used to pull in very large and complex data sets. Usually used by management to do queries on data (such as current performance vs. targets etc.)



OLTP vs. OLAP


Online Transaction Processing (OLTP) differs from Online Analytics Processing (OLAP) in terms of the types of queries run.




OLTP Example:


Order number 2120121

Pulls up a row of data such as Name, Date, Address to Deliver to , Delivery Status etc.


OLAP


OLAP transaction Example:

Net Profit for EMEA and pacific for the Digital Radio Product.

Pulls in large numbers of records


Sum of Radios Sold in EMEA

Sum of Radios Sold in Pacific

Unit Cost of Radio in each region

Sales price of each radio

Sales price - unit cost.


Data Warehousing databases use different type of architecture both from a database perspective and infrastructure layer.



What is Elasticache?




ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud. The service improves the performance of web applications by allowing you to retrieve information from fast, managed, in-memory caches, instead of relying entirely on slower disk-based databases. ElastiCache supports two open-source in-memory caching engines:

- Memcached

- Redis



What is DMS?




Announced at re:Invent 2015, DMS stands for Database Migration Service.

Allows you to migrate your production database to AWS. Once the migration has started, AWS manages all the complexities of the migration process, like data type transformation, compression, and parallel transfer (for faster data transfer), while ensuring that data changes to the source database that occur during the migration process are automatically replicated to the target.


The AWS Schema Conversion Tool automatically converts the source database schema and a majority of the custom code, including views, stored procedures, and functions, to a format compatible with the target database.





AWS Database Types - Summary


RDS - OLTP

  : SQL

  : MySQL

  : PostgreSQL

  : Oracle

  : Aurora

  : MariaDB


DynamoDB - No SQL

Redshift - OLAP

Elasticache - In Memory Caching

  : Memcached

  : Redis

DMS




