Udemy - Amazon Web Services (AWS) Certified - 4 Certifications!
2020. 4. 3. 08:57 |
Videos, labs & practice exams - AWS Certified (Solutions Architect, Developer, SysOps Administrator, Cloud Practitioner)
4.5 (13,383 ratings)
82,064 students enrolled
Created by BackSpace Academy
Last updated 3/2020
English
English [Auto-generated], Italian [Auto-generated], 2 more
Section 2: AWS Certified Developer Associate Quiz
The ec2-net-utils package is installed on Amazon Linux instances only. See: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/MultipleIP.html
Question 1:
After you assign a secondary private IPv4 address to your instance, you need to configure the operating system on your instance to recognize the secondary private IP address. If you are using an Ubuntu Linux instance, the ec2-net-utils package can take care of this step for you.
- True
- False v
A queue name can have up to 80 characters. The following characters are accepted: alphanumeric characters, hyphens (-), and underscores (_). Queue names are case-sensitive.
Question 2:
Test-queue and test-queue are different queue names.
- True v
- False
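The naming rules above are easy to check mechanically. A minimal sketch (the regex below is an assumption derived from the stated rules, and covers standard queue names only — FIFO queue names additionally end in `.fifo`):

```python
import re

# Validate an SQS standard queue name against the documented rules:
# up to 80 characters; alphanumeric, hyphens (-), and underscores (_).
QUEUE_NAME_RE = re.compile(r"^[A-Za-z0-9_-]{1,80}$")

def is_valid_queue_name(name: str) -> bool:
    return bool(QUEUE_NAME_RE.match(name))

# Names are case-sensitive, so these are two different queues:
# "Test-queue" != "test-queue"
```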
See: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/MultipleIP.html
Question 3:
You want to host multiple secure websites on a single EC2 server using multiple SSL certificates. How can you achieve this?
- Assign a secondary private IPv4 address to a second attached network interface. Associate an elastic IP address with the private IPv4 address. v
- Assign a secondary public IPv4 address to a second attached network interface. Associate an elastic IP address with the public IPv4 address.
- Assign a secondary private IPv6 address to a second attached network interface. Associate an elastic IP address with the private IPv6 address.
- Assign a secondary public IPv6 address to a second attached network interface. Associate an elastic IP address with the public IPv6 address.
- None of the above
The application must be packaged using the CLI package command and deployed using the CLI deploy command. See: https://docs.aws.amazon.com/cli/latest/reference/cloudformation/package.html https://docs.aws.amazon.com/cli/latest/reference/cloudformation/deploy/index.html https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-quick-start.html
Question 4:
You have created and tested an example Lambda Node.js application from the AWS Serverless Application Repository. What are the next steps?
- CloudFormation CLI package and deploy commands v
- CloudFormation CLI create-stack and update-stack commands
- CloudFormation CLI package-stack and deploy-stack commands
- CloudFormation CLI create-change-set and deploy-change-set commands
See: https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html
Question 5:
You have an Amazon Kinesis stream that is consuming records from an application. The stream consists of multiple shards. A Lambda function will process the records from the stream. In what order will the records be processed?
- In the exact order received by the Kinesis stream on a FIFO basis
- In the exact order received by each Kinesis shard on a FIFO basis. Order across shards is not guaranteed. v
- A standard Kinesis stream does not have a guaranteed order. A FIFO Kinesis stream preserves the exact order received on a FIFO basis.
- A standard Kinesis stream does not have a guaranteed order. A LIFO Kinesis stream preserves the exact order received on a LIFO basis.
AWS Step Functions lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly. Using Step Functions, you can design and run workflows that stitch together services such as AWS Lambda and Amazon ECS into feature-rich applications. Workflows are made up of a series of steps, with the output of one step acting as input into the next. Application development is simpler and more intuitive using Step Functions, because it translates your workflow into a state machine diagram that is easy to understand, easy to explain to others, and easy to change.
Question 6:
You have an application that requires coordination between serverless and server based distributed applications. You would like to implement this as a state machine. What AWS service would you use?
- SQS and SNS
- AWS Step Functions v
- EC2 and SNS
- AWS Amplify
Server-side encryption is about data encryption at rest—that is, Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects. See: http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
Question 7:
You have enabled server side encryption on an S3 bucket. How do you decrypt objects?
- The key will be located in the KMS
- The key can be accessed from the IAM console.
- S3 automatically decrypts objects when you download them. v
- None of the above
RDS does not support auto scaling groups or load balancers. Multi-AZ deployment only affects availability. Increasing the RDS instance size increases write and read capacity. Read replicas increase read capacity; each read replica has a different connection string, and Route 53 can be used to route requests to a different instance each time.
Question 8:
You would like to increase the capacity of an RDS application for read-heavy workloads. How would you do this?
- Create an RDS auto scaling group and load balancer
- Use Multi-AZ deployment
- Increase the size of the RDS instance
- Add read replicas with multiple connection strings and use Route 53 Multivalue Answer Routing. v
Bucket policies can only be applied at the bucket level, not to individual objects. You can, however, change object permissions using access control lists (ACLs). See: http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html
Question 9:
How can you apply an S3 bucket policy to an object?
- Use the CLI --grants option
- Use the CLI --policy option
- Use the CLI --permissions option
- None of the above v
User data is specified by you at instance launch. Instance metadata is data about your instance. Because your instance metadata is available from your running instance, you do not need to use the Amazon EC2 console, SDK or the AWS CLI. See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
Question 10:
You have a web application running on an EC2 instance that needs to know the IP address it is running on. How can the application get this information?
- Use curl or an HTTP GET request to http://169.254.169.254/latest/meta-data/ v
- Use curl or an HTTP GET request to http://169.254.169.254/latest/user-data
- Use API/SDK command get-host-address
- Use API/SDK command get-host-ip
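A minimal sketch of querying the metadata endpoint, assuming the code runs on an EC2 instance (the `local-ipv4` key returns the instance's private IPv4 address; the service is only reachable from inside the instance):

```python
from urllib.request import urlopen

# The instance metadata service is always at this link-local address.
METADATA_BASE = "http://169.254.169.254/latest/meta-data/"

def metadata_url(key: str) -> str:
    # e.g. "local-ipv4" -> URL for the instance's private IPv4 address
    return METADATA_BASE + key

def get_metadata(key: str, timeout: float = 2.0) -> str:
    # Only works from inside a running EC2 instance.
    with urlopen(metadata_url(key), timeout=timeout) as resp:
        return resp.read().decode()
```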
New volumes are raw block devices, and you need to create a file system on them before you can mount and use them. See: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html
Question 11:
New EBS volumes are pre-formatted with a file system on them so you can easily mount and use them.
- True
- False v
See: http://docs.aws.amazon.com/IAM/latest/UserGuide/console_account-alias.html
Question 12:
You have created an alias in IAM for your company called super-duper-co. What will be the login address for your IAM users?
- https://super-duper-co.iam.aws.amazon.com/console/
- https://super-duper-co.aws.iam.amazon.com/console/
- https://super-duper-co.signin.aws.amazon.com/console/
- None of the above
You can configure health checks, which are used to monitor the health of the registered instances so that the load balancer can send requests only to the healthy instances. See: http://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html
Question 13:
You have an ELB with multiple EC2 instances registered. One of the instances is unhealthy and not receiving traffic. After the instance becomes healthy again you will need to:
- Change the private IP address of the instance and register with ELB
- Change the public IP address of the instance and register with ELB
- Do nothing, the ELB will automatically direct traffic to the instance when it becomes healthy. v
- None of the above
Never store credentials in application code. Roles used to be the preferred option before the introduction of VPC S3 endpoints. See: https://aws.amazon.com/blogs/aws/new-vpc-endpoint-for-amazon-s3/
Question 14:
You have an application running on an EC2 instance inside a VPC that requires access to Amazon S3. What is the best solution?
- Use AWS configure SDK command in your application to pass credentials via application code.
- Create an IAM role for the EC2 instance
- Create a VPC S3 endpoint v
- None of the above
A subnet can only be associated with one route table at a time, but you can associate multiple subnets with the same route table. See: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html
Question 15:
A VPC subnet can only be associated with one route table at a time, and you cannot associate multiple subnets with the same route table.
- True
- False v
If you are writing code that uses other resources, such as a graphics library for image processing, or you want to use the AWS CLI instead of the console, you need to first create the Lambda function deployment package, and then use the console or the CLI to upload the package. See: https://docs.aws.amazon.com/lambda/latest/dg/nodejs-create-deployment-pkg.html
Question 16:
You have a Node.js Lambda function that relies upon an external graphics library. What is the best way to include the external graphics library without consuming excessive Lambda compute resources?
- Install the libraries with NPM before creating the deployment package v
- Run an Arbitrary Executable script in AWS Lambda to install the libraries
- Create a second lambda function to install the libraries
- Upload library to S3 and import when lambda function executed.
There is no such thing as an API deployment package or an API snapshot. Stages are used to roll out updated APIs. Each stage has its own URL as follows: https://api-id.execute-api.region.amazonaws.com/stage
Question 17:
You have created a JavaScript browser application that calls an API running on Amazon API Gateway. You have made a breaking change to your API and you want to minimise the impact on existing users of your application. You would like all users to be migrated to the new API within one month. What can you do?
- Create a new API and use the new URL in your updated JavaScript application. Delete the old API after 1 month.
- Create a new stage and use the new URL in your updated JavaScript application. Delete the old stage after 1 month. v
- Create a new API deployment package and use the new URL in your updated JavaScript application. Delete the old deployment package after 1 month.
- Create a new stage and use the new URL in your updated JavaScript application. Create an API snapshot then delete the stage after 1 month.
See: http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidated-billing.html
Question 18:
Your organisation would like to have clear separation of costs between departments. What is the best way to achieve this?
- Tag resources by department
- Tag resources by IAM group
- Tag resources by IAM role
- Create separate AWS accounts for departments and use consolidated billing. v
- None of the above
We recommend that you save access logs in a different bucket so that you can easily manage the logs. If you choose to save access logs in the source bucket, we recommend that you specify a prefix for all log object keys so that the object names begin with a common string and the log objects are easier to identify. When your source bucket and target bucket are the same bucket, additional logs are created for the logs that are written to the bucket. See: https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html#server-access-logging-overview
Question 19:
You have implemented server access logging on an S3 bucket. Your source and target buckets are the same. You are finding that your logs are significantly larger than the actual objects being uploaded. What is happening?
- You have enabled S3 replication on the log entries.
- You did not select compression on the S3 logs.
- S3 is recursively creating logs of its own log writes. v
- You did not select compression on the S3 lifecycle policy
You can add an approval action to a stage in an AWS CodePipeline pipeline at the point where you want the pipeline to stop so someone can manually approve or reject the action. See: https://docs.aws.amazon.com/codepipeline/latest/userguide/approvals-action-add.html
Question 20:
You would like to implement an approval process before a stage is deployed on AWS CodePipeline. How would you do this?
- Implement CloudTrail monitoring for the pipeline
- Implement CloudWatch monitoring for the pipeline
- Apply an IAM role to the pipeline
- Add an approval action to the stage v
You can detach an Amazon EBS volume from an instance explicitly or by terminating the instance. However, if the instance is running, you must first unmount the volume from the instance. If an EBS volume is the root device of an instance, you must stop the instance before you can detach the volume. When a volume with an AWS Marketplace product code is detached from an instance, the product code is no longer associated with the instance. See: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-detaching-volume.html
Question 21:
You have an EBS volume that is also the root device attached to a running EC2 instance. What do you need to do to detach it?
- Unmount the volume then detach.
- Stop the instance then detach. v
- Unmount volume, then stop the instance and then detach
- None of the above
You can make API requests directly or by using an integrated AWS service that makes API requests to AWS KMS on your behalf. The limit applies to both kinds of requests. You might store data in Amazon S3 using server-side encryption with AWS KMS (SSE-KMS). Each time you upload or download an S3 object that's encrypted, Amazon S3 makes a GenerateDataKey (for uploads) or Decrypt (for downloads) request to AWS KMS on your behalf. These requests count toward your limit, so AWS KMS throttles the requests. See: https://docs.aws.amazon.com/kms/latest/developerguide/limits.html#requests-per-second
Question 22:
You have a JavaScript application that is used to upload objects to Amazon S3 by hundreds of thousands of clients. You are using server side encryption with the AWS Key Management Service. You are finding that many requests are not working. What is going on?
- You have KMS key rotation implemented
- You have exceeded the KMS API call limit v
- The user STS token has expired
- There is a problem with the bucket permissions
If the front-end connection uses TCP or SSL, then your back-end connections can use either TCP or SSL. If the front-end connection uses HTTP or HTTPS, then your back-end connections can use either HTTP or HTTPS. See: http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-listener-config.html
Question 23:
If the front-end connection of your Classic ELB uses HTTP or HTTPS, then your back-end connections can use ___________.
- TCP or SSL
- TCP, SSL, HTTP or HTTPS
- HTTP or HTTPS v
- None of the above
A bucket owner cannot grant permissions on objects it does not own. For example, a bucket policy granting object permissions applies only to objects owned by the bucket owner. However, the bucket owner, who pays the bills, can write a bucket policy to deny access to any objects in the bucket, regardless of who owns it. The bucket owner can also delete any objects in the bucket. See: http://docs.aws.amazon.com/AmazonS3/latest/dev/access-policy-alternatives-guidelines.html
Question 24:
You have given S3 bucket access to another AWS account. You are trying to change an object's permissions but can't. What do you need to do?
- Change the bucket ACL to public
- Change the bucket policy to public
- Ask the object owner to change permissions v
- None of the above
Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support in Amazon S3, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources. See: http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
Question 25:
You have an HTML5 website with a custom domain name on S3. You have a public software library in another S3 bucket, but your browser prevents it from loading. What do you need to do?
- create a public bucket policy
- enable CORS on the website bucket v
- create a public bucket ACL
- create a public object ACL
- None of the above
Long polling helps reduce the cost of using Amazon SQS by reducing the number of empty responses. You can enable long polling using the AWS Management Console by setting the Receive Message Wait Time to a value greater than 0. See: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-long-polling.html
Question 26:
You have an application that is polling an SQS queue continuously and wasting resources when the queue is empty. What can you do to reduce the resource overhead?
- Implement a load balancer
- Implement a load balancer and autoscaling group of EC2 instances
- Implement a load balancer, autoscaling group of EC2 instances linked to a queue length CloudWatch alarm
- Increase ReceiveMessageWaitTimeSeconds v
- Increase queue visibility Timeout
- None of the above
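A hedged sketch of enabling long polling per request (the parameter-building helper is hypothetical; the boto3 call shown in the comment is the standard way to pass it, but requires AWS credentials and a real queue):

```python
# WaitTimeSeconds > 0 on ReceiveMessage enables long polling: an empty
# queue holds the request open for up to 20 seconds instead of
# returning an empty response immediately.
def receive_params(queue_url: str, wait_seconds: int = 20) -> dict:
    if not 0 <= wait_seconds <= 20:
        raise ValueError("WaitTimeSeconds must be between 0 and 20")
    return {"QueueUrl": queue_url, "WaitTimeSeconds": wait_seconds}

# Usage (requires AWS credentials and an existing queue):
# import boto3
# sqs = boto3.client("sqs")
# msgs = sqs.receive_message(**receive_params(queue_url))
```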
See: https://docs.aws.amazon.com/lambda/latest/dg/lambda-app.html#lambda-app-deploy
Question 27:
You have created a Node.js Lambda function that requires access to multiple third-party packages and libraries. The function integrates with other AWS serverless services. You would like to deploy this application and be able to roll back any deployments that are not successful. What would you do?
- Create a zip file containing your code and libraries. Upload the deployment package using the AWS CLI/SDKs CreateFunction.
- Create a zip file containing your code and libraries. Upload the deployment package using the Lambda console.
- Create a zip file containing your code and libraries. Upload the deployment package using the Lambda console or AWS CLI/SDKs CreateFunction. v
- Create a zip file containing your code and libraries. Upload the deployment package using the Serverless application model (SAM) console.
Amazon RDS uses the MariaDB, MySQL, and PostgreSQL (version 9.3.5 and later) DB engines' built-in replication functionality to create a special type of DB instance called a Read Replica from a source DB instance. Updates made to the source DB instance are asynchronously copied to the Read Replica. When a replica is promoted to master, it no longer synchronizes with the source DB, but the other replicas still synchronize with the source DB. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.Promote
Question 28:
If you have multiple Read Replicas for a master DB Instance and you promote one of them, the remaining Read Replicas will still replicate from the older master DB Instance.
- True v
- False
When you update a stack, you submit changes, such as new input parameter values or an updated template. AWS CloudFormation compares the changes you submit with the current state of your stack and updates only the changed resources. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks.html
Question 29:
When you update a stack, you modify the original stack template, and then AWS CloudFormation:
- updates only the resources that you modified v
- updates all the resources defined in the template
- None of the above
When you rename a DB instance, the endpoint for the DB instance changes, because the URL includes the name you assigned to the DB instance. You should always redirect traffic from the old URL to the new one. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RenameInstance.html
Question 30:
When you rename a DB instance, the endpoint for the DB instance does not change.
- True
- False v
Once you version-enable a bucket, it can never return to an unversioned state. You can, however, suspend versioning on that bucket. https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
Question 31:
Once you version-enable a bucket, it can never return to an unversioned state.
- True v
- False
All read replicas associated with a DB instance remain associated with that instance after it is renamed. For example, suppose you have a DB instance that serves your production database and the instance has several associated read replicas. If you rename the DB instance and then replace it in the production environment with a DB snapshot, the DB instance that you renamed will still have the read replicas associated with it. See: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RenameInstance.html
Question 32:
All read replicas associated with a DB instance remain associated with that instance after it is renamed.
- True v
- False
An application on the instance retrieves the security credentials provided by the role from the instance metadata item iam/security-credentials/role-name. The application is granted the permissions for the actions and resources that you've defined for the role through the security credentials associated with the role. See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#instance-metadata-security-credentials
Question 33:
How can you check that an IAM role with permissions to a Kinesis stream is associated with an EC2 instance?
- CLI command STSAssumeRole followed by describeStreams
- Check the EC2 instance metadata at iam/security-credentials/role-name v
- Check the Kinesis stream logs using the console
- SDK command STSAssumeRole followed by describeStreams
/tmp (local storage) is guaranteed to be available during the execution of your Lambda function. Lambda will reuse your function when possible, and when it does, the content of /tmp will be preserved along with any processes you had running when you previously exited. However, Lambda doesn't guarantee that a function invocation will be reused, so the contents of /tmp (along with the memory of any running processes) could disappear at any time. You should think of /tmp as a way to cache information that can be regenerated or for operations that require a local filesystem, but not as permanent storage.
Question 34:
You have a browser application hosted on Amazon S3. It is making requests to an AWS Lambda function. Every time the Lambda function is called, you lose the session data on the Lambda function. What is the best way to store the data used across multiple Lambda functions?
- Store in lambda function localstorage
- Use AWS SQS
- Use Amazon DynamoDB v
- Use an Amazon Kinesis data stream
See: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html
Question 35:
You have created an e-commerce site using DynamoDB. When creating a primary key on a table which of the following would be the best attribute for the primary key?
- division_id where there are few divisions to many products
- user_id where there are many users to few products v
- product_id where there are many products to many users
- None of the above
Changes the visibility timeout of a specified message in a queue to a new value. The maximum allowed timeout value is 12 hours. See: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ChangeMessageVisibility.html
Question 36:
Using ChangeMessageVisibility from the AWS SQS API will do what?
- Changes the visibility timeout of a specified message in a queue to a new value. v
- Changes the message visibility from true to false.
- Deletes the message after a period of time.
- None of the above
To host your static website, you configure an Amazon S3 bucket for website hosting and then upload your website content to the bucket. The website is then available at the region-specific website endpoint of the bucket: bucket-name.s3-website-region.amazonaws.com See: http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
Question 37:
You've enabled website hosting on a bucket named 'backspace.academy' in the us-east-1 (us standard region). Select the URL you'll receive from AWS as the URL for the bucket.
- backspace.academy.s3-website-us-east-1.amazonaws.com v
- backspace.academy.s3-website.amazonaws.com
- backspace.academy.us-east-1-s3-website.amazonaws.com
- backspace.academy.s3-website-us-east.amazonaws.com
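The endpoint format above is a simple string template, which makes the correct option easy to derive (a sketch of the region-specific format only; some regions historically used a dot, `s3-website.region`, instead of a hyphen):

```python
# Region-specific S3 static website endpoint:
# <bucket-name>.s3-website-<region>.amazonaws.com
def website_endpoint(bucket: str, region: str) -> str:
    return f"{bucket}.s3-website-{region}.amazonaws.com"
```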
Lambda by default can handle up to 1,000 concurrent executions. ElastiCache will not speed up writes; it only speeds up read access. Increasing the size of the RDS instance increases its capacity to handle concurrent connections.
Question 38:
You have created a Lambda function that inserts information into an RDS database over 20 times per minute. You are finding that the execution time is excessive. How can you improve the performance?
- Increase the compute capacity of the Lambda function to enable more concurrent connections
- Increase the memory of the Lambda function to enable more concurrent connections
- Increase the size of the RDS instance v
- Implement ElastiCache in front of the database.
The application must have the X-Ray daemon running on it and must assume a role that has xray:PutTraceSegments and xray:PutTelemetryRecords permissions.
Question 39:
You are using AWS X-Ray to record trace data for requests to your application running on EC2. Unfortunately the trace data is not appearing in the X-Ray console. You are in the Sao Paulo region. What is the most probable cause?
- You do not have permission for X-Ray console access
- The EC2 instance does not have a role with permissions to send trace segments or telemetry records v
- AWS X-Ray does not support EC2 instances
- The Sao Paulo region does not support AWS X-Ray
See: http://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadComplete.html
Question 40:
Parts of a multipart upload will not be completed until the 'complete' request has been called which puts all the parts of the file together.
- True v
- False
Deployment package size limits cannot be changed. Create multiple Lambda functions and coordinate using AWS Step Functions to reduce the package sizes. See: https://docs.aws.amazon.com/lambda/latest/dg/limits.html
Question 41:
You have created a lambda function that is failing when deployed due to the size of the deployment package zip file. What can you do?
- Request a limit increase from AWS
- Create multiple Lambda functions and coordinate using AWS Step Functions
- Upload as a tar file with higher compression
- Increase Lambda function memory allocation
See: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LSI.html
Question 42:
The hash key of the DynamoDB __________ is the same attribute as the hash key of the table. The range key can be any scalar table attribute.
- Local Secondary Index v
- Local Primary Index
- Global Secondary Index
- Global Primary Index
The DisableApiTermination attribute controls whether the instance can be terminated using the console, CLI, or API. By default, termination protection is disabled for your instance. See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html#Using_ChangingDisableAPITermination
Question 43:
The DisableConsoleTermination attribute controls whether the instance can be terminated using the console, CLI, or API. By default, termination protection is disabled.
- True
- False v
There is no such thing as requireMFA. Multi-factor authentication (MFA) increases security for your app by adding another authentication method, and not relying solely on user name and password. You can choose to use SMS text messages, or time-based one-time (TOTP) passwords as second factors in signing in your users. See: https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-mfa.html
Question 44:
You have developed a browser JavaScript application that uses the AWS software development kit. The application accesses sensitive data and you would like to implement Multi Factor authentication. How would you achieve this?
- Use IAM Multi Factor authentication (MFA)
- Use Cognito Multi Factor authentication (MFA) v
- Use requireMFA in the AWS SDK
- Use IAM.requireMFA in the AWS SDK
Packages the local artifacts (local paths) that your AWS CloudFormation template references. The command uploads local artifacts, such as source code for an AWS Lambda function or a Swagger file for an AWS API Gateway REST API, to an S3 bucket. The command returns a copy of your template, replacing references to local artifacts with the S3 location where the command uploaded the artifacts. After you package your template's artifacts, run the deploy command to deploy the returned template. See: https://docs.aws.amazon.com/cli/latest/reference/cloudformation/package.html
Question 45:
You would like to deploy an AWS Lambda function using the AWS CLI. What needs to be done before deploying?
- Create a role for the AWS CLI with lambda permissions
- Package the local artefacts to S3 using cloudformation package CLI command v
- Package the local artefacts to Lambda using cloudformation package CLI command
- Package the local artefacts to SAM using sam package CLI command
In API Gateway, an API's method request can take a payload in a different format from the corresponding integration request payload, as required in the backend. Similarly, the backend may return an integration response payload different from the method response payload, as expected by the frontend. API Gateway lets you use mapping templates to map the payload from a method request to the corresponding integration request and from an integration response to the corresponding method response. See: https://docs.aws.amazon.com/apigateway/latest/developerguide/models-mappings.html
Question 46:
You would like to use Amazon API gateway to interface with an existing SOAP/XML backend. API Gateway will receive requests and forward them to the SOAP backend. How can you achieve this?
- Use API Gateway mapping templates to transform the data for the SOAP backend v
- Use API Gateway data translation to transform the data for the SOAP backend
- Use a Lambda function to transform the data for the SOAP backend
- Use an EC2 instance with a load balancer to transform the data for the SOAP backend.
A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. See: https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_BatchGetItem.html
Question 47:
A single DynamoDB BatchGetItem request can retrieve up to 16 MB of data, which can contain as many as 25 items.
- True
- False v
Only BucketAlreadyExists and BucketNotEmpty return HTTP 409. See: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html#RESTErrorResponses
Question 48:
The following error codes all have an HTTP status code of 409:
AccessDenied
BucketAlreadyExists
BucketNotEmpty
IncompleteBody
- True
- False v
A tag is a label that you or AWS assigns to an AWS resource. Each tag consists of a key and a value. For a given resource, each tag key must be unique and can have only one value. You can use tags to organize your resources, and cost allocation tags to track your AWS costs on a detailed level. See: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
Question 49:
You can use tags to organize your AWS bill to reflect your own cost structure.
- True v
- False
Scan operations proceed sequentially; however, for faster performance on a large table or secondary index, applications can request a parallel Scan operation by providing the Segment and TotalSegments parameters. A parallel scan with a large number of workers can easily consume all of the provisioned throughput for the table or index being scanned. It is best to avoid such scans if the table or index is also incurring heavy read or write activity from other applications. To control the amount of data returned per request, use the Limit parameter. This can help prevent situations where one worker consumes all of the provisioned throughput, at the expense of all other workers.
Question 50:
You would like to increase the throughput of a table scan but still leave capacity for the day to day workload. How would you do this?
- Use a sequential scan with the rate-limit parameter
- Use a parallel scan with a rate-limit parameter v
- Use a query scan with the rate-limit parameter
- Increase read capacity on a schedule.
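The per-worker request parameters a parallel scan needs can be built up front; the sketch below (plain Python, no AWS calls, `Orders` is a made-up table name) shows one way to assign each worker its Segment and cap its page size with Limit:

```python
def parallel_scan_kwargs(table_name, total_segments, limit=None):
    """Yield the per-worker Scan parameters for a parallel scan.

    Each worker gets a distinct Segment in [0, TotalSegments); the
    optional Limit caps items returned per request so that no single
    worker drains all of the table's provisioned read capacity.
    """
    for segment in range(total_segments):
        kwargs = {
            "TableName": table_name,
            "Segment": segment,
            "TotalSegments": total_segments,
        }
        if limit is not None:
            kwargs["Limit"] = limit
        yield kwargs

workers = list(parallel_scan_kwargs("Orders", total_segments=4, limit=25))
```

Each dict would then be passed to a worker's Scan call (e.g. boto3's `client.scan(**kwargs)`), with the worker paging via LastEvaluatedKey as usual.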
Each 6 KB item rounds up to 8 KB, i.e. 2 x 4 KB read units per strongly consistent read; 100 reads/second x 2 units = 200 read capacity units. http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ProvisionedThroughput.html
Question 51:
Your items are 6KB in size and you want to have 100 strongly consistent reads per second. How many DynamoDB read capacity units do you need to provision?
- 100
- 200 v
- 300
- 600
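The arithmetic behind the answer is simple enough to sketch as a helper:

```python
import math

def strongly_consistent_rcus(item_size_kb, reads_per_second):
    """Read capacity units for strongly consistent reads.

    One RCU covers one strongly consistent read per second of an item
    up to 4 KB; larger items consume ceil(size / 4 KB) RCUs per read.
    """
    return math.ceil(item_size_kb / 4) * reads_per_second

strongly_consistent_rcus(6, 100)  # 6 KB rounds up to 2 units x 100 reads = 200
```

For eventually consistent reads the requirement halves, and for writes the unit size is 1 KB instead of 4 KB.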
See: http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
Question 52:
If you anticipate that your S3 workload will consistently exceed 100 PUT/LIST/DELETE requests per second or more than 300 GET requests per second, you should avoid sequential key names.
- True v
- False
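The fix recommended in that (now historical) guidance was to randomize key prefixes so that keys no longer sort sequentially. A minimal sketch of the idea, where the four-character hash prefix length is an arbitrary choice:

```python
import hashlib

def prefixed_key(key):
    """Prepend a short hash so object keys do not sort sequentially.

    Randomized prefixes spread objects across S3 index partitions,
    which was the recommended fix for sustained high request rates.
    """
    prefix = hashlib.md5(key.encode()).hexdigest()[:4]
    return f"{prefix}/{key}"

prefixed_key("2020-04-03/logs/server-01.txt")  # e.g. "a1b2/2020-04-03/logs/server-01.txt"
```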
You can use the CreateQueue action to create a delay queue by setting the DelaySeconds attribute to any value between 0 and 900 (15 minutes). You can also change an existing queue into a delay queue using the SetQueueAttributes action to set the queue's DelaySeconds attribute. See: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-delay-queues.html
Question 53:
You can use CreateQueue to create an SQS delay queue by setting the DelaySeconds attribute to any value between 0 and 900 (15 minutes).
- True v
- False
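Since SQS rejects DelaySeconds values outside the allowed range, a small validation helper (plain Python; the dict matches the Attributes shape CreateQueue and SetQueueAttributes expect) makes the constraint concrete:

```python
def delay_queue_attributes(delay_seconds):
    """Build the Attributes map for an SQS delay queue.

    DelaySeconds must be between 0 and 900 (15 minutes); SQS rejects
    values outside that range, so we validate client-side too.
    """
    if not 0 <= delay_seconds <= 900:
        raise ValueError("DelaySeconds must be between 0 and 900")
    return {"DelaySeconds": str(delay_seconds)}

attrs = delay_queue_attributes(120)  # {"DelaySeconds": "120"}
```

The resulting dict would be passed as the Attributes parameter of CreateQueue (new queue) or SetQueueAttributes (existing queue).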
IAM users must explicitly be given permissions to administer users or credentials for themselves or for other IAM users. See: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_delegate-permissions.html
Question 54:
IAM users do not need to be explicitly given permissions to administer credentials for themselves.
- True
- False v
Question 55:
Each queue starts with a default setting of 30 seconds for the visibility timeout.
- True v
- False
In addition to simple retries, each AWS SDK implements an exponential backoff algorithm for better flow control. If you're not using an AWS SDK, you should retry original requests that receive server (5xx) or throttling errors. However, client errors (4xx) indicate that you need to revise the request to correct the problem before trying again. If the rate limit is still being exceeded after backing off, contact AWS to request a limit increase. See: https://docs.aws.amazon.com/general/latest/gr/api-retries.html
Question 56:
You have developed an application that calls the Amazon CloudWatch API. Every now and again your application receives ThrottlingException (HTTP status code 400) errors when making GetMetricData calls. How can you fix this problem?
- Implement an exponential backoff algorithm for retries v
- Use the GetBatchData API call
- Request a limit increase from AWS
- Increase CloudWatch IOPS
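The retry behaviour the answer describes can be sketched as exponential backoff with full jitter; the base and cap values below are illustrative choices, not the SDK defaults:

```python
import random

def backoff_delays(max_retries, base=0.1, cap=20.0, seed=None):
    """Exponential backoff with full jitter.

    Retry n waits a random time in [0, min(cap, base * 2**n)] seconds,
    so successive retries spread out and retrying clients desynchronize.
    """
    rng = random.Random(seed)
    return [rng.uniform(0, min(cap, base * 2 ** n)) for n in range(max_retries)]
```

In practice the loop around the API call sleeps for each delay in turn, retrying only on throttling or 5xx responses and giving up after the last delay.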
Question 57:
Your AWS CodeBuild project keeps failing to compile your code. How can you identify what is happening?
- Define a CloudWatch event in your buildspec.yml file
- Enable CloudTrail logging
- Enable CloudWatch Logs
- Check the build logs in the CodeBuild console v
You can work with tags using the AWS Management Console, the AWS CLI, and the Amazon EC2 API. See: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html
Question 58:
You can work with tags using the AWS Management Console, the Amazon EC2 command line interface (CLI), and the Amazon EC2 API.
- True v
- False
The demand is not continuous, so it is best to back off and try again. If the demand were continuous, you would look at increasing capacity. See: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.MessagesAndCodes See: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff
Question 59:
You have a DynamoDB table that keeps reporting many failed requests with a ProvisionedThroughputExceededException in CloudWatch. The requests are not continuous but occur a number of times during the day, for a few seconds each. What is the best solution for reducing the errors?
- Create a CloudWatch alarm to retry the failed requests
- Implement exponential backoff and retry v
- Increase the provisioned capacity of the DynamoDB table
- Implement a secondary index
See: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_GetQueueAttributes.html
Question 60:
__________________ returns the approximate number of SQS messages that are not timed-out and not deleted.
- NumberOfMessagesNotVisible
- ApproximateNumberOfMessagesNotVisible v
- ApproximateNumberOfMessages
- ApproximateNumberOfMessagesVisible
- None of the above