
I finished the Exam Prep: AWS Certified Solutions Architect - Associate course on Coursera and took the practice test.

Out of 31 questions, I scored 64.51% on my first attempt.

On the second attempt, 90.32%.

It took a third attempt to get 100%.

I want to go back through each question carefully and review why my answers were right or wrong.

 

Benchmark Assessment grades: 64.51% on the first try, 90.32% on the second, 100% on the third. To pass: 80% or higher.

 

1.

Question 1

A company's application allows users to upload image files to an Amazon S3 bucket. These files are accessed frequently for the first 30 days. After 30 days, these files are rarely accessed, but need to be durably stored and available immediately upon request. A solutions architect is tasked with configuring a lifecycle policy that minimizes the overall cost while meeting the application requirements. Which action will accomplish this?

4.1 Identify cost-effective storage solutions

1 / 1 point

 

Configure a lifecycle policy to move the files to S3 Glacier after 30 days.

 

Configure a lifecycle policy to move the files to S3 Glacier Deep Archive after 30 days.

 

Configure a lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.

 

Configure a lifecycle policy to move the files to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.

 

Correct answer: 3

Correct

Correct. Using a lifecycle policy to move data to S3 Standard-IA satisfies all application requirements and provides the lowest-cost option. To learn more about S3 Standard-IA, see: Amazon S3 Storage Classes

Glacier cannot satisfy the "available immediately upon request" requirement.
S3 One Zone-IA cannot satisfy the "durably stored" requirement.
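For reference, the lifecycle rule described in the correct answer can be expressed with boto3 roughly like this (a minimal sketch; the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")

# Move objects to S3 Standard-IA 30 days after creation.
# "my-image-uploads" is a placeholder bucket name.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-image-uploads",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-standard-ia-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object in the bucket
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)
```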

 

 

2.

Question 2

A company needs to implement a secure data encryption solution to meet regulatory requirements. The solution must provide security and durability in generating, storing, and controlling cryptographic data keys. Which action should be taken to provide the MOST secure solution?

3.3 Select appropriate data security options

1 / 1 point

 

Use AWS Key Management Service (AWS KMS) to generate AWS KMS keys and data keys. Use AWS KMS key policies to control access to the KMS keys.

 

Use AWS Key Management Service (AWS KMS) to generate cryptographic keys and import the keys to AWS Certificate Manager. Use IAM policies to control access to the keys.

 

Use a third-party solution from AWS Marketplace to generate the cryptographic keys and store them on encrypted instance store volumes. Use IAM policies to control access to the encryption key APIs.

 

Use OpenSSL to generate the cryptographic keys and upload the keys to an Amazon S3 bucket with encryption enabled. Apply AWS Key Management Service (AWS KMS) key policies to control access to the keys.

 

Correct answer: 1

Correct

Correct. AWS KMS with customer controlled KMS keys meets all the requirements. To learn more about AWS KMS, see: AWS Key Management Service

AWS KMS makes it easy to create and manage encryption keys and to control their use across AWS services and applications.
The other options are less secure than using the integrated AWS KMS service.
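As a rough sketch of the pattern in option 1, an application asks KMS for a data key and keeps only the encrypted copy at rest (the key alias is a placeholder; access would be restricted through the key policy):

```python
import boto3

kms = boto3.client("kms")

# "alias/app-data-key" is a placeholder for a customer managed KMS key
# whose key policy controls who may call GenerateDataKey / Decrypt.
response = kms.generate_data_key(KeyId="alias/app-data-key", KeySpec="AES_256")

plaintext_key = response["Plaintext"]       # use for local encryption, then discard
encrypted_key = response["CiphertextBlob"]  # store next to the encrypted data
```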

 

 

3.

Question 3

A startup company is looking for a solution to cost-effectively run and access microservices without the operational overhead of managing infrastructure. The solution needs to be able to scale quickly to accommodate rapid changes in the volume of requests and protect against common DDoS attacks. What is the MOST cost-effective solution that meets these requirements?

4.2 Identify cost-effective compute and database services

0 / 1 point

 

Run the microservices in containers using AWS Elastic Beanstalk.

 

Run the microservices in AWS Lambda behind an Amazon API Gateway.

 

Run the microservices on Amazon EC2 instances in an Auto Scaling group.

 

Run the microservices in containers using Amazon Elastic Container Service (Amazon ECS) backed by EC2 instances.

Incorrect

Incorrect. Amazon ECS is a highly scalable, fast, container management service that you can use to run, stop, and manage Docker containers on a cluster. However, you must manage the underlying EC2 instances unless you use AWS Fargate. Also, cluster scaling might not be fast enough to handle rapid changes in request volume. To learn more about Amazon ECS, see: What is Amazon Elastic Container Service?
I thought ECS would also handle DDoS protection, but apparently not.
Given the "without the operational overhead of managing infrastructure" requirement, AWS Lambda looks like the right answer.
A microservice architecture is an architectural and organizational approach to software development in which software is composed of small, independent services that communicate over well-defined APIs, each owned by a small, self-contained team.

Correct answer: 2 (got it right on the second attempt)
Correct

Correct. Lambda is a compute service that you can use to run code without provisioning or managing servers. Lambda runs code only when needed. It is a cost-effective solution because there is no charge for idle resources. To learn more about Lambda, see: What is AWS Lambda?
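To make the "no servers to manage" point concrete, a microservice behind API Gateway can be as small as a single handler function; a minimal sketch assuming the API Gateway proxy integration event shape:

```python
import json

def lambda_handler(event, context):
    # API Gateway passes the HTTP request details in `event`.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```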

 

4.

Question 4

A solutions architect needs to design a secure environment for AWS resources that are being deployed to Amazon EC2 instances in a VPC. The solution should support a three-tier architecture consisting of web servers, application servers, and a database cluster. The VPC needs to allow resources in the web tier to be accessible from the internet with only the HTTPS protocol. Which combination of actions would meet these requirements? (Select TWO.)

3.2 Design secure application tiers

1 / 1 point

 

Attach Amazon API Gateway to the VPC. Create private subnets for the web, application, and database tiers.
If the web tier is in a private subnet it cannot be reached from the internet, so the web tier must be in a public subnet.

 

Attach an internet gateway to the VPC. Create public subnets for the web tier. Create private subnets for the application and database tiers.

Correct

Correct. Only the web tier needs to be in public subnets. The application and database tiers should be in private subnets. To learn more about internet gateways, public subnets, and private subnets, see: VPCs and subnets

 

Attach a virtual private gateway to the VPC. Create public subnets for the web and application tiers. Create private subnets for the database tier.
The application tier does not need to be public; making it public risks exposing business logic to the outside.

 

Create a web server security group that allows all traffic from the internet. Create an application server security group that allows requests from only the Amazon API Gateway on the application port. Create a database cluster security group that allows TCP connections from the application security group on the database port only.
The web server must allow only HTTPS, so allowing all traffic from the internet does not meet the requirement.

 

Create a web server security group that allows HTTPS requests from the internet. Create an application server security group that allows requests from the web security group only. Create a database cluster security group that allows TCP connections from the application security group on the database port only.

 

Correct answers: 2 and 5

 

Correct

Correct. Putting the web tier in public subnets allows for greater access to the resource while protecting it from traffic on unrequired ports. Restricting traffic to the application and database tiers helps protect them from accidental and malicious access. It also helps ensure that each tier is accessed only through secure communication with the previous tier. To learn more about securing traffic in a VPC, see: Security groups for your VPC

 

 

5.

Question 5

A solutions architect has been given a large number of video files to upload to an Amazon S3 bucket. The file sizes are 100–500 MB. The solutions architect also wants to easily resume failed upload attempts. How should the solutions architect perform the uploads in the LEAST amount of time?

2.2 Select high-performing and scalable storage solutions for a workload

1 / 1 point

 

Split each file into 5-MB parts. Upload the individual parts normally and use S3 multipart upload to merge the parts into a complete object.

 

Using the AWS CLI, copy individual objects into the S3 bucket with the aws s3 cp command.
The AWS CLI performs multipart uploads automatically.

 

From the Amazon S3 console, select the S3 bucket. Upload the S3 bucket, and drag and drop items into the bucket.

 

Upload the files with SFTP and the AWS Transfer Family.

 

Correct answer: 2

Correct

Correct. It is a best practice to use aws s3 commands (such as aws s3 cp) for multipart uploads and downloads. These aws s3 commands automatically perform multipart uploading and downloading based on the file size. To learn more about using the AWS CLI to perform multipart uploads, see: How do I use the AWS CLI to perform a multipart upload of a file to Amazon S3?
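The same automatic multipart behaviour is available from code as well; a minimal boto3 sketch (file, bucket, and key names are placeholders):

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files larger than multipart_threshold are split into parts automatically,
# and a failed transfer can simply be retried.
config = TransferConfig(multipart_threshold=8 * 1024 * 1024,  # 8 MB
                        multipart_chunksize=8 * 1024 * 1024)

s3.upload_file("video-001.mp4", "my-video-bucket", "uploads/video-001.mp4",
               Config=config)
```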

6.

Question 6

A gaming company is experiencing exponential growth. On multiple occasions, customers have been unable to access resources. To keep up with the increased demand, Management is considering deploying a cloud-based solution. The company is looking for a solution that can match the on-premises resilience of multiple data centers, and is robust enough to withstand the increased growth activity. Which configuration should a Solutions Architect implement to deliver the desired results?

1.2 Design highly available and/or fault-tolerant architectures

1 / 1 point

 

A VPC configured with an ELB Application Load Balancer targeting an EC2 Auto Scaling group consisting of Amazon EC2 instances in one Availability Zone → A single Availability Zone cannot guarantee fault tolerance.

 

Multiple Amazon EC2 instances configured within peered VPCs across two Availability Zones

 

A VPC configured with an ELB Network Load Balancer targeting an EC2 Auto Scaling group consisting of Amazon EC2 instances spanning two Availability Zones
A Network Load Balancer can handle a very large volume of requests. It load balances at the transport layer (TCP/UDP, Layer 4).
Network LB can handle traffic bursts, retain the source IP of the client and use a fixed IP for the life of the load balancer.

 

A VPC configured with an ELB Application Load Balancer targeting an EC2 Auto Scaling group consisting of Amazon EC2 instances spanning two AWS Regions
→ An Application Load Balancer load balances at the application layer (HTTP/HTTPS) and supports path-based routing.

 

Correct answer: 3

 

Correct

Correct. The Network Load Balancer can handle millions of requests per second, while maintaining ultra-low latency. Combined with an Auto Scaling group, the Network Load Balancer can handle volatile traffic patterns. Setting the Auto Scaling group targets across multiple Availability Zones will make this highly available. To learn more about automatic scaling, see: Configure an Application Load Balancer or Network Load Balancer using the Amazon EC2 Auto Scaling console

 

 

7.

Question 7

A Solutions Architect must secure the network traffic for two applications running on separate Amazon EC2 instances in the same subnet. The applications are called Application A and Application B. Application A requires that inbound HTTP requests be allowed and all other inbound traffic be blocked. Application B requires that inbound HTTPS traffic be allowed and all other inbound traffic be blocked, including HTTP traffic. What should the Solutions Architect use to meet these requirements?

3.2 Design secure application tiers

0 / 1 point

 

Configure the access with network access control lists (network ACLs).
Network ACLs operate at the subnet level. Because both EC2 instances are in the same subnet, network ACLs cannot meet the requirement.

 

Configure the access with security groups. → Is this the answer?
Security groups are applied at the individual EC2 instance level. They support only allow rules; deny rules cannot be configured.
A security group acts as a virtual firewall, controlling the traffic that is allowed to reach and leave the resources that it is associated with. For example, after you associate a security group with an EC2 instance, it controls the inbound and outbound traffic for the instance.

 

Configure the network connectivity with VPC peering.
VPC peering routes traffic between VPCs using private IP addresses (IPv4 and IPv6).

 

Configure the network connectivity with route tables. → My second-attempt answer; wrong.
The route table contains existing routes with targets other than a network interface, Gateway Load Balancer endpoint, or the default local route. The route table contains existing routes to CIDR blocks outside of the ranges in your VPC. Route propagation is enabled for the route table.

 

 

Incorrect

Incorrect. Though network ACLs can allow and block traffic, they operate at the subnet boundary. They use one set of rules for all traffic that enters or leaves a particular subnet. Because the EC2 instances for both applications are in the same subnet, they would use the same network ACL. However, the question requires different security requirements for each application. To learn more about securing traffic as it enters or leaves a subnet, see: Network ACLs

Incorrect

Incorrect. A route table contains a set of rules, called routes, that are used to determine where network traffic from your subnet or gateway is directed. It does not provide any ability to block traffic as requested for applications that are in the same subnet. To learn more about routing in Amazon VPC, see: Route tables for your VPC

On the second attempt, option 4 was also wrong.

Correct answer: 2
Correct

Correct. A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. They support allow rules only, and they block all other traffic if a matching rule is not found. Security groups are applied specifically at the instance level, so different instances in the same subnet can have different rules applied to them. To learn more about securing traffic at the EC2 instance boundary, see: Security groups for your VPC
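A minimal sketch of the per-instance rules the answer describes, using boto3 (the VPC ID and group names are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder

# One security group per application, attached to the matching EC2 instance.
sg_a = ec2.create_security_group(GroupName="app-a-http",
                                 Description="Allow inbound HTTP only",
                                 VpcId=vpc_id)["GroupId"]
sg_b = ec2.create_security_group(GroupName="app-b-https",
                                 Description="Allow inbound HTTPS only",
                                 VpcId=vpc_id)["GroupId"]

ec2.authorize_security_group_ingress(
    GroupId=sg_a,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}])
ec2.authorize_security_group_ingress(
    GroupId=sg_b,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}])
```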

 

 

8.

Question 8

A data processing facility wants to move a group of Microsoft Windows servers to the AWS Cloud. These servers require access to a shared file system that can integrate with the facility's existing Active Directory infrastructure for file and folder permissions. The solution needs to provide seamless support for shared files with AWS and on-premises servers and allow the environment to be highly available. The chosen solution should provide added security by supporting encryption at rest and in transit. Which storage solution would meet these requirements?

4.1 Identify cost-effective storage solutions

0 / 1 point

 

An Amazon S3 File Gateway joined to the existing Active Directory domain

 

An Amazon FSx for the Windows File Server file system joined to the existing Active Directory domain
FSx for Windows File Server has very strong affinity with Windows.

 

An Amazon Elastic File System (Amazon EFS) file system joined to an AWS Managed Microsoft AD domain
EFS is Linux-based.

 

An Amazon S3 bucket mounted on Amazon EC2 instances in multiple Availability Zones running Windows Server

 

Incorrect

Incorrect. Amazon EFS is a scalable, elastic file system for Linux based workloads. It is not supported for the Windows based instances. To learn more about Amazon EFS, see: What is Amazon Elastic File System?

 

Correct answer: 2 (got it right on the second attempt)
Correct

Correct. Amazon FSx provides a fully managed native Microsoft Windows file system so you can easily move your Windows-based applications that require file storage to AWS. With Amazon FSx, there are no upfront hardware or software costs. You pay for only the resources used, with no minimum commitments, setup costs, or additional fees. To learn more about Amazon FSx, see: What is FSx for Windows File Server? To learn more about Using Microsoft Windows file shares, see: Using Microsoft Windows file shares

 

9.

Question 9

A Solutions Architect notices an abnormal amount of network traffic coming from an Amazon EC2 instance. The traffic is determined to be malicious and the destination needs to be determined. What tool can the Solutions Architect use to identify the destination of the malicious network traffic?

3.2 Design secure application tiers

1 / 1 point

 

Enable AWS CloudTrail and filter the logs.

 

Enable VPC Flow Logs and filter the logs.

 

Consult the AWS Personal Health Dashboard.

 

Filter the logs from Amazon CloudWatch.

 

Correct answer: 2

Correct

Correct. VPC Flow Logs is a feature that you can use to capture information about the IP traffic going to and from network interfaces in a VPC. To learn more about flow log basics, see: VPC Flow Logs
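A minimal sketch of turning on flow logs for a VPC with boto3 (the VPC ID, log group, and role ARN are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",                      # capture accepted and rejected traffic
    LogDestinationType="cloud-watch-logs",
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
```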

 

 

10.

Question 10

A company is deploying an environment for a new data processing application. This application will be frequently accessed by 20 different departments across the globe seeking to run analytics. The company plans to charge each department for the cost of that department's access. Which solution will meet these requirements with the LEAST effort?

2.2 Select high-performing and scalable storage solutions for a workload

1 / 1 point

 

Amazon Aurora with global databases. Each department will query a database in a different Region, and the Region is tagged in the billing console.

 

PostgreSQL on Amazon RDS, with read replicas for each department. Each department will query the read replica tagged for their team in the billing console.

 

Amazon Redshift, with clusters set up for each department. Each department will query the cluster tagged for their team in the billing console.

 

Amazon Athena with workgroups set up for each department. Each department will query via the workgroup tagged for their team in the billing console.

 

Correct answer: 4

 

Correct

Correct. Amazon Athena can query data in Amazon S3, and workgroups are purpose-built for cost allocation. For more information about Amazon Athena workgroups, see: Using Workgroups to Control Query Access and Costs

 

 

11.

Question 11

A company is migrating its on-premises application to Amazon Web Services and refactoring its design. The design will consist of frontend Amazon EC2 instances that receive requests, backend EC2 instances that process the requests, and a message queuing service to decouple the application. The Solutions Architect has been informed that a key aspect of the application is that requests are processed in the order in which they are received. Which AWS service should the Solutions Architect use to decouple the application?

1.3 Design decoupling mechanisms using AWS services

1 / 1 point

 

Amazon Simple Queue Service (Amazon SQS) standard queue

 

Amazon Simple Notification Service (Amazon SNS)

 

Amazon Simple Queue Service (Amazon SQS) FIFO queue

 

Amazon Kinesis

 

Correct answer: 3

 

Correct

Correct. Amazon SQS FIFO (First In First Out) queues process messages in the order they are received. To learn more about Amazon SQS queue types, see: Amazon SQS features

 

 

12.

Question 12

An API receives a high volume of sensor data. The data is written to a queue before being processed to produce trend analysis and forecasting reports. With the current architecture, some data records are being received and processed more than once. How can a solutions architect modify the architecture to ensure that duplicate records are not processed?

1.3 Design decoupling mechanisms using AWS services

1 / 1 point

 

Configure the API to send the records to Amazon Kinesis Data Streams.

 

Configure the API to send the records to Amazon Kinesis Data Firehose.

 

Configure the API to send the records to Amazon Simple Notification Service (Amazon SNS).

 

Configure the API to send the records to an Amazon Simple Queue Service (Amazon SQS) FIFO queue.

 

Correct answer: 4

 

Correct

Correct: The FIFO queue improves on and complements the standard queue. The most important features of this queue type are FIFO (First-In-First-Out) delivery and exactly-once processing. The order that messages are sent and received in is strictly preserved. A message is delivered once, and remains available until a consumer processes and deletes it. Duplicates are not introduced into the FIFO queue. To learn more about Amazon SQS and FIFO queues, see: Message ordering
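A rough sketch of producing to such a FIFO queue with boto3; the queue URL and IDs are placeholders, and the deduplication ID is what lets the queue drop a record the API sends twice:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/sensor-data.fifo"  # placeholder

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"sensor_id": "s-42", "reading": 17.3}',
    MessageGroupId="s-42",                               # preserves order per sensor
    MessageDeduplicationId="s-42-2019-08-01T12:00:00Z",  # duplicates with the same ID are dropped
)
```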

 

 

13.

Question 13

After reviewing the cost optimization checks in AWS Trusted Advisor, a team finds that it has 10,000 Amazon Elastic Block Store (Amazon EBS) snapshots in its account that are more than 30 days old. The team has determined that it needs to implement better governance for the lifecycle of its resources. Which actions should the team take to automate the lifecycle management of the EBS snapshots with the LEAST effort? (Select TWO.)

4.1 Identify cost-effective storage solutions

0 / 1 point

 

Create and schedule a backup plan with AWS Backup. → This looks like the answer.
With AWS Backup you can centralize and automate data protection across AWS services and hybrid workloads. AWS Backup is a fully managed, policy-based service that makes it simple and cost-effective to protect data at scale.
Correct

Correct. The team wants to automate the lifecycle management of EBS snapshots. AWS Backup is a centralized backup service that automates backup processes for application data across AWS services in the AWS Cloud. It is designed to help you meet business and regulatory backup compliance requirements. AWS Backup provides a central place where you can configure and audit the AWS resources that you want to back up. You can also automate backup scheduling, set retention policies, and monitor all recent backup and restore activity. To learn more, see: What is AWS Backup?

 

 

Copy the EBS snapshots to Amazon S3, and then create lifecycle configurations in the S3 bucket.
There is a simpler way.

This should not be selected

Incorrect. Though this solution meets the technical requirement, it does not meet the requirement for the least effort. To copy EBS snapshots and set up lifecycle policies on the S3 bucket, the team would need to provide manual effort or create scripts that would need to be hosted somewhere. To learn more, see: Copy an Amazon EBS snapshot

 

Use Amazon Data Lifecycle Manager (Amazon DLM).

Correct

Correct. With Amazon DLM, you can manage the lifecycle of your AWS resources through lifecycle policies. Lifecycle policies automate operations on specified resources. The team requires lifecycle management for EBS snapshots, and Amazon DLM supports EBS volumes and snapshots. To learn more about Amazon DLM, see: Amazon Data Lifecycle Manager

 

Use a scheduled event in Amazon EventBridge (Amazon CloudWatch Events) and invoke AWS Step Functions to manage the snapshots.
Amazon EventBridge is a serverless event bus that makes it easy to build event-driven applications at scale, using events generated from your own applications, integrated Software-as-a-Service (SaaS) applications, and AWS services.

 

Schedule and run backups in AWS Systems Manager.

Correct answers: 1 and 3 (got them right on the second attempt)
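A rough sketch of what the Amazon DLM option looks like in code (role ARN, tag, and schedule values are placeholders; the policy shape follows the CreateLifecyclePolicy API as I understand it):

```python
import boto3

dlm = boto3.client("dlm")

dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily EBS snapshots, keep the last 7",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "backup", "Value": "daily"}],
        "Schedules": [{
            "Name": "daily-snapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7},
        }],
    },
)
```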

 

 

14.

Question 14

A company is deploying a production portal application on AWS. The database tier runs on a MySQL database. The company requires a highly available database solution that maximizes ease of management. How can the company meet these requirements?

1.2 Design highly available and/or fault-tolerant architectures

1 / 1 point

 

Deploy the database on multiple Amazon EC2 instances that are backed by Amazon Elastic Block Store (Amazon EBS) across multiple Availability Zones. Schedule periodic EBS snapshots.

 

Use Amazon RDS with a Multi-AZ deployment. Schedule periodic database snapshots.

 

Use Amazon RDS with a Single-AZ deployment. Schedule periodic database snapshots.

 

Use Amazon DynamoDB with an Amazon DynamoDB Accelerator (DAX) cluster. Create periodic on-demand backups.

 

Correct answer: 2

 

Correct

Correct. Amazon RDS with a Multi-AZ deployment provides automatic failover with minimum manual intervention and it is highly available. To learn more, see: High availability (Multi-AZ) for Amazon RDS

 

 

15.

Question 15

A company requires operating system permissions on a relational database server. What should a solutions architect suggest as a configuration for a highly available database architecture?

1.2 Design highly available and/or fault-tolerant architectures

0 / 1 point

 

Multiple Amazon EC2 instances in a database replication configuration that uses two Availability Zones → Is this the answer?

 

A database installed on a single Amazon EC2 instance in an Availability Zone

 

Amazon RDS in a Multi-AZ configuration with Provisioned IOPS

 

Multiple Amazon EC2 instances in a replication configuration that uses a placement group

Incorrect

Incorrect. This solution meets the requirement for high availability, but it does not provide access to the operating system. To learn more about when to use EC2 instances, see: Amazon EC2 for Oracle - When to choose Amazon EC2

Correct answer: 1 (got it right on the second attempt)
Correct

Correct. EC2 instances allow access to the operating system. In addition, spanning two Availability Zones helps ensure high availability. To learn more about best practices for databases, see: Web Application Hosting in the AWS Cloud

 

 

16.

Question 16

A company has developed an application that processes photos and videos. When users upload photos and videos, a job processes the files. The job can take up to 1 hour to process long videos. The company is using Amazon EC2 On-Demand Instances to run web servers and processing jobs. The web layer and the processing layer have instances that run in an Auto Scaling group behind an Application Load Balancer. During peak hours, users report that the application is slow and that the application does not process some requests at all. During evening hours, the systems are idle. What should a solutions architect do so that the application will process all jobs in the MOST cost-effective manner?

2.1 Identify elastic and scalable compute solutions for a workload

1 / 1 point

 

Use a larger instance size in the Auto Scaling groups of the web layer and the processing layer.

 

Use Spot Instances for the Auto Scaling groups of the web layer and the processing layer.

 

Use an Amazon Simple Queue Service (Amazon SQS) standard queue between the web layer and the processing layer. Use a custom queue metric to scale the Auto Scaling group in the processing layer.

 

Use AWS Lambda functions instead of EC2 instances and Auto Scaling groups. Increase the service quota so that sufficient concurrent functions can run at the same time.

 

Correct answer: 3

 

Correct

Correct. The Auto Scaling group can scale in response to changes in system load in an SQS queue. Even if the Auto Scaling group is at its maximum capacity, jobs will be saved in the queue and they will be processed when compute resources become available. To learn more, see: Scaling based on Amazon SQS
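The linked pattern scales the processing layer on a backlog-per-instance custom metric; a rough sketch of the metric publisher (queue URL, group name, and namespace are placeholders) that a target tracking policy could then use:

```python
import boto3

sqs = boto3.client("sqs")
autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/processing-jobs"  # placeholder

# Approximate backlog per instance = queue depth / running workers.
depth = int(sqs.get_queue_attributes(
    QueueUrl=queue_url,
    AttributeNames=["ApproximateNumberOfMessages"],
)["Attributes"]["ApproximateNumberOfMessages"])

group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=["processing-layer"])["AutoScalingGroups"][0]
running = max(1, len(group["Instances"]))

cloudwatch.put_metric_data(
    Namespace="Custom/ProcessingLayer",
    MetricData=[{"MetricName": "BacklogPerInstance",
                 "Value": depth / running,
                 "Unit": "Count"}],
)
```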

 

 

17.

Question 17

A company is developing an application that runs on Amazon EC2 instances in a private subnet. The EC2 instances use a NAT gateway to access the internet. A solutions architect must provide a secure option so that developers can log in to the instances. Which solution meets these requirements MOST cost-effectively?

4.3 Design cost-optimized network architectures

0 / 1 point

 

Configure AWS Systems Manager Session Manager for the EC2 instances to enable login. → Is this the answer?

 

Configure a bastion host in a public subnet to log in to the EC2 instances in a private subnet.

 

Use the existing NAT gateway to log in to the EC2 instances in a private subnet. → My second-attempt answer.
A NAT gateway is a Network Address Translation (NAT) service. You can use a NAT gateway so that instances in a private subnet can connect to services outside your VPC but external services cannot initiate a connection with those instances.

 

Configure AWS Site-to-Site VPN to log in directly to the EC2 instances.

Incorrect

Incorrect. Bastion hosts solve the functional requirement, but they increase costs because one or more instances would be required. To learn more, see: AWS Quick Starts - Linux Bastion Hosts on AWS

Incorrect

Incorrect. You cannot use NAT gateways to log in to EC2 instances because NAT gateways are gateways that handle only outbound traffic. To learn more, see: NAT gateways

On the second attempt, option 3 was wrong.

Correct answer: 1
Correct

Correct. Session Manager provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys. There is no additional charge for accessing EC2 instances by using Session Manager. To learn more about Session Manager, see: AWS Systems Manager Session Manager To learn more about Session Manager pricing, see: AWS Systems Manager pricing

 

 

18.

Question 18

A company is using an Amazon S3 bucket to store archived data for audits. The company needs long-term storage for the data. The data is rarely accessed and must be available for retrieval the next business day. After a quarterly review, the company wants to reduce the storage cost for the S3 bucket. A solutions architect must recommend the most cost-effective solution to store the archived data. Which solution will meet these requirements?

4.1 Identify cost-effective storage solutions

1 / 1 point

 

Store the data on an Amazon EC2 instance that uses Amazon Elastic Block Store (Amazon EBS).

 

Use an S3 Lifecycle configuration rule to move the data to S3 Standard-Infrequent Access (S3 Standard-IA).

 

Store the data in S3 Glacier.

 

Store the data in another S3 bucket in a different AWS Region.

 

Correct answer: 3

Correct

Correct. Out of these options, S3 Glacier is the most cost-effective solution. S3 Glacier is a good fit for archival data that does not need to be frequently accessed or modified. For more information about S3 Glacier, see: What Is S3 Glacier? To learn more about retrieval options for S3 Glacier, see: Retrieving S3 Glacier Archives

 

 

19.

Question 19

A solutions architect must create a disaster recovery (DR) solution for a company's business-critical applications. The DR site must reside in a different AWS Region than the primary site. The solution requires a recovery point objective (RPO) in seconds and a recovery time objective (RTO) in minutes. The solution also requires the deployment of a completely functional, but scaled-down version of the applications. Which DR strategy will meet these requirements?

1.2 Design highly available and/or fault-tolerant architectures

0 / 1 point

 

Multi-site active-active

 

Backup and restore

 

Pilot light

 

Warm standby → This looks like the answer.

 

Incorrect

Incorrect. Multi-site active-active has an RPO and an RTO in real time and is considered a hot standby. Though this strategy will meet the RPO and RTO requirements, it is not a scaled down version of the applications (a stated requirement), and it will be more expensive than other options. To learn more about various DR strategies, see: Plan for Disaster Recovery (DR) - Use defined recovery strategies to meet the recovery objectives

 

Correct answer: 4 (got it right on the second attempt)
Correct

Correct. With warm standby (fully working at low capacity), all components run at a low capacity. The RPO is in seconds, and the RTO is in minutes. To learn more about various DR strategies, see: Plan for Disaster Recovery (DR) - Use defined recovery strategies to meet the recovery objectives

 

 

20.

Question 20

A financial services company is migrating its multi-tier web application to AWS. The application architecture consists of a fleet of web servers, application servers, and an Oracle database. The company must have full control over the database's underlying operating system, and the database must be highly available. Which approach should a solutions architect use for the database tier to meet these requirements?

1.2 Design highly available and/or fault-tolerant architectures

1 / 1 point

 

Migrate the database to an Amazon RDS for Oracle DB Single-AZ DB instance.

 

Migrate the database to an Amazon RDS for Oracle Multi-AZ DB instance.

 

Migrate to Amazon EC2 instances in two Availability Zones. Install Oracle Database and configure the instances to operate as a cluster.

 

Migrate to Amazon EC2 instances in a single Availability Zone. Install Oracle Database and configure the instances to operate as a cluster.

 

Correct answer: 3

 

Correct

Correct. This solution provides the company with full control of the database operating system. The solution also provides high availability. To learn more about when Amazon EC2 is a good option, see: Amazon EC2 for Oracle

 

 

21.

Question 21

A hospital client is migrating from another cloud provider to AWS and is looking for advice on modernizing as they migrate. They have containerized applications that run on tablets. During spikes caused by increases in patient visits, the communications from the applications to the central database occasionally fail. As a result, the client currently has the applications try to write to the central database once, and if that write fails, it writes to a dedicated application PostgreSQL database run by the hospital IT team on premises. Each of those PostgreSQL databases then sends batch information on to the central database. The client is asking for recommendations for migrating or refactoring the database write process to decrease operational overhead. What should the solutions architect recommend? (Select TWO.)

4.2 Identify cost-effective compute and database services

1 / 1 point

 

Migrate the containerized applications to AWS Fargate.

 

Migrate the local databases to Aurora Serverless for PostgreSQL.

Correct

Correct. PostgreSQL has been turned into a kind of messaging service (holding all of the data until the batch job runs), and that is better handled by a queuing service. However, moving to Aurora Serverless will still decrease overhead for running the database, and it is a valid answer. To learn more, see: Amazon Aurora Serverless

 

Migrate the PostgreSQL databases to an RDS instance with a read replica that replaces each of the local databases.

 

Refactor the applications to use Amazon Simple Queue Service and eliminate the local PostgreSQL databases.

Correct

Correct. The client can decouple the messaging aspect of the application and remove the databases (which are effectively a workaround messaging service). To learn more about, see: How Amazon SQS works

 

Refactor the central database to add an Amazon ElastiCache lazy loading cache in front of the database.

 

Correct answers: 2 and 4

 

 

22.

Question 22

A large international company has a management account in AWS Organizations, and over 50 individual accounts for each country they operate in. Each of the country accounts has at least four VPCs set up for functional divisions. There is a high amount of trust across the accounts, and communication among all of the VPCs should be allowed. Each of the individual VPCs throughout the entire global organization will need to access an account and VPC that provide shared services to all the other accounts. How can the member accounts access the shared services VPC with the LEAST operational overhead?

2.3 Select high-performing networking solutions for a workload

1 / 1 point

 

Create an Application Load Balancer, with a target of the private IP address of the shared services VPC. Add a Certification Authority Authorization (CAA) record for the Application Load Balancer to Amazon Route 53. Point all requests for shared services in the routing tables of the VPCs to the CAA record.

 

Create a peering connection between each of the VPCs and the shared services VPC.

 

Create a Network Load Balancer across the Availability Zones in the shared services VPC. Create service consumer roles in IAM, and set endpoint connection acceptance to automatically accept. Create consumer endpoints in each division VPC and point to the Network Load Balancer.

 

Create a VPN connection between each of the VPCs and the shared service VPC.

 

Correct answer: 3

 

Correct

Correct. This solution provides the general flow of how an AWS PrivateLink connection is established. To learn more, see: Interface VPC endpoints (AWS PrivateLink)

 

 

23.

Question 23

A SysOps administrator is looking into a way to automate the deployment of new SSL/TLS certificates to their web servers, and a centralized way to track and manage the deployed certificates. Which AWS service can the administrator use to fulfill the above-mentioned needs?

3.2 Design secure application tiers

1 / 1 point

 

AWS Key Management Service

 

AWS Certificate Manager

 

Configure AWS Systems Manager Run Command

 

AWS Systems Manager Parameter Store

 

Correct answer: 2

 

Correct

Correct. AWS Certificate Manager (ACM) is a service that you can use to provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal, connected resources. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the internet, in addition to resources on private networks. ACM reduces the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates. To learn more, see: AWS Certificate Manager

 

 

24.

Question 24

A client has created a website (www.example.com), with an Application Load Balancer in a public subnet. The load balancer targets an application hosted on EC2 instances in private subnets, which rely on an Amazon Aurora PostgreSQL-Compatible Edition DB instance in separate private subnets. When testing the website, static content from the EC2 instance is displayed, but any content driven by database queries fails to load. What should the administrator check?

1.1 Design a multi-tier architecture solution

0 / 1 point

 

Check the Amazon Route 53 CNAME record to ensure that www.example.com points to the top-level domain (example.com).

 

Check the network access control list (network ACL) of the application subnets for an outbound allow statement.
→ My second-attempt answer; wrong.

 

Check that the route table for the database subnets includes a default route to the internet gateway for the VPC.
→ My first-attempt answer; wrong.

 

Check if the security group of the database subnet allows inbound traffic from the EC2 subnets. → Is this the answer?

Incorrect

Incorrect. The database should be interacting with the EC2 subnet, which should return information to the Application Load Balancer. Providing access to the internet gateway could make the database subnet public instead of private. To learn more, see: Internet gateways

Incorrect

Incorrect. The EC2 instances are able to return information to the Application Load Balancer and out to the browser, so the network ACL is not blocking anything at the VPC level. To learn more, see: Security Groups and Network Access Control Lists (Network ACLs) (BP5)

On the second attempt, option 2 was wrong.

Correct answer: 4
Correct. The database security group is likely not configured for inbound traffic from the EC2 layer. To learn more, see: Security Groups and Network Access Control Lists (Network ACLs) (BP5)

 

 

25.

Question 25

A solutions architect has been tasked with designing a three-tier application for deployment in AWS. There will be a web tier as the frontend, a backend application tier for data processing, and a database that will be hosted on Amazon RDS. The application frontend will be distributed to end users by CloudFront. Following best practices, it is decided that there should not be any point-to-point dependencies between the different layers of the infrastructure. How many Elastic Load Balancing load balancers should the architect deploy in the architecture so that this application's design follows best practices?

1.1 Design a multi-tier architecture solution

0 / 1 point

 

Zero. Use the load balancer that is automatically enabled when CloudFront is deployed.

 

One load balancer. This load balancer would be between the web tier and the application tier.

 

Two load balancers. One public load balancer would direct traffic to the web tier, and one private load balancer would direct traffic to the application tier. → Is this the answer?

 

Three load balancers. One public load balancer would direct traffic to the web tier. One private load balancer would direct traffic to the application tier. Another private load balancer would direct traffic to the Amazon RDS database.

Incorrect

Incorrect. Though deploying one load balancer is better than deploying none, the application might experience reliability issues between the tiers that do not have a load balancer in place. To learn more about best practices for deploying a web hosting environment, see: An AWS Cloud architecture for web hosting

 

Correct answer: 3 (got it right on the second attempt)
Correct

Correct. One load balancer will be deployed between CloudFront and the web tier. Another load balancer would be deployed between the web tier and the application tier. To learn more about best practices for deploying a web hosting environment, see: An AWS Cloud architecture for web hosting

 

 

26.

Question 26

The CIO of a company is concerned about the security of the account root user of their AWS account. How can the CIO ensure that the AWS account follows the best practices for logging in securely? (Select TWO.)

3.1 Design secure access to AWS resources

1 / 1 point

 

Enforce the use of an access key ID and secret access key for the account root user logins.

 

Enforce the use of MFA for the account root user logins.

Correct

Correct. For increased security, we recommend that you configure multi-factor authentication (MFA) to help protect your AWS resources. You can enable MFA for IAM users or the AWS account root user. When you enable MFA for the root user, it affects only the root user credentials. IAM users in the account are distinct identities with their own credentials, and each identity has its own MFA configuration. To learn more about using MFA for accounts in AWS Organizations, see: Best practices for member accounts To learn more about enabling MFA for the account root user, see: Using multi-factor authentication (MFA) in AWS

 

Enforce the account root user to assume a role to access the root user's own resources.

 

Enforce the use of complex passwords for member account root user logins.

Correct

Correct. The security of your account root user depends on the strength of its password. We recommend that you use a password that is long, complex, and not used anywhere else. To learn more about using complex passwords for accounts in AWS Organizations, see: Best practices for member accounts

 

Enforce the deletion of the AWS account so that it cannot be used.

 

Correct answers: 2 and 4

 

 

27.

Question 27

A Solutions Architect has been tasked with creating a data store location that will be able to handle different file formats of unknown sizes. It is required that this data be highly available and protected from being accidentally deleted. What solution meets the requirements and is the MOST cost-effective?

3.3 Select appropriate data security options

1 / 1 point

 

Deploy an Amazon S3 bucket and enable Cross-Region Replication.

 

Deploy an Amazon DynamoDB table and enable Global Tables.

 

Deploy an Amazon S3 bucket and enable Object Versioning.

 

Deploy a database using Amazon RDS and configure a Multi-AZ deployment for that database.

 

Correct answer: 3

 

Correct

Correct. Versioning-enabled buckets can help you recover objects from accidental deletion or overwrite. For example, if you delete an object, Amazon S3 inserts a delete marker instead of removing the object permanently. The delete marker becomes the current object version. If you overwrite an object, it results in a new object version in the bucket. A user can always restore the previous version. To learn more about object versioning, see: Using versioning in S3 buckets
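Turning on versioning is a one-call change; a minimal boto3 sketch with a placeholder bucket name:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="my-data-store",  # placeholder
    VersioningConfiguration={"Status": "Enabled"},
)
```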

 

 

28.

Question 28

An organization is planning to migrate from an on-premises data center to an AWS environment that spans multiple Availability Zones. A migration engineer has been tasked to plan how to transfer the home directories and other shared network attached storage from the data center to AWS. The migration design should support connections from multiple Amazon EC2 instances running the Linux operating system to this common shared storage platform. What storage option best fits their design?

1.4 Choose appropriate resilient storage

1 / 1 point

 

Transfer the files to Amazon S3 and access that data from the EC2 instances.

 

Transfer the files to the EC2 Instance Store attached to the EC2 instances.

 

Transfer the files to Amazon EFS and mount that file system to the EC2 instances.

 

Transfer the files to one EBS volume and mount that volume to the EC2 instances.

 

Correct answer: 3

 

Correct

Correct. Amazon EFS is well suited to support a broad spectrum of use cases from home directories to business-critical applications. Amazon EFS is designed to provide massively parallel shared access to thousands of EC2 instances. To learn more, see: Amazon Elastic File System

 

 

29.

Question 29

A company is designing a human genome application using multiple Amazon EC2 Linux instances. The high performance computing (HPC) application requires low latency and high performance network communication between the instances. Which solution provides the LOWEST latency between the instances?

1.1 Design a multi-tier architecture solution

0 / 1 point

 

Launch the EC2 instances in a cluster placement group. → Is this the answer?

 

Launch the EC2 instances in a spread placement group.

 

Launch the EC2 instances in an Auto Scaling group spanning multiple Regions.

 

Launch the EC2 instances in an Auto Scaling group spanning multiple Availability Zones within a Region.

 

Incorrect

Incorrect. Because a HPC platform would require packing instances close together, instances that span Availability Zones would not provide the lowest network latency. To learn more, see: What is Amazon EC2 Auto Scaling?

 

Correct answer: 1 (got it right on the second attempt)
Correct

Correct. In an EC2 cluster placement group, instances are physically close together inside an Availability Zone. With this strategy, workloads can achieve the low-latency network performance that is needed for tightly coupled, node-to-node communication that is typical of HPC applications. To learn more, see: Placement groups

 

 

30.

Question 30

A company has a web application in which customers can log in and read near-real-time status updates about their orders. The company hosts the application on Amazon EC2 instances and is expanding the application from the eu-west-1 Region into the us-east-1 Region. The application relies on an Amazon RDS for MySQL database. The company already has provisioned the necessary EC2 instances in the new Region. The company needs to deploy the application in us-east-1 with the least possible change to the application. The company also needs fast, local database queries in both Regions. Which modification of the database will meet these requirements?

2.4 Choose high-performing database solutions for a workload.

1 / 1 point

 

Migrate the RDS database to an Amazon Aurora global database. Add a secondary cluster in us-east-1.

 

Migrate the RDS database to an Amazon Aurora Serverless database. Configure automatic scaling in us-east-1.

 

Migrate the RDS database to an Amazon DynamoDB table. Create global tables for us-east-1.

 

Place an accelerator from AWS Global Accelerator in front of the RDS database to reduce the network latency from us-east-1.

 

Correct answer: 1

 

Correct

Correct. This solution meets the requirements, and is designed for a replica latency of approximately 1 second. By using the global database, users receive a low-read latency, with writes occurring on the primary database cluster in eu-west-1. The current application can continue to use existing code that points to the local Aurora instance. To learn more, see: Using Amazon Aurora global databases

 

 

31.

Question 31

A company is building a distributed application, which will send sensor IoT data-- including weather conditions and wind speed from wind turbines--to the AWS Cloud for further processing. Because the nature of the data is spiky, the application needs to be able to scale. It is important to store the streaming data in a key-value database and then send it over to a centralized data lake, where it can be transformed, analyzed, and combined with diverse organizational datasets to derive meaningful insights and make predictions. Which combination of solutions would accomplish the business need with minimal operational overhead? (Select TWO.)

2.4 Choose high-performing database solutions for a workload.

0 / 1 point

 

Configure Amazon Kinesis to deliver streaming data to an Amazon S3 data lake. → Is this the answer?
Correct

Correct. Kinesis can send streaming data to an Amazon S3 data lake. To learn more, see: Build a data lake using Amazon Kinesis Data Streams for Amazon DynamoDB and Apache Hudi

 

Use Amazon DocumentDB to store IoT sensor data.

 

Write AWS Lambda functions to deliver streaming data to Amazon S3.

 

Use Amazon DynamoDB to store the IoT sensor data, and enable DynamoDB Streams.

Correct

Correct. DynamoDB Streams can be used to start Lambda functions. Lambda could then be used to send an Amazon SNS notification, or take corrective measures if the threshold is breached. To learn more about DynamoDB Streams, see: Change Data Capture for DynamoDB Streams To learn more about use cases for DynamoDB Streams, see: DynamoDB Streams Use Cases and Design Patterns

 

Use Amazon Kinesis to deliver streaming data to Amazon Redshift, and enable Amazon Redshift Spectrum.

This should not be selected

Incorrect. Amazon Kinesis Data Firehose can deliver streaming data to Amazon Redshift. However, S3 is better choice for a data lake where data can be transformed, analyzed, and combined with diverse organizational datasets to derive meaningful insights and make predictions.

 

Correct answers: 1 and 4 (got them right on the second attempt)

 


I started competing in the AWS DeepRacer Virtual League in June.

My first race was Virtual Race #2.

In that race I recorded 20.695 seconds and finished 104th.

In Virtual Race #5, which ended today, I recorded 9.976 seconds and finished 39th.

My lap time has improved steadily, and I reached my target of 9.976 seconds.

My ranking, on the other hand, has barely moved.

It seems every participant is improving at about the same pace.

The top times have been in the 9-second range from the start, and since the third race they have been in the 8-second range.

Since the fourth race, more than 1,300 people have been participating.

For the sixth race, Toronto Turnpike, Amazon raised the first-place prize to 1,300 dollars.

I'm curious whether even more people will enter this time.

To take first place it looks like I would need to break into the 8-second range... which won't be easy.

While competing in races #2 through #5, I gradually changed the way I trained my autonomous driving model.

As a result, I was able to improve my time from the 20-second range to the 9-second range.

Below I'll reconstruct from memory how I trained the AI model I entered in each virtual race.

I hope it helps anyone who is competing in, or planning to compete in, an AWS DeepRacer league.

 

1. Virtual Race #2 Kumo Torakku

For my first race, I trained the model on several tracks.

I started training on the simplest straight track, then moved through the Oval Track, the Bowtie Track, and the London Loop track, and finally trained on the Kumo Torakku track, the one used in the race.

I covered how I set the action space and reward function for this race in an earlier post, so please refer to it:

https://coronasdk.tistory.com/1007?category=816561

 


For this race I pretty much followed the textbook approach.

I started training on the simplest track and gradually moved the model to more difficult tracks.

I tried to cut lap time by adjusting the reward function.

As noted above, my time was 20.695 seconds, good for 104th place.

At that point all I wanted was to break out of the 20-second range.

Others were posting times in the low 10-second or even 9-second range, while my goal was simply to get under 20 seconds.

It wasn't going well, and I was close to despair.

That 20-second barrier fell easily once I entered the second race.

 

 

2. Virtual Race #3 Empire City Circuit

In July, while this race was running, an offline race was held in New York.

I got to upload my trained model to a physical car and run it on a real track.

I live in Florida, so it wasn't close, but the New York event was the nearest one, so I took two days off and went.

You can read my impressions of that event in the post below:

https://coronasdk.tistory.com/1010?category=816561

 


After the offline race, I can't say exactly why, but my motivation and enthusiasm seemed to double.

I could feel the differences between the virtual and the real offline races, and watching the top racers up close and exchanging a few greetings with them probably had an effect as well.

After coming back from that event, I trained only on the race's own track.

Ideally you would train across several tracks so the model can drive properly on any of them...

but that seemed like it would take far too much time.

You pay for every hour of training, so I decided I needed a more efficient approach.

So I kept training only on the Empire City track.

The result was 13.505 seconds and 39th place.

I sailed past the 20-second wall I had wanted so badly to break in the previous race.

You can find the post I wrote after this race here:

https://coronasdk.tistory.com/1011?category=816561

 


 

3. Virtual Race #4 Shanghai Sudu

In this race, I think the key was training after pushing the action space values to their maximums.

I also touched the hyperparameters for the first time.

Not because I knew exactly what I was doing; someone (an AWS employee, perhaps?) answering a question I had posted on the forum mentioned that they had adjusted a few hyperparameters, so I simply followed along.

You can read the detailed post about this race here:

https://coronasdk.tistory.com/1012?category=816561

 


In this fourth race, my third as a participant, I recorded 10.133 seconds and finished 35th.

Now the goal became breaking the 10-second barrier.

Could I post a 9-second time in the next race?

 

 

4. Virtual Race #5 Cumulo Carrera

In the Cumulo Carrera race I worked very hard to get into the 9-second range.

In the end, on the final day of the race, I managed 9.976 seconds.

Help from other people played a big part in reaching that goal.

People who answered my questions on the forum, and the community at deepracer-community.slack.com on Slack, all helped.

In particular, nalbam and kim wooglae, whom I met in the #meetup-seoul channel, were a huge help. (Thank you!)

 

For this race, besides using the features available in the console, I also accessed the metadata file directly and adjusted the action space values when training or submitting the model.

There are two ways to change the action space values.

The first makes the change take effect when you submit the model.

Here is how to do it:

1. Go to the model's Training section.

2. Under Resources, click the link below the Simulation job.

3. Click the Simulation application tab and copy the SAGEMAKER_SHARED_S3_BUCKET and SAGEMAKER_SHARED_S3_PREFIX values.

Up to this point you are only gathering the information needed to locate the action space metadata file.

Now it's time to go into S3 and work with the file.

4. Open the S3 service.

5. In the bucket list, search for the SAGEMAKER_SHARED_S3_BUCKET value and click that bucket.

6. A list of models appears; search for the SAGEMAKER_SHARED_S3_PREFIX value and click your model.

7. Open the model folder.

8. Download the model_metadata.json file, edit it, and upload it again.

With this in place, the action space values you edited are applied when you submit the model.

 

Incidentally, if you download all the files in this model folder and upload them over the model folder in another account, you can copy a trained model from account A to account B.

The second method is to change the action space values and then clone the model so that training continues with the changed values.

Steps 1 through 5 are the same as above.

6. Open the model-metadata folder.

7. Open the folder for your model.

8. Download the model_metadata.json file, edit it, and upload it again.

After this, when you clone the model, training proceeds with the action space values you edited. A scripted version of this download-edit-upload loop is sketched below.
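The download, edit, and upload steps above can also be scripted. Below is a rough boto3 sketch; the bucket and prefix are the values copied from the simulation job, and the action_space keys (steering_angle, speed) are my assumption about the model_metadata.json layout, so check your own file before using it:

```python
import json
import boto3

s3 = boto3.client("s3")

bucket = "<SAGEMAKER_SHARED_S3_BUCKET>"                        # copied from the simulation job
key = "<SAGEMAKER_SHARED_S3_PREFIX>/model/model_metadata.json"

# 1. Download the metadata file.
s3.download_file(bucket, key, "model_metadata.json")

with open("model_metadata.json") as f:
    metadata = json.load(f)

# 2. Adjust the action space (keys are assumed; verify against your own file).
for action in metadata.get("action_space", []):
    action["speed"] = round(action["speed"] * 1.2, 2)  # hypothetical tweak

with open("model_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)

# 3. Upload the edited file back to the same location.
s3.upload_file("model_metadata.json", bucket, key)
```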

 

 

That's a brief summary of how I got down to a 9-second lap time in the AWS DeepRacer virtual races.

I hope it helps anyone interested in this.

And if you have tried approaches other than the ones I used, I would be very grateful if you shared your experience.

 

 

 

 

 


 

 

 

 

On Thursday, July 11, I went to the Amazon New York Summit in New York.

I went to compete in the AWS DeepRacer League held there.

I took Thursday and Friday off for the trip.

I flew from Orlando, Florida to Newark airport in New Jersey on Wednesday night and flew back on Thursday night.

Completely exhausted...

Looking back, though, I'm glad I went.

I learned a lot and took away a lot...

The result: 14.99 seconds, 19th place.

 

 

I had assumed we would race on the Empire City track, but the offline race used the re:Invent 2018 track.

All the models I had trained were tested only on the Empire City track...

Even so, I'm satisfied with 14.99 seconds and 19th place.

I got to experience the difference between driving online and offline, and I learned how offline races are run.

Arriving at 42nd Street before dawn, the street was deserted, yet the busy billboards kept up their non-stop dazzle.

I had been to 42nd Street when I lived in New Jersey more than ten years ago; there seem to be several times more billboards now than back then.

 

 

This photo shows the re:Invent 2018 track where the race was held.

 

 

 

 

I also met Ryan, the head of A Cloud Guru, and asked him for a photo.

I had taken Ryan's online AWS courses myself.

 

 

The swag I collected walking around the Summit...

 

 

The economy really does seem to have improved.

When I lived here ten years ago I went to a few other expos...

and back then there was hardly any swag.

It was right after the 2008 financial crisis... the most you would get was a ballpoint pen...

This time there were T-shirts, socks, and more... I came away with a lot of nice, varied giveaways.

It was a tiring trip, but I learned a lot, took away a lot, and met good people... all in all, it was worth it.

 

 


 

 

Using Jupyter Notebook for analysing DeepRacer's logs

 

 

Source: "Using Jupyter Notebook for analysing DeepRacer's logs" - Code Like A Mother (codelikeamother.uk)

 

 

 

Training a model for DeepRacer involves getting a lot of data and then while you can ignore it, you can also analyze it and use for your own benefit.

 


 

 

 

 

 

 

You can spend an hour watching the stream as your car trains and observing its behaviour (and I've done it myself before), but you might not have the time to do this. Also, you might blink, you know? Finally, if your car is fast, like really fast, it could do all 5 evaluation laps in one minute. First you wait 4-7 minutes for the evaluation to start, then you see it take 4-7 minutes to stop. Video? Sorry, you've missed it.

 


 

 

Yeah, I'm stretching this a bit too far. Having data you can plot, compile, transform and replay over and over again will always be a handy solution. That's why I love what guys at Amazon have shared in the DeepRacer workshop repository (link takes you to GitHub).

 

 

네, 제가 좀 과장해서 말하고 있긴 합니다. 데이터를 가지고 몇 번이고 플롯하고, 가공하고, 변환하고, 재생할 수 있다는 것은 언제나 유용합니다. 그래서 저는 아마존 사람들이 DeepRacer 워크샵 저장소(GitHub로 연결되는 링크)에 공유해 준 것들을 좋아합니다.

 

 

 

Log analysis

 

While we're here, I hope you'll like this post. Once you're done reading, I'd like to recommend reading about what I have come up with based on this tool in "Analyzing the AWS DeepRacer logs my way" - it might help you and give a couple ideas for your own modifications.

 

이 글이 당신에게 도움이 되길 바랍니다. 일단 읽고 나면, "AWS DeepRacer 로그 분석하기"에서이 도구를 기반으로 작성한 내용을 읽어 보는 것을 추천합니다. 그 글을 읽으면 여러모로 도움이 될 수 있으며 자신 만의 modifications를 위한 몇 가지 아이디어를 얻을 수 있을 겁니다.

 

 

The tools provided include a couple functions to help working with the data, track data, a Jupyter notebook that leads you through the analysis and some sample data.

 

제공되는 도구에는 데이터 작업을 돕는 몇 가지 함수, 트랙 데이터, 분석 과정을 안내하는 Jupyter 노트북, 그리고 약간의 샘플 데이터가 포함되어 있습니다.

 

 

It lets you assemble aggregated information about your car's performance, plot its behaviour on the track, plot reward values depending on the car's location during evaluation, plot the route during the evaluation (including the virtual race evaluation), analyse the behaviour depending on the visual input, detect which pieces of image matter to the car the most.

 

이 도구를 사용하면 다음과 같은 작업을 할 수 있습니다. '자동차의 퍼포먼스에 대한 정보 집계', '트랙에서의 행동에 대한 플롯', '평가 기간 동안 자동차의 위치에 따른 reward 값 플롯', '평가 기간 동안(virtual race evaluation 포함)의 경로 플롯', 'visual input에 근거한 행동 분석', '이미지의 어떤 부분이 자동차에게 가장 중요한지 감지' 등등.

 

I may have lost my skills in statistics and might not be able to predict future trends based on the time series anymore (I still remember that the classic linear regression model is calculated with ((X'X)^(-1))X'y, I still have dreams of econometrics lectures with Professor Osiewalski), but I can appreciate good statistics when I see them. The guys at Amazon have provided an excellent tool that I have used before to present some images to you. I didn't however know what the ipynb file provided with the tools was. I mean, I managed to open it (GitHub comes with a viewer), but it wasn't until the AWS Summit that I actually installed Jupyter Notebook and understood what power it gives me. Nice!

 

 

통계에 대한 내 감각은 이미 무뎌졌을지도 모르고, 더 이상 time series에 기초해 미래 추세를 예측하지 못할지도 모릅니다 (고전적인 linear regression model이 ((X'X)^(-1))X'y 로 계산된다는 것은 아직 기억하고 있고, 지금도 Osiewalski 교수의 econometrics 강의가 꿈에 나옵니다). 그래도 좋은 통계를 보면 그 가치는 알아봅니다. 아마존 사람들은 훌륭한 도구를 제공했고, 저도 예전에 여러분에게 이미지를 보여드리기 위해 그것을 사용한 적이 있습니다. 그런데 그 도구와 함께 제공된 ipynb 파일이 무엇인지는 몰랐습니다. 열어 보기는 했지만 (GitHub에는 뷰어가 포함되어 있음), 실제로 Jupyter Notebook을 설치하고 그것이 얼마나 강력한지 이해하게 된 것은 AWS Summit에 가서였습니다. Nice!

 

 

Summit

 

 

Jupyter Notebook is a web application that provides an editor for files containing formatted text, code and its latest results. It can be either hosted or run locally. AWS provides a solution to view notebooks within the SageMaker, but if you tend to leave stuff lying around like me, I wouldn't recommend this solution. The pricing of it matches its usefulness and I tell you, this is a really, really useful tool. I'm exaggerating here, but it does add up if you leave the EC2 running.

 

 

Jupyter Notebook은 형식이 지정된 텍스트, 코드 및 그 최신 실행 결과를 포함하는 파일에 대한 편집기를 제공하는 웹 애플리케이션입니다. 호스팅해서 쓸 수도 있고 로컬에서 실행할 수도 있습니다. AWS는 SageMaker 안에서 노트북을 볼 수 있는 솔루션을 제공하지만, 저처럼 리소스를 켜 둔 채 깜빡하는 경향이 있다면 이 솔루션을 권하지는 않습니다. 가격은 그 유용함에 걸맞게 나가는데, 정말 정말 유용한 툴이긴 합니다. 조금 과장하는 면이 있지만, EC2를 계속 돌아가게 내버려두면 비용이 꽤 쌓입니다.

 

 

 

The code can be in one of many languages, python included. I think more interesting stuff will come out of actually using the notebook.

 

코드는 파이썬을 포함해 다른 많은 언어로 작성될 수 있다. 실제로 노트북을 사용하면 훨씬 더 흥미로운 사실들을 만나보게 될 것이다.

 

 

Installation

 

To install it you need to be familiar with either Python or Anaconda. You will find the installation instructions on their website. I'll leave you with this, I am assuming that if you're here and still reading, you know how to install a Python interpreter and how to install modules.

 

 

설치하려면 Python 또는 Anaconda에 익숙해야합니다. 웹 사이트에서 설치 지침을 찾을 수 있습니다. 일단 파이썬 인터프리터를 설치하는 방법과 모듈을 설치하는 방법을 알고 있다고 가정하겠습니다.

 

 

 

Remember you can also use Docker if you're familiar with it. This document (takes you to Jupyter documentation) describes how to do that. I think the Tensorflow notebook docker image is the closest to what you need to run the log analysis notebook without installing everything around.

 

 

익숙하다면 당신은 또한 Docker를 사용할 수 있습니다. 이 문서 (Jupyter 문서로 이동)는 이를 수행하는 방법을 설명합니다. Tensorflow notebook docker image는 모든 것을 설치하지 않고도 로그 분석 노트북을 실행하는 데 가장 가까운 이미지라고 생각합니다.

 

 

 

Note: I am referring to instructions which in most cases contain details for Linux/Mac/Windows. I use Linux and so might miss the shortcomings of how other systems are described, but they do look well written. I am also assuming that you have some level of confidence working either with Python/pip or Anaconda/conda. In case of Python I use Python 3 and I recommend using it. It's time for Python 2.7 to go.

 

 

참고: 대부분의 경우 Linux/Mac/Windows에 대한 세부 정보가 포함된 지침을 기준으로 이야기하고 있습니다. 저는 리눅스를 사용하기 때문에 다른 운영체제에 대한 설명의 부족한 점을 놓칠 수도 있지만, 문서 자체는 잘 작성된 것으로 보입니다. 또한 여러분이 Python/pip 또는 Anaconda/conda를 어느 정도 다룰 줄 안다고 가정하고 있습니다. 파이썬의 경우 저는 파이썬 3을 사용하며, 그것을 권장합니다. 이제 파이썬 2.7은 보내줄 때가 됐습니다.

 

 

 

Project structure

 

 

In the log-analysis folder you will find a couple things: log-analysis 폴더에는 다음과 같은 것들이 있습니다.

  • intermediate_checkpoint - folder for data used in some of the analysis
  • logs - folder for the logs
  • simulation_episode - you'll be downloading images from the simulation to understand what actions the car is likely to take
  • tracks - folder for the tracks points
  • DeepRacer Log Analysis.ipynb - the notebook itself
  • cw_utils.py - utility methods for downloading of logs
  • log_analysis.py - utility methods for the analysis

 

Dependencies to run log-analysis

 

Before we continue with running the notebook itself, let's have a look at the required dependencies that you can install using pip:

 

노트북을 계속 실행하기 전에 pip를 사용하여 설치할 수있는 관련된 필수 요소들을 살펴 보겠습니다.

 

 

  • boto3 - python library for interacting with AWS
  • awscli - not really needed, but useful - I used it to run aws configure and set up default access to AWS. Once I'd done this, I didn't have to provide credentials in code. Click here for installation instructions, then click here for configuration instructions. Remember the DeepRacer region is us-east-1. Click here for instructions to set up an IAM user. The roles listed in the page linked there do not provide the permissions needed to get a list of streams in a log group; I've learned this one is part of the CloudWatchLogsFullAccess role or something like that. This is somewhat excessive, so you might just want to apply the DescribeLogStreams permission
  • numpy, pandas, matplotlib, shapely, sklearn, glob - plotting, listing, showing nice numbers
  • numpy, tensorflow, PIL, glob - analysis of actions probability (picture to action mapping)
  • cv2, numpy, tensorflow, glob - analysis of an image heatmap (what the car cares about when processing the picture)

 

 

Some of those will already be available in your Python/Anaconda bundle or venv environment. Others you should install yourself.

 

그 중 일부는 이미 Python / Anaconda 번들 또는 venv 환경에서 사용할 수 있습니다. 다른 것들은 직접 설치해야합니다.

 

 

I may write a bit more about some of them at some point.

 

언젠가 그중 일부에 대해서는 좀 더 자세히 쓸지도 모르겠습니다.

 

 

 

 

 

Running the notebook

 

 

We could have done that earlier, but I like being prepared.

 

이 작업은 더 일찍 해 둘 수도 있었지만, 저는 미리 준비해 두는 것을 좋아합니다.

 

 

 

To run the notebook, go to the log-analysis folder in a terminal and run:

 

notebook을 실행하려면 터미널에서 log-analysis 폴더로 가서 다음을 실행하세요.

 

 

 

jupyter notebook 'DeepRacer Log Analysis.ipynb'

 

 

A browser will open with a Jupyter notebook.

 

 

그러면 브라우저가 열리면서 주피터 노트북이 실행 될 겁니다.

 

 

 

 

Running the code

 

The editor will look more or less like that:  편집기는 아래와 같이 생겼을 겁니다.

 

 

 

The usual stuff: some text, a toolbar, a menu, some code.

When you get on the code section and press "Run" in the toolbar, the code executes, output (if available) gets printed out. That's pretty much how you go through the document: read, execute, analyse results of the code run.

 

 

코드 섹션에서 도구 모음의 "실행(Run)"을 누르면 코드가 실행되고, 출력이 있으면 그 아래에 표시됩니다. 문서를 읽고, 코드를 실행하고, 실행 결과를 분석하는 것이 이 문서를 진행하는 기본적인 방식입니다.

 

 

 

 

You can find a couple more hints about working with Jupyter notebooks in a short document about using notebooks.

 

Jupyter Notebook Viewer


nbviewer.jupyter.org

노트북 사용에 관한 간단한 문서에서 Jupyter 노트북으로 작업하는 것에 대한 몇 가지 힌트를 찾을 수 있습니다.

 

 

Analysis

 

 

When you start working with your notebook, be sure to execute the code blocks with imports at the top. They also include an instruction to display plotted images in the notebook.

 

notebook 작업을 시작할 때는 맨 위에 있는 import 코드 블록들을 반드시 먼저 실행해야 합니다. 여기에는 플롯된 이미지를 노트북 안에 표시하도록 하는 명령도 포함되어 있습니다.
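
A minimal sketch of what such an import cell typically looks like. The module names log_analysis and cw_utils come from the project structure above; the exact imports in the real notebook may differ slightly.

# typical first cell of the log analysis notebook (illustrative)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

import log_analysis   # helper methods shipped with the workshop repo
import cw_utils       # CloudWatch log download helpers

# display plotted images inside the notebook
%matplotlib inline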

 

 

The files currently available in the data folders are samples. You will be using your own and downloading them as you go through the notebook.

 

현재 데이터 폴더에서 사용할 수있는 파일은 샘플입니다. 노트북을 사용하면서 당신의 파일을 다운로드 할 겁니다. 

 

 

The notebook itself has quite a bit of helping code in it like the mentioned downloading of logs or loading the track info. I will not be covering it here.

 

 

노트북 자체에는 앞서 언급한 로그 다운로드나 트랙 정보 로드처럼 분석을 도와주는 코드가 꽤 들어 있습니다. 여기서는 그 부분을 따로 다루지 않겠습니다.

 

 

Plot rewards per Iteration

 

This analysis takes the rewards and calculates mean and standard deviation. It then displays those values per iteration. Also a reward per episode is presented.

 

이 분석은 보상을 취하고 평균 및 표준 편차를 계산합니다. 그런 다음 반복 당 값을 표시합니다. 에피소드 당 보상도 표시됩니다.
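
As a rough illustration of what this analysis boils down to, here is a minimal sketch assuming the notebook's helpers have already produced a pandas DataFrame with 'iteration', 'episode' and 'reward' columns (the column names are my assumption):

import pandas as pd
import matplotlib.pyplot as plt

def plot_rewards_per_iteration(df: pd.DataFrame):
    # mean and standard deviation of the reward for each iteration
    per_iteration = df.groupby('iteration')['reward'].agg(['mean', 'std'])
    # total reward collected in each episode
    per_episode = df.groupby('episode')['reward'].sum()

    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(10, 6))
    per_iteration['mean'].plot(ax=ax1, yerr=per_iteration['std'],
                               title='Mean reward per iteration (+/- std)')
    per_episode.plot(ax=ax2, title='Total reward per episode')
    plt.tight_layout()
    plt.show()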

 

 

 

 

Analyze the reward function

 

The next section uses the track data and training logs to display where the car goes and what reward it receives.

 

다음 섹션에서는 트랙 데이터와 트레이닝 로그를 사용하여 자동차가 가는 곳과 받는 보상을 표시합니다.

 

 

You can display all the points where the car had a reward function calculated:

 

자동차에 보상 기능이 계산 된 모든 지점을 표시 할 수 있습니다.
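
A minimal sketch of that kind of plot, again with assumed column names ('x', 'y', 'reward') and an optional track centre line loaded from one of the files in the tracks folder:

import matplotlib.pyplot as plt

def plot_reward_on_track(df, track_points=None):
    # one dot per step; the colour encodes the reward received at that point
    sc = plt.scatter(df['x'], df['y'], c=df['reward'], s=2, cmap='viridis')
    plt.colorbar(sc, label='reward')
    if track_points is not None:
        # track_points: Nx2 array with the track centre line (e.g. from tracks/*.npy)
        plt.plot(track_points[:, 0], track_points[:, 1], 'k--', linewidth=0.5)
    plt.axis('equal')
    plt.show()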

 

 

 

 

 

In this one above you can see how the car is all over the place. But then have a look at this one (this is a reward distribution for my AWS Summit London model):

 

 

위 그림을 보면 차가 여기 저기 많이 다녔다는 것을 알 수 있습니다. 그런데 이걸 한번 봐 보세요. (이것은 AWS Summit London 모델의 reward distribution입니다).

 

 

 

 

As you can guess, I trained my model to cut corners and to go straight on the straight line.

 

짐작할 수 있듯이, 나는 내 모델을 코너에서는 안쪽으로 돌고 직선도로에서는 똑바로 가도록 훈련했습니다. 

 

 

You can display a specific iteration:

 

이렇게 특정 iteration을 표시할 수 있습니다.

 

 

 

 

 

You can get top iterations and analyse the path taken:

 

top iterations 에서 통과한 길을 분석할 수도 있습니다.

 

 

 

 

Or just a particular episode:  특정 에피소드만을 볼 수도 있고

 

 

 

Or maybe a particular iteration:  또는 특정 iteration만을 볼 수도 있습니다. 

 

 

 

 

Actions breakdown

 

 

This function is pretty damn impressive, but applicable to the re:invent track only. I might spend some time and make something similar for the London Loop.

 

 

이 기능은 상당히 인상적이지만 re:invent 트랙에만 적용됩니다. 시간을 좀 들여서 London Loop용으로도 비슷한 것을 만들어 볼까 합니다.

 

 

 

The output of this function is a graph of decisions taken in different parts of the track. The track is broken down into sections like turns and stuff, then the car's decision process is evaluated and displayed on a histogram. This may help you spot undesired decisions and discourage the car from taking them going forward. Just bear in mind that some wrong actions have a rather low impact and therefore it might be not worth training away from them as you might overtrain.

 

이 기능의 출력은 트랙의 각 구간에서 내려진 결정들의 그래프입니다. 트랙은 커브 등 여러 section으로 나눠지고, 각 구간에서 자동차의 결정 과정이 평가되어 히스토그램으로 표시됩니다. 이것을 통해 원하지 않는 결정을 찾아내고 앞으로는 자동차가 그런 결정을 하지 않도록 유도할 수 있습니다. 다만 일부 잘못된 행동은 영향이 미미하므로, 과도하게 훈련(overtrain)할 위험을 감수하면서까지 그것을 없애려고 훈련할 가치는 없을 수도 있다는 점을 염두에 두세요.

 

 

 

 

 

 

Simulation Image Analysis

 

 

In here you will be loading trained models, loading screens from simulation and observing probability of taking a particular action.

 

여기에서는 훈련된 모델을 로드하고, 시뮬레이션에서 화면 이미지를 로드한 다음, 특정 action을 취할 probability를 관찰합니다.

 

 

First you need to download the intermediate checkpoints, then load the session model from the file. The final graph displays separation of probability of taking particular actions. If I understand properly, this can be used to determine how confident the model is about taking a specific action. The bigger difference from the best to second-best action, the better.

 

 

먼저 intermediate checkpoints를 다운로드 한 다음 파일에서 세션 모델을로드해야합니다. 마지막 그래프는 특정 동작을 취할 probability separation 를 표시합니다. 내가 제대로 이해한다면, 이것은 모델이 특정 행동을 취하는 것에 대한 confident 을 결정하는 데 사용될 수 있습니다. 최선책과 차선책 행동의 차이가 크면 클수록 좋습니다.

 

 

I haven't used it before. I guess it will be handy when I understand more of it.

 

나는 전에 이것을 사용하지 않았습니다. 이것에 대해 더 많은 것을 이해할수록 좀 더 편리하게 사용할 수 있을 겁니다.

 

 

 

Model CSV Analysis

 

 

I don't really get this one, sorry. I think it's just about downloading some metadata about the training and showing the distribution of rewards and the length of episodes (the longer the episodes, the more stable the model).

 

죄송하지만 이 부분은 제가 제대로 이해하지 못했습니다. 나는 훈련에 대한 메타 데이터를 다운로드하고 보상의 분배와 에피소드의 길이를 보여주는 것에 관한 것이라고 생각합니다 (에피소드가 길수록 모델이 더 안정적입니다).

 

 

I will have to learn to understand it better. The description says about downloading the model from DeepRacer Console, but the analysis is happening on some csv file only. Maybe it's part of the model archive?

 

이것을 더 잘 이해하는 법을 배워야 할 것 같습니다. DeepRacer Console에서 모델을 다운로드하는 것에 대한 설명이 있지만 분석은 일부 CSV 파일에서만 발생합니다. 어쩌면 모델 아카이브의 일부일까요?

 

 

Evaluation Run Analysis

 

This is specifically useful since you can look at your evaluations both in the console and in the virtual race.

 

이것은 콘솔과 가상 레이스에서 평가를 볼 수 있으므로 특히 유용합니다.

 


You can load logs from evaluation, then plot them on the track to see the path taken, distance covered, time, average throttle, velocity etc.

 

평가에서 로그를 로드 한 다음 트랙에 그려서 경로, 거리, 시간, 평균 스로틀, 속도 등을 확인할 수 있습니다.
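
A rough idea of the per-lap summary you can derive from such an evaluation log once it sits in a DataFrame; the column names ('episode', 'timestamp', 'progress', 'throttle') are assumptions and may differ in the actual notebook:

def summarise_evaluation(df):
    # one row per evaluation lap: lap time, how far the car got, average throttle
    summary = df.groupby('episode').agg(
        start=('timestamp', 'min'),
        end=('timestamp', 'max'),
        progress=('progress', 'max'),
        avg_throttle=('throttle', 'mean'),
    )
    summary['lap_time'] = summary['end'] - summary['start']
    return summary[['lap_time', 'progress', 'avg_throttle']]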


On the plotted images you can see what throttle decision the car has taken.

 

플롯 된 이미지에서 자동차가 취한 throttle 결정을 볼 수 있습니다.

 


Just being able to compare faster and slower results from the evaluation can be very useful in terms of making decisions on future training sessions.

 

보다 빠르고 느린 평가 결과를 비교할 수 있다면 향후 교육 세션에 대한 결정을 내리는 데 매우 유용 할 수 있습니다.

 

 

 

 

 

 

 

 

 

What is the model looking at

 

I haven't used this one yet and I treat it more like a helpful utility to understand what the model cares about. After loading a model and some images, it is possible to get a processed image with highlighted elements that are of value when making decisions. It looks like that:

 

 

나는 이것을 아직 사용해 보지 않았고, 모델이 무엇을 중요하게 보는지 이해하는 데 도움이 되는 유틸리티 정도로 생각하고 있습니다. 모델과 이미지 몇 장을 로드하면, 의사 결정에 중요한 역할을 하는 요소들이 강조 표시된 처리 이미지를 얻을 수 있습니다. 다음과 같은 모습입니다.

 

 

 

Example

 

 

Let's say I want to analyse one of my virtual race evaluations. I want to see the race information from when I managed to do the 23 seconds. I located the log stream starting with sim-ynk2kzw3q7lf, located in /aws/deepracer/leaderboard/SimulationJobs.

 

내 가상 레이스 평가 중 하나를 분석하고 싶다고 가정해 봅시다. 23초 기록을 냈을 때의 레이스 정보를 보고 싶습니다. /aws/deepracer/leaderboard/SimulationJobs 에서 sim-ynk2kzw3q7lf 로 시작하는 로그 스트림을 찾았습니다.
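
If you prefer to pull the raw events yourself instead of going through cw_utils, a minimal boto3 sketch for this particular example could look like this (log group and stream prefix are the ones mentioned above; us-east-1 is the DeepRacer region):

import boto3

logs = boto3.client('logs', region_name='us-east-1')

LOG_GROUP = '/aws/deepracer/leaderboard/SimulationJobs'
STREAM_PREFIX = 'sim-ynk2kzw3q7lf'

streams = logs.describe_log_streams(
    logGroupName=LOG_GROUP,
    logStreamNamePrefix=STREAM_PREFIX,
)['logStreams']

# write all events of the matching stream(s) into one local log file
with open('logs/deepracer-eval-' + STREAM_PREFIX + '.log', 'w') as f:
    for stream in streams:
        paginator = logs.get_paginator('filter_log_events')
        for page in paginator.paginate(logGroupName=LOG_GROUP,
                                       logStreamNames=[stream['logStreamName']]):
            for event in page['events']:
                f.write(event['message'] + '\n')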

 

 

 

Then load the track data: 그리고 트랙 데이터를 로드 합니다.

 

 

 

Then fetch the evaluation data and plot it (some small corrections to the code needed): 

 

그런 다음 평가 데이터를 가져 와서 플롯합니다 (코드에 약간의 수정이 필요함).

 

 

 

 

The result comes as a scrollable frame. It's quite annoying and can be expanded by a single click on the left margin:

 

결과는 스크롤 가능한 프레임으로 제공됩니다. 매우 성가 시며 왼쪽 여백을 한 번 클릭하여 확장 할 수 있습니다.

 

 

 

 

Alternatively you can disable it in code as described on Stack Overflow.

 

또는 스택 오버플로에서 설명한대로 코드에서 비활성화 할 수 있습니다.

 

 

From here you can clearly see I had a stable, but slow model. I made some advancements from that point, but I'll wait with sharing them till the London Loop virtual race is over.

 

여기를 보면 제 모델이 안정적이지만 느리다는 것을 분명히 알 수 있습니다. 그 시점 이후로 몇 가지 개선을 이루었지만, 그 내용의 공유는 London Loop 가상 레이스가 끝날 때까지 미루겠습니다.

 

 

Track data

 

프로젝트에는 몇 가지 샘플 트랙 데이터가 포함되어 있습니다. 제가 참가하고 있는 가상 레이스의 London Loop 트랙은 그중에 없어서 직접 준비했습니다. 제가 올린 GitHub Pull Request에서 가져올 수 있습니다. 곧 병합되기를 바라지만, 지금 당장은 제 워크샵 저장소 포크에서 London_Loop_track.npy 파일을 다운로드하시면 됩니다.

 

 

Bulk logs download

 

The cw_utils.py is missing a method to download all of the logs in a given group. It would be handy, so I wrote my own: https://github.com/aws-samples/aws-deepracer-workshops/pull/20.

 

Parameters:

 

 

  • pathprefix is beginning of a relative file path,
  • log_group is the log group in CloudWatch that you want to download the logs from. The log groups you will be interested in are:
    • /aws/robomaker/SimulationJobs - logs from training simulations and evaluations,
    • /aws/deepracer/leaderboard/SimulationJobs - logs from evaluations submitted to a virtual race,
  • not_older_than - date string to provide the lower time limit for the log event streams; if there is at least one log event newer than that, the stream will be downloaded; For today logs (19th of May) I set it to 2019-05-19; refer to dateutil documentation to learn about accepted formats,
  • older_than - upper limit date, pretty similar as not_older_than but the other way round; If you set it to 2019-05-19, the newest entries in accepted stream will be from 2019-05-18 23:59:59.999 at the latest.

 

Return value is a list of tuples containing:

 

  • log file path
  • simulation id
  • first log event timestamp
  • last log event timestamp

 

Entries are ordered by occurrence of the last timestamp event.
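
An illustrative call, assuming the new method ends up exposed as cw_utils.download_all_logs (check cw_utils.py in the repository for the final name and signature):

import cw_utils

downloaded = cw_utils.download_all_logs(
    'logs/deepracer-eval-',                       # pathprefix
    '/aws/deepracer/leaderboard/SimulationJobs',  # log_group
    not_older_than='2019-05-19',                  # streams must have events from this date or later
    older_than='2019-05-20',                      # ...and nothing newer than the 19th is accepted
)

for log_file, sim_id, first_event, last_event in downloaded:
    print(sim_id, log_file, first_event, last_event)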

 

 

If you are using a non-root account to access DeepRacer, you may need additional permissions to run the describe_log_streams method.

 

 

If you call the method with pathprefix value ooh/eeh/ooh/ah/aah/ting/tang/walla/walla/bing/bang/deepracer-eval- and log_group is /aws/deepracer/leaderboard/SimulationJobs, and there is a log_stream for simulation sim-l337h45h, the file created will be ooh/eeh/ooh/ah/aah/ting/tang/walla/walla/bing/bang/deepracer-eval-sim-l337h45h.log

 

 

That's all folks

 

 

I'm not pretending I know much about the notebooks. Two weeks ago I didn't understand what they were or how to use them. This one has proven to be extremely useful when analysing my model's performance, and not only mine.

Well done, you've made it this far! Once again, let me mention my modification of the tool: "Analyzing the AWS DeepRacer logs my way" - I have raised a Pull Request to AWS with this change but you can already enjoy it now from my fork.

 

저는 노트북에 대해 많이 아는 척하지 않겠습니다. 2주 전만 해도 이것이 무엇인지, 어떻게 사용하는지도 몰랐습니다. 이 노트북은 제 모델(그리고 다른 사람들의 모델)의 성능을 분석할 때 매우 유용하다는 것이 입증되었습니다.

여기까지 읽으셨다니 수고하셨습니다! 다시 한번 제가 만든 수정 버전을 언급하자면, "AWS DeepRacer 로그 분석하기"입니다. 이 변경 사항으로 AWS에 Pull Request를 올렸지만, 지금도 제 포크에서 바로 사용하실 수 있습니다.

 

 

 

Great thanks to Lyndon Leggate for spotting that I misused the logs api initially. Lyndon is currently in top 10 in London Loop as well, he started the discussion group on Slack that I mentioned in my earlier posts. You are most welcome to join it: click here.

 

 

처음에 제가 로그 API를 잘못 사용한 것을 발견해 준 Lyndon Leggate에게 감사드립니다. Lyndon은 현재 London Loop에서도 톱 10에 들어 있으며, 이전 글에서 언급했던 Slack 토론 그룹을 시작한 사람이기도 합니다. 여러분도 얼마든지 참여하실 수 있습니다: 여기를 클릭하세요.

 

 

 

I will be soon writing about the First AWS DeepRacer League Virtual Race called London Loop which I'm taking part in. It's different from the London Loop and much bigger in scale - almost 600 participants so far (and more to come, I'm sure), still two weeks left to compete, top lap of 12.304 seconds and fifty best entries are within a second of that. And I'm fifth at the moment :)

 

 

제가 참가하고 있는 첫 AWS DeepRacer League 가상 레이스인 London Loop에 대해 곧 글을 쓸 예정입니다. 규모가 훨씬 커서 지금까지 거의 600명이 참가했고 (앞으로 더 늘어날 것이 분명합니다), 아직 2주가 더 남아 있습니다. 최고 랩 타임은 12.304초이고 상위 50개의 기록이 그로부터 1초 이내에 몰려 있습니다. 그리고 저는 지금 5등입니다 :)

 

 

 

 

Race on!

 

 


6월에 열린 아마존 AWS Deepracer Virtual Race #2 에서 20.695초 104등을 차지했다.

총 참가자 수는 572명이다.

 

 

나는 Sofia와 Dalyo 두 종류의 모델을 훈련 시키고 있다. 

이번 대회에서는 주로 Sofia를 출전 시키다가 나중에 Dalyo를 출전 시켰다.

최고 점수는 Dalyo로 따냈다.

 

Sofia와 Dalyo 두 모델의 근본적인 차이점은 Action Space 였다.

 

Sofia Action Space

Dalyo Action Space

Sofia는 최고 스피드를 5로 했고 Dalyo는 최고 스피드를 8로 했다.

아무래도 최고 스피드를 더 높게 만든 Dalyo가 더 성적이 좋게 나온 것 같다.

처음에는 Sofia가 안정적이라서 계속 참가를 시켰다.

Dalyo는 거의 완주를 못 했었다. 아마 속도가 너무 빨라서 트랙을 벗어나는 경우가 많았나 보다.

 

나중에 좀 더 연습을 시켜서 Dalyo의 완주율이 50% 정도 올랐다. 

그 완주한 점수들이 Sofia 모델보다 훨씬 높아서 결국 최고 점수는 Dalyo가 차지 했다.

 

이 Action Space는 최초 모델을 만들 때 정해주고 그 다음에는 수정이 불가능 하게 돼 있다.

 

그래서 이 Sofia와 Dalyo를 훈련 시킬 때는 주로 reward_function을 수정해 가면서 성능 향상을 시키려고 노력했다.

 

Sofia는 거의 10번 훈련 시켰었는데 처음 Sofia를 탄생 시켰을 때의 reward_function은 다음과 같다.

 

def reward_function(params):
    '''
    Example of rewarding the agent to follow center line
    '''
    
    # Read input parameters
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']
    
    # Calculate 3 markers that are at varying distances away from the center line
    marker_1 = 0.1 * track_width
    marker_2 = 0.25 * track_width
    marker_3 = 0.5 * track_width
    
    # Give higher reward if the car is closer to center line and vice versa
    if distance_from_center <= marker_1:
        reward = 1.0
    elif distance_from_center <= marker_2:
        reward = 0.5
    elif distance_from_center <= marker_3:
        reward = 0.1
    else:
        reward = 1e-3  # likely crashed/ close to off track
    
    return float(reward)

 

이 때의 트랙은 직선 트랙이었다.

그래서 중앙선에 가까운 경우 reward를 주는 로직을 만들어서 훈련 시켰다.

 

두번째도 Straight track인데 함수를 조금 바꿨다.

 

def reward_function(params):
    '''
    Example of rewarding the agent to follow center line
    '''
    reward=1e-3
    
    # Read input parameters
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']
    steering = params['steering_angle']
    speed = params['speed']
    all_wheels_on_track = params['all_wheels_on_track']
    
    if distance_from_center >=0.0 and distance_from_center <= 0.03:
        reward = 1.0
    
    if not all_wheels_on_track:
        reward = -1
    else:
        reward = params['progress']
        
    # Steering penality threshold, change the number based on your action space setting
    ABS_STEERING_THRESHOLD = 15

    # Penalize reward if the car is steering too much
    if steering > ABS_STEERING_THRESHOLD:
        reward *= 0.8
        
    # add speed penalty
    if speed < 2.5:
        reward *=0.80
    
    return float(reward)

 

완전 Stupid 한 script 이다.

추가한 부분에 reward = reward+n 이런식으로 reward가 더해지거나 빼지는 방식으로 스크립트를 작성했어야 했는데 멍청하게도 그냥 reward 값을 그냥 대입하는 방식으로 돼 있다.

 

위에 보면 차량이 center로부터 0.03 이상 떨어지지 않으면 reward를 1.0으로 설정하고, 그 다음에 트랙 안에 있지 않으면 reward = reward - 1 을 해야 하는데 그냥 reward = -1 을 해 버렸다.

reward = params['progress'] 도 reward = reward + params['progress'] 로 했어야 했다.

 

하여간 이러한 실수가 이 후에도 계속 됐고 후반부에 가서야 이것이 잘 못 됐다는 걸 발견 했다.

 

위 스크립트에서는 distance_from_center 부분은 아무런 역할을 하지 않는 부분이 돼 버렸다.
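
당시 의도했던 대로 reward를 덮어쓰지 않고 누적/차감하도록 고친다면 대략 아래와 같은 형태가 됐을 것이다. 어디까지나 수정 방향을 보여주기 위한 예시일 뿐, 실제 대회에서 검증한 코드는 아니다.

def reward_function(params):
    '''
    reward를 대입하지 않고 누적하는 방식으로 고친 예시
    '''
    reward = 1e-3

    # Read input parameters
    distance_from_center = params['distance_from_center']
    steering = params['steering_angle']
    speed = params['speed']
    all_wheels_on_track = params['all_wheels_on_track']

    # 중앙선 가까이에 있으면 보상을 더해 준다
    if 0.0 <= distance_from_center <= 0.03:
        reward += 1.0

    # 트랙을 벗어나면 깎고, 트랙 위에 있으면 progress 만큼 더해 준다
    if not all_wheels_on_track:
        reward -= 1.0
    else:
        reward += params['progress']

    # Steering penalty threshold
    ABS_STEERING_THRESHOLD = 15
    if steering > ABS_STEERING_THRESHOLD:
        reward *= 0.8

    # add speed penalty
    if speed < 2.5:
        reward *= 0.8

    return float(reward)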

 

세번째는 Oval Track에서 훈련 시켰고 네번째는 London loop에서 훈련 시켰다.

 

 

 

다섯번 째 부터 6월 대회에서 사용했던 Kumo Torakku Track에서 훈련 시켰다.

그리고 Dalyo라는 새 모델도 만들었다.

 

이 때부터 Sofia와 Dalyo 두 모델을 훈련 시켰지만 Sofia가 대부분 완주를 하고 Dalyo는 그렇지 못했기 때문에 계속 Sofia만 출전 시켰었다.

 

Sofia는 이후 10여번 훈련 시켰고 Dalyo는 4번 정도 더 훈련 시켰다.

 

Sofia의 Kumo Torakku 트랙의 마지막 reward_function은 이렇다.

 

def reward_function(params):
    '''
    Example of rewarding the agent to follow center line
    '''
    reward=1e-3
    
    # Read input parameters
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']
    steering = params['steering_angle']
    speed = params['speed']
    all_wheels_on_track = params['all_wheels_on_track']
    
    if not all_wheels_on_track:
        reward = -1
    else:
        reward = params['progress']
        
    # add speed penalty
    if speed < 1.0:
        reward *=0.80
    else:
        reward += speed
    
    return float(reward)

 

speed를 좀 더 빨리 하기 위해 reward에 현재의 스피드를 더하는 로직을 사용했다.

근데 별 효과는 없었다.

 

Dalyo의 Kumo Torakku 트랙 마지막 reward_function도 똑 같다.

 

이 Virtual 경기 대회는 횟수에 상관없이 계속 모델을 참가 시킬 수 있기 때문에 (30분 간격으로) 나중에는 완주 횟수는 떨어지지만 점수가 더 높게 나오는 Dalyo 모델을 주로 출전 시켰고 결국은 104 등을 기록 했다.

 

 

7월에는 3번째 Virtual 대회가 열리고 1번의 offline 대회가 뉴욕에서 열릴 예정이다.

트랙은 둘 다 Empire City 트랙.

가능하면 둘 다 참가할 계획이다.

 

6월 Virtual 대회를 참가하면서 배운 것은 두가지

1. reward_function을 바꾼 후 당초 예상대로 그 변경 내용이 적용 되는지 확인 하는 것이 필요하다.

   AWS의 Debugging 툴을 활용해서 확인 해야 겠다.

2. 훈련을 위해 사용 되는 3가지 주요 세팅 중 Hyperparameter 를 활용한 성능 향상 방법을 배워야 겠다.

 

Action Space는 최초 모델을 생성할 때 설정하고 그 이후에는 변경이 불가능 하다.

reward_function과 Hyperparameter는 변경 가능한데 지금까지는 reward_function만 변경하면서 훈련 시켰다.

 

Hyperparameter는 잘 몰랐기 때문이다.

이제 Hyperparameter를 공부해서 이 부분도 모델 성능 향상에 활용해야 20초 벽을 깰 수 있을 것 같다.
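
참고로 DeepRacer 콘솔에서 조정할 수 있는 주요 hyperparameter 들을 파이썬 dict 형태로 정리해 보면 대략 아래와 같다. 이름과 값은 당시 콘솔 기본값을 기억에 의존해 적은 것이라 정확하지 않을 수 있으므로, 실제 값은 콘솔에서 확인해야 한다.

# DeepRacer 콘솔 hyperparameter 예시 (값은 대략적인 기본값, 콘솔에서 확인 필요)
hyperparameters = {
    "batch_size": 64,                     # gradient descent batch size
    "num_epochs": 10,                     # number of epochs
    "learning_rate": 0.0003,              # learning rate
    "entropy": 0.01,                      # 탐험(exploration)을 유도하는 entropy
    "discount_factor": 0.999,             # 미래 보상을 얼마나 반영할지
    "loss_type": "huber",                 # loss function 종류
    "num_episodes_between_training": 20,  # policy 업데이트 사이의 에피소드 수
}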

 

 

이번 Empire City 트랙을 사용하는 Virtual Circuit에서는 15초대 진입과 50등 이내로 돌입하는 것을 목표로 참가할 계획이다.

 

화이팅!!!!!!!!!!!!


MEGAZONE CLOUD AWS DeepRacer League in Korea

2019. 6. 25. 22:23 | Posted by 솔웅



메가존 클라우드 AWS Deepracer League가 개최 됩니다.

 

참가 신청은 이곳에서 하실 수 있네요.

 

https://www.megazone.com/deepracer_league_01/ 

 

제1회 메가존 클라우드 AWS DeepRacer 리그 참가 신청

Asia-Pacific & KOREA, No.1 AWS Premier Consulting Partner 메가존 클라우드가 제1회 AWS DeepRacer 대회를 개최 합니다.

www.megazone.com

1등 상금이 100만원에 라스베가스 re:Invent 왕복 항공권 및 숙박권...... 와우....

 

AWS DeepRacer 차량 모델이 있으신 분은 참가하시면 좋겠네요.

(관계자 분이 확인해 주셨는데 DeepRacer 차량이 없어도 된답니다. 관심 있으신 분들은 일단 가셔서 주최측에 있는 차량을 이용해서 참가 하실 수 있답니다.)

 

제가 사는 곳은 조그만 동네라서 차량을 트랙에서 직접 테스트 해 볼 기회를 갖기 무척 힘듭니다.

한국에 계신 분들은 아주 좋은 기회인 듯 합니다.

 

참가하셔서 좋은 결과 있으시길 바랍니다.

 

 

Asia-Pacific & KOREA, No.1 AWS Premier Consulting Partner 메가존 클라우드가 AWS DeepRacer 리그를 개최 합니다.

 

실제 트랙이 설치되어 참가자가 직접 제작한 모델을 실제 차량(Agent)으로 주행 가능하며, 우승 시 상금 외 미국 re:Invent 기간 왕복 항공 및 숙박권이 주어지오니, 많은 신청 부탁 드립니다.

 

  • 대회명 : MEGAZONE CLOUD Circuit Challenge
  • 대회 일자 : 2019년 7월 4일 (목)
  • 대회 장소 : 신도림 쉐라톤서울 디큐드시티호텔 6층 그랜드볼룸 [약도]
  • 대회 시간 : 오전 10시 ~ 오후 05시
  • 시상
    – 1등 : 상금 100만원 + re:Invent 왕복 항공 및 숙박권
    – 2등 : 상금 50만원
    – 3등 : 상금 30만원
  • 참가 자격 : 직접 제작한 AWS DeepRacer 모델을 보유한 사람

 

메가존 클라우드와 함께 세계 신기록에 도전해 보세요. [세계 기록 보기]

※ 본 리그는 메가존 클라우드가 자체적으로 진행하는 행사로 AWS에서 개최하는 리그와 무관합니다.
※ 상금 및 경품 지급 시 소득세 등 제세공과금이 차감 혹은 청구 됩니다.
※ 본 경기 규칙은 AWS DeepRacer League 규칙을 따르며 트랙 또한 re:Invent 2018 트랙에서 진행 됩니다.

 

========================================

 

저는 AWS Deepracer 모델 차량을 7월 중순에 받을 예정이라서 10월 3일 토론토에서 열리는 경기에 참가할 수 있을 것 같습니다.

 

휴가 내고 비행기 타고 가서 참가할 생각인데.... 

어떻게 될 지 아직...... 

 

지금 제가 만들고 있는 모델은 Kumo Torakku 트랙에서 23초를 기록하고 그 이후에는 전혀 기록이 나아 지질 않고 있습니다.

지금 1,2,3 등은 모두 10초 대 이던데.... 그런 기록은 어떻게 하면 낼 수 있을 지.......

 

3등은 Kimwooglae인걸로 봐서 한국분인것 같네요.

 

 

어떻게 연락해서 방법 좀 배울 수 없을까?

 

혹시 AWS Deepracer 공부하는 커뮤니티 있으면 알려 주세요.

혼자 공부하는 것 보다 서로 경험 공유하면서 배우면 훨씬 좋을 것 같습니다.

 

 


AWS DeepRacer League

 

누구나 이용할 수있는 세계 최초의 자율주행 레이싱 리그에 오신 것을 환영합니다.

re:Invent 2019에서 AWS DeepRacer Cup에서 우승해 상금, 영예, AWS DeepRacer Championship Cup을 획득하세요. 매월 열리는 가상 Circuit 레이스에서 온라인으로 경쟁하거나 전 세계 Summit Circuit 경주 행사에서 직접 경쟁 할 수 있습니다.

 

Standings: Check out the live leaderboard and latest race results

view Leaderboards >>

 

AWS DeepRacer console을 통해 가상공간에서 레이스

현재 진행 중인 레이스 - Kumo Torakku

 

일본의 스즈카 트랙에서 영감을 얻은 Kumo Torakku 서킷 레이싱에서 우승해 re:Invent로 가는 경비 지원을 받으세요.  후지산을 보며 도쿄의 거리를 달려 승리하세요. 그리고 포인트와 상금을 획득하세요. 그리고 AWS re:Invent 2019에서 열리는 AWS DeepRacer Championship Cup 출전권도 받으세요.

 

AWS Free Tier를 사용하면 최대 10 시간동안 훈련을 무료로 진행할 수 있습니다. 그러니까 AWS DeepRacer League에 아무런 비용을 들이지 않고 참여해 보세요. 

 

Race Online       View Live Leaderboard  

 

 

AWS DEEPRACER TV

Take a step inside the AWS DeepRacer League. Episode 1 follows the competition to Amsterdam, featuring developers of all skill levels hoping to qualify for a chance to win the Championship Cup at AWS re:Invent 2019. Tune in now to learn more about their strategy and how you can build and tune a model for a chance to win!

 

AWS DeepRacer League를 향해 한걸음 내디뎌 보세요. 에피소드 1은 암스테르담에서의 경기가 나옵니다. AWS re:Invent 2019 에서 열리는 Championship Cup에서 우승하는 것을 목표로 하는 다양한 수준의 개발자들이 나옵니다. 그들의 전략에 대해 배워 보세요 그리고 여러분은 우승을 위해 어떻게 모델을 만들어 갈 것인지 생각해 보세요.

 

 

 

Pick a race

 

개발자는 매월 공개되는, 유명한 raceway들에서 영감을 얻은 가상 트랙의 Virtual Circuit에서 경쟁하며 자신의 기술을 시험해 볼 수 있습니다. re:Invent 2019에서 열리는 AWS DeepRacer Championship Cup 출전 경비와 경품을 놓고 경쟁할 수 있습니다.

20 개의 AWS Summit 중 아무 Summit이나 선택해서 직접 참가하세요. (참여하고 싶으면 여러 Summit에 참여하실 수 있습니다.) 워크샵을 통해 re:MARS 및 re:Invent 2019에서 모델을 어떻게 만들고 트레이닝 시킬 지에 대해 도움을 드릴 겁니다. 여러분이 트레이닝 시킨 모델을 집으로 가져 가실 수도 있습니다. 그리고 그 모델을 테스트하고 엑스포에서 그것을 가지고 경쟁하실 수 있습니다.

 

See Full Schedule and Standings

 

Pick up some racing tips

 

머신러닝이 처음이든 기존 기술을 바탕으로 하든, 레이스 준비를 도와드릴 수 있습니다. e-러닝 강좌인 AWS DeepRacer: Driven by Reinforcement Learning으로 시작해 보세요. 약 90분이면 강화 학습(자율주행 차량 훈련에 이상적인 머신러닝의 한 분야)의 기본과 AWS DeepRacer를 익히고 트랙을 누빌 준비가 됩니다!

 

Take the e-learning course

 

 

Race, win prizes, score points

 

Summit Circuit 레이스 또는 Virtual Circuit 레이스에서 경쟁하십시오. 경쟁하는 레이스 수에는 제한이 없습니다. Virtual Circuit 또는 Summit Circuit 레이스 중 하나만이라도 1등을 하면 라스베가스에서 열리는 AWS re:Invent에서 열리는 AWS DeepRacer final round에 참가하는 경비를 지원받으실 수 있습니다. 각 레이스의 10등까지는 AWS DeepRacer 자동차를 받으실 수 있습니다. 

각 레이스마다 참여하시면 포인트를 받게 됩니다. 2019년 말에 이 포인트들의 합계를 가지고 등수를 가립니다. 각 Circuit의 우승자에게는 re:Invent 2019의 AWS DeepRacer knockout(final) 라운드에 참가하는 경비를 지원 받으실 수 있습니다. 자세한 내용은 AWS DeepRacer 공식 규칙을 참조하십시오.


레이스에 참가하면 2019 AWS DeepRacer League Champion이 될 수 있습니다!

 

Learn more about points and prizes

 

 

 

2차 Virtual Race 오픈

 

Kumo Torakku Track

 

 

 

 

 


본격적으로 AWS Deepracer를 시작했다.

첫번째 모델을 만들었다.

 

일단 트랙은 제일 간단한 것으로 선택 하고 Speed는 5로 선택.
직선도로니까 빠르게 달리게 만드는게 더 좋을 것 같아서.
나머지는 다 디폴트.


Reward_function도 그냥 디폴트 사용하고 시간은 30분
참고로 디폴트 함수는 아래와 같다.

 

def reward_function(params):
    '''
    Example of rewarding the agent to follow center line
    '''
    
    # Read input parameters
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']
    
    # Calculate 3 markers that are at varying distances away from the center line
    marker_1 = 0.1 * track_width
    marker_2 = 0.25 * track_width
    marker_3 = 0.5 * track_width
    
    # Give higher reward if the car is closer to center line and vice versa
    if distance_from_center <= marker_1:
        reward = 1.0
    elif distance_from_center <= marker_2:
        reward = 0.5
    elif distance_from_center <= marker_3:
        reward = 0.1
    else:
        reward = 1e-3  # likely crashed/ close to off track
    
    return float(reward)

 

소스코드를 보니 트랙 중앙선 가까이 가면 점수(reward)를 더 많이 주는 간단한 로직이다.

30분 트레이닝 시키고 곧바로 Evaluate
결과는 조금 있다가…

 

 

 


이 첫번째 모델을 clone 해서 두번째 모델을 만들었다.
다 똑같고 reward function 함수만 내가 원하는 대로 조금 바꾸었다.

 

def reward_function(params):
    '''
    Example of rewarding the agent to follow center line
    '''
    reward=1e-3
    
    # Read input parameters
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']
    steering = params['steering_angle']
    speed = params['speed']
    all_wheels_on_track = params['all_wheels_on_track']
    
    if distance_from_center >=0.0 and distance_from_center <= 0.03:
        reward = 1.0
    
    if not all_wheels_on_track:
        reward = -1
    else:
        reward = params['progress']
        
    # Steering penality threshold, change the number based on your action space setting
    ABS_STEERING_THRESHOLD = 15

    # Penalize reward if the car is steering too much
    if steering > ABS_STEERING_THRESHOLD:
        reward *= 0.8
        
    # add speed penalty
    if speed < 2.5:
        reward *=0.80
    
    return float(reward)

 

이전에 썼던 디폴트 함수와는 다르게 몇가지 조건을 추가 했다.
일단 중앙선을 유지하면 좀 더 점수를 많이 주는 것은 좀 더 간단하게 만들었다.
이 부분은 이전에 훈련을 했으니까 이 정도로 해 주면 되지 않을까?
그리고 아무 바퀴라도 트랙 밖으로 나가면 -1을 하고 모두 트랙 안에 있으면 progress 만큼 reward를 주었다.
Progress는 percentage of track completed 이다.
직진해서 결승선에 더 가까이 갈 수록 점수를 더 많이 따도록 했다.
이건 차가 빠꾸하지 않고 곧장 결승점으로 직진 하도록 만들기 위해 넣었다.
그리고 갑자기 핸들을 과하게 돌리면 차가 구르거나 트랙에서 이탈할 확률이 높으니 핸들을 너무 과하게 돌리면 점수가 깎이도록 했다. (15도 이상 핸들을 꺾으면 점수가 깎인다.)
그리고 속도도 너무 천천히 가면 점수를 깎는다.

속도 세팅을 최대 5로 만들어서 그 절반인 2.5 이하로 속도를 줄이면 점수가 깎인다.


이렇게 조건들을 추가하고 Training 시작.
이건 좀 복잡하니 트레이닝 시간을 1시간 주었다.

이 두개의 모델에 대한 결과는…

 

딱 보니 첫번째 디폴트 함수를 사용했을 때는 시간이 갈수록 결과가 좋게 나왔다.
그런데 두번째는 시간이 갈수록 실력이 높아지지는 않는 것 같다.


너무 조건이 여러개 들어가서 그런가?

 

 


생각해 보니 조건을 많이 넣는다고 좋은 것은 아닌것 같다.
일반적으로 코딩을 하다 보면 예외 상황을 만들지 않게 하기 위해 조건들을 아주 많이 주는 경향이 있는데 이 인공지능 쪽은 꼭 조건을 많이 줄 필요는 없을 것 같다.


앞으로 인공지능 쪽을 하다보면 일반 코딩에서의 버릇 중에 고칠 것들이 많을 것 같다.
Evaluation 결과를 보면 두개의 차이가 별로 없다. 
두 모델 모두 3번중 2번 완주 했고 완주시간도 비슷한 것 같다.
조건을 쪼금 더 준 Model 2 가 좀 더 낫긴 하네. (0.2 ~0.3 초 더 빠르다.)
다음은 곡선이 있는 다른 트랙으로 훈련을 시킬 계획이다.


그런데 곡선이 있는 트랙에서는 스피드가 무조건 빠르다고 좋은 건 아닌 것 같다.
내가 스피드를 5로 주었는데 Clone을 만들어서 할 때는 이 스피드를 조절하지 못하는 것 같다.


곡선 구간에서는 reward_function을 어떻게 주어야 하지?


AWS DeepRacer 공부를 시작하고 나서 뜻밖의 요금이 징수되서 조금 놀랐다.

지난주 매일 1달러씩 부과 되던 NAT Gateways 서비스를 언제 세팅 했는지 몰랐었다.

여하튼 아마존 Support 서비스의 도움을 받아서 해당 서비스를 Delete 한 이후 요금은 더이상 부과 되지는 않았는데...

문제는 그걸 지우고 난 이후 Deepracer model을 생성하면 에러가 발생해서 일을 더이상 진행 할 수가 없었다.

 

도저히 안되서 새로운 account를 생성하고 Deepracer Model을 하나 생성했다.

 

그랬더니 모델과 더불어 NAT Gateways가 생성되더라.

이제 알았다. 처음 DeepRacer를 시작하기 위해 account resources를 생성할 때 이 NAT Gateway가 생성된다는 것을....

 

이 단계를 완료하면 NAT Gateways가 생성돼 매일 1달러씩 청구된다.

참고로 아래는 DeepRacer 모델을 하나 생성하면 청구되는 금액들이다.

 

 

* Data Transfer에서 $0.010 per GB - regional data transfer - in/out/between EC2 AZs or using elastic IPs or ELB 가 5기가가 넘어서 5센트가 청구됐다.

* 그 다음 EC2에서 NAT Gateway 생성과 연계해서 2.10 달러가 청구 됐다.

* 여기서 NAT Gateway는 매일 1달러씩 청구되게 돼 있다.

* 그 다음은 시뮬레이터 서비스인 RoboMaker 서비스에 대해 1.52달러가 청구됐고 SageMaker는 30센트가 청구됐다.

 

다른 서비스들은 사용하면 사용한 시간만큼만 내는데 NAT Gateway는 24시간 계속 돌아가기 때문에 매일 1불씩 내야 한다.

사용하지 않을 때 Stop 할 수도 없는 것 같다.

 

하루에 1달러이면 아무것도 아니라고 생각할 수 있지만... 아무것도 하지 않는데도 청구가 되니 속이 쓰렸다.

 

그래서 AWS Support Case를 하나 더 Raise 했다.

 

오늘 아침에 했는데 퇴근 할 때 쯤 답변이 오더라구.

 

 

다음은 답변 내용...

 

Hey there! This is Merlyn, your AWS Billing & Accounts specialist. I hope you are having a wonderful day! I understand you have some concerns about the DeepRacer pricing and you bill. Worry not as I'm here to help.


DeepRacer의 가격 책정과 청구서와 관련 우려가 있다는 것에대해 이해합니다. 걱정마세요. 제가 도와드릴테니까요.

With AWS DeepRacer console services, there are no charges for driving the AWS DeepRacer car or upfront charges. You will only pay for the AWS services you use. You will be billed separately by each of the AWS services used to provide the AWS DeepRacer console services such as creating and training your models, storing your models and logs, and evaluating them, including racing them in the virtual simulator or deploying them to their AWS DeepRacer car. You will see the bill for each service on your monthly billing statement.

AWS DeepRacer console services는 AWS DeepRacer 차량을 driving 하는데에 대한 가격 책정이나 선불로 어떠한 돈을 낼 필요가 없습니다. 당신이 AWS services를 사용한 것에 대한 요금만 지불 하시면 됩니다. AWS DeepRacer console services를 통해 사용하시는 AWS services들에 대해 각각의 사용료가 따로 청구 될 것입니다. 예를 들어 당신의 모델들을 생성하고 훈련하는 것 그리고 그 모델들이나 로그들을 저장하는 것, 그 모델들을 평가하는 작업, 가상 시뮬레이터에서 모델의 레이싱을 하거나  AWS DeepRacer car에 배치하는 일들에 대해 따로 사용료가 부과 됩니다. 

Upon first use of AWS DeepRacer simulation in the AWS console, new customers will get 10 hours of Amazon SageMaker training, and 60 Simulation Unit (SU) hours for Amazon RoboMaker in the form of service credits; $6 for SageMaker, $24 for RoboMaker and $34 for NAT Gateway. The service credits are available for 30 days and applied at the end of the month. A typical AWS DeepRacer simulation uses up to six SUs per hour, thus you will be able to run a typical AWS DeepRacer simulation for up to 10 hours at no charge.


AWS console에서 AWS DeepRacer simulation을 처음 사용하는 새 고객에게는 Amazon SageMaker training 10시간과 Amazon RoboMaker 60 Simulation Unit (SU) 시간을 service credits 형태로 무료로 사용할 수 있는 기회를 드립니다. SageMaker는 $6, RoboMaker는 $24, 그리고 NAT Gateway는 $34 입니다. 이 service credits은 30일간 사용 가능하며 그 달 말에 적용됩니다. 일반적으로 AWS DeepRacer simulation은 시간당 최대 6 SU를 사용하므로, 일반적인 AWS DeepRacer simulation을 10시간까지 무료로 실행하실 수 있습니다.

Please visit the link below for more pricing information:

자세한 가격 정보는 아래 링크를 참조하세요.

https://aws.amazon.com/deepracer/pricing/ 

This inquiry is a bit out of our scope. Here in the Billing and Accounts department we don’t handle technical questions. Please note that AWS unbundles technical support in order to lower the prices of the services themselves instead of billing it to all customers disregarding if they’d like to use it or not. However, if technical support is required, I would like to tell you we offer several Premium Support plans starting at just $29 a month, which enables you to speak to an engineer by email, chat, or phone depending on what support plan you choose.


당신의 요청 중 일부는 우리의 권한을 벗어나는 부분이 있습니다. 저희 Billing and Accounts department에서는 기술적인 질문은 다루지 않습니다. 저희는 모든 고객에게 비용 지불의 부담을 주지 않고 저렴한 가격을 유지하기 위해 기술 부문의 Support는 분리해서 다루고 있습니다. 만약 기술적인 support를 원하시면 월 $29 부터 시작되는 Premium Support plan 서비스들이 있습니다. 이 유료 기술 지원 서비스를 이용하시면 이메일, 채팅 그리고 전화등 여러분이 편한 방식으로 엔지니어들과 소통을 할 수 있습니다.

All Premium Support plans include direct access to Support Engineers, and these plans offer a tailored support experience that allows you to select the support level that best fits your needs. More information including pricing and how to sign up can be found here:


모든 Premium Support plan 들에는 Support Engineers과의 직접 access 권한이 포함되며 이러한 플랜들에서는 사용자의 요구 사항에 가장 적합한 서포트 레벨을 선택할 수 있는 맞춤 지원 서비스가 제공 됩니다. 가격 정책 및 가입 방법에 대한 자세한 정보는 여기를 참조하세요.

https://aws.amazon.com/premiumsupport/pricing/ 

You can also search the Amazon Web Services Developer forums or post a new question. Our technical staff and the Amazon Web Services developer community participate in the forums regularly and might be able to help you. Generally, questions are responded to in about a day, though there is not a guaranteed response time.


그 외에  Amazon Web Services Developer forums에서 궁금한 내용을 검색해서 찾는 방법도 있습니다. 또한 새로운 질문을 이곳에 올릴 수도 있구요. Our technical staff와 Amazon Web Services developer community에 참여하는 개발자들로 부터 지원을 받으실 수도 있을 겁니다. 일반적으로 하루 정도 기다리면 답변을 받습니다. (응답 시간에 대한 보장은 없지만요.)

https://forums.aws.amazon.com/index.jspa 

Finally, you may also want to check out our AWS Support Knowledge Center and AWS Documentation pages located here:  

마지막으로 AWS Support Knowledge Center 및 AWS Documentation 페이지를 참조하십시오.

https://aws.amazon.com/premiumsupport/knowledge-center/  
https://aws.amazon.com/documentation/ 

I certainly want you to have the best experience while using our services, so please feel free to let me know if you have any other concern in the meantime, it will be an honor to keep on assisting you. On behalf of Amazon Web Services, I wish you Happy Cloud Computing!

 

Best regards,

Merlyn N.

Amazon Web Services

====================================================================
Learn to work with the AWS Cloud. Get started with free online videos and self-paced labs at 
http://aws.amazon.com/training/ 
====================================================================

Amazon Web Services, Inc. and affiliates

 

 

 

뭐 내용을 읽어보니 어쩔 수 없는 것 같다.

 

어쨌든 한달 정도는 공짜인 것 같으니 거기에 위안을 삼아야지...

 

AWS DeepRacer forum 에 갔더니 나처럼 모델 생성시 Error가 난다는 글이 몇개 눈에 띄었다.

아마도 NAT Gateways를 delete 해서 그런게 아닌지...

 

이 문제 때문인지 몰라도 어제부터 아마존에서 뭔가 이슈를 처리하느라 Training과 evaluation 일을 할 수 없다는 메세지가 뜨더라.

 

 

이 investigation이 위 문제와 관련이 있기를 바라고 또 그 결과로 NAT Gateways 비용 청구와 관련해서 좀 더 전향적인 방안이 제시 됐으면 좋겠다.

 

하루 1불씩 무조건 부과되는건 쫌 부담 스럽다.

내가 필요할 때만 해당 서비스를 사용하고 거기에 대한 비용만 지불할 수 있게 해 달라... 아마존아......

  


Hands-on Exercise 1: Model Training Using AWS DeepRacer Console

 

This is the first of four exercises that you will encounter in this course. This first exercise guides you through building, training, and evaluating your first RL model using the AWS DeepRacer console. To access the instructions for three of these exercises, download and unzip this course package. For this particular exercise, find and open the relevant PDF file and follow the steps within to complete the exercise.

*Note: This exercise is designed to be completed in your AWS account. AWS DeepRacer is part of AWS Free Tier, so you can get started at no cost. For the first month after sign-up, you are offered a monthly free tier of 10 hours of Amazon SageMaker training and 60 simulation units of Amazon RoboMaker (enough to cover 10 hours of training). If you go beyond those free tier limits, you will accrue additional costs. For more information, see the AWS DeepRacer Pricing page.

 

Hands-on Exercise 1- Model Training Using AWS DeepRacer Console.pdf
0.23MB

 

 

 

 

Hands-on Exercise 2- Advanced Model Training Using AWS DeepRacer Console.pdf
0.25MB

 

 

For feedback, suggestions, or corrections, email us at aws-course-feedback@amazon.com.

 

 

Hands-on Exercise 3- Distributed AWS DeepRacer RL Training using Amazon SageMaker and AWS RoboMaker.pdf
0.46MB

 

SageMakerForDeepRacerSetup.yaml
0.01MB

 

AWSTemplateFormatVersion: "2010-09-09"
Description: 'AWS DeepRacer: Driven by Reinforcement Learning'
Parameters:
  SagemakerInstanceType:
    Description: 'Machine Learning instance type that should be used for Sagemaker Notebook'
    Type: String
    AllowedValues:
      - ml.t2.medium
      - ml.t2.large
      - ml.t2.xlarge
      - ml.t3.medium
      - ml.t3.large
      - ml.t3.xlarge
      - ml.m5.xlarge
    Default: ml.t3.medium
  CreateS3Bucket:
    Description: Create and use a bucket created via this template for model storage
    Default: True
    Type: String
    AllowedValues:
      - True
      - False
    ConstraintDescription: Must be defined at True|False.
  VPCCIDR:
    Description: 'CIDR Block for VPC (Do Not Edit)'
    Type: String
    Default: 10.96.0.0/16
  PUBSUBNETA:
    Description: 'Public Subnet A (Do Not Edit)'
    Type: String
    Default: 10.96.6.0/24
  PUBSUBNETB:
    Description: 'Public Subnet B (Do Not Edit)'
    Type: String
    Default: 10.96.7.0/24
  PUBSUBNETC:
    Description: 'Public Subnet C (Do Not Edit)'
    Type: String
    Default: 10.96.8.0/24
  PUBSUBNETD:
    Description: 'Public Subnet D (Do Not Edit)'
    Type: String
    Default: 10.96.9.0/24
  S3PathPrefix:
    Type: String
    Description: 'Bootstrap resources prefix'
    Default: 'awsu-spl-dev/spl-227'
  S3ResourceBucket:
    Type: String
    Description: 'Bootstrap S3 Bucket'
    Default: 'aws-training'
Conditions:
  CreateS3Bucket: !Equals [ !Ref CreateS3Bucket, True ]
  #  NoCreateS3Bucket: !Equals [ !Ref CreateS3Bucket, False ]
Resources:

# Defining the VPC used for the sandbox ENV and the notebook instance
  VPC:
    Type: 'AWS::EC2::VPC'
    Properties:
      CidrBlock: !Ref VPCCIDR
      EnableDnsSupport: 'true'
      EnableDnsHostnames: 'true'
      Tags:
        - Key: Name
          Value: 'DeepRacer Sandbox'
# There are a few calls made out to the public internet to download supporting resources
  InternetGateway:
    Type: 'AWS::EC2::InternetGateway'
    DependsOn: VPC
    Properties:
      Tags:
        - Key: Name
          Value: 'DeepRacer Sandbox IGW'
# Attach this IGW to the sandbox VPC
  AttachGateway:
    Type: 'AWS::EC2::VPCGatewayAttachment'
    DependsOn:
      - VPC
      - InternetGateway
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway
# The default setting in the notebook is to use public IP addresses to communicate
# between the instances running the simulation and the instances collecting and
# processing data. A NAT GW could have been used (at added cost), which would allow
# the use of private IP addresses.

# Found in testing that not all ML instance types can be deployed or are available
# in all AZs within a given region. We are using the newest instance family, T3.
  PublicSubnetA:
    Type: 'AWS::EC2::Subnet'
    DependsOn: VPC
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Ref PUBSUBNETA
      AvailabilityZone: !Select
        - '0'
        - !GetAZs ''
      Tags:
        - Key: Name
          Value: 'Deepracer Sandbox - Public Subnet - A'
  PublicSubnetB:
    Type: 'AWS::EC2::Subnet'
    DependsOn: VPC
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Ref PUBSUBNETB
      AvailabilityZone: !Select
        - '1'
        - !GetAZs ''
      Tags:
        - Key: Name
          Value: 'Deepracer Sandbox Public Subnet - B'
  PublicSubnetC:
    Type: 'AWS::EC2::Subnet'
    DependsOn: VPC
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Ref PUBSUBNETC
      AvailabilityZone: !Select
        - '2'
        - !GetAZs ''
      Tags:
        - Key: Name
          Value: 'Deepracer Sandbox Public Subnet - C'
  PublicSubnetD:
    Type: 'AWS::EC2::Subnet'
    DependsOn: VPC
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Ref PUBSUBNETD
      AvailabilityZone: !Select
        - '3'
        - !GetAZs ''
      Tags:
        - Key: Name
          Value: 'Deepracer Sandbox Public Subnet - D'
# Define the Public Routing Table
  PublicRouteTable:
    Type: 'AWS::EC2::RouteTable'
    DependsOn:
      - VPC
      - AttachGateway
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: 'Deepracer Sandbox Public Routing Table'
# And add in the default route to 0.0.0.0/0
  PublicRouteIGW:
    Type: 'AWS::EC2::Route'
    DependsOn:
      - PublicRouteTable
      - InternetGateway
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
# Attach the routing table to each of the subnets
  PublicRouteTableAssociationA:
    Type: 'AWS::EC2::SubnetRouteTableAssociation'
    Properties:
      SubnetId: !Ref PublicSubnetA
      RouteTableId: !Ref PublicRouteTable
  PublicRouteTableAssociationB:
    Type: 'AWS::EC2::SubnetRouteTableAssociation'
    Properties:
      SubnetId: !Ref PublicSubnetB
      RouteTableId: !Ref PublicRouteTable
  PublicRouteTableAssociationC:
    Type: 'AWS::EC2::SubnetRouteTableAssociation'
    Properties:
      SubnetId: !Ref PublicSubnetC
      RouteTableId: !Ref PublicRouteTable
  PublicRouteTableAssociationD:
    Type: 'AWS::EC2::SubnetRouteTableAssociation'
    Properties:
      SubnetId: !Ref PublicSubnetD
      RouteTableId: !Ref PublicRouteTable
# Define a S3 endpoint for all the S3 traffic during training
  S3Endpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcId: !Ref VPC
      RouteTableIds:
        - !Ref PublicRouteTable
      ServiceName: !Join
        - ''
        - - com.amazonaws.
          - !Ref 'AWS::Region'
          - .s3
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal: '*'
            Action:
              - 's3:*'
            Resource:
              - '*'
# This exercise is going to need a bucket to store any files generated from training.
# There is a condition to evaluate whether the parameter is true; otherwise this
# resource will not be created.
  SandboxBucket:
    Type: 'AWS::S3::Bucket'
    DeletionPolicy: Retain
    Condition: CreateS3Bucket
    Properties:
      BucketName:
        Fn::Join:
          - "-"
          - - deepracer-trainingexercise
            - Ref: AWS::Region
            - Ref: AWS::AccountId
# Sagemaker is going to be making calls to Robomaker to launch the sim, and to
# Sagemaker to launch the training instance. This requires AWS credentials. A
# principal of sagemaker and robomaker needs to be assigned, as both services will
# assume this role. Default Sagemaker full access and S3 access are needed.
  SageMakerNotebookInstanceRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - sagemaker.amazonaws.com
                - robomaker.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/AmazonSageMakerFullAccess'
      Path: /
      Policies:
        - PolicyName: DeepRacerPolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action: [ 's3:*',
                          'iam:GetRole' ]
                Resource: '*'
# This is how the notebook gets loaded on to sagemaker. There is a zip file
# with the needed files, and a second http call to pull down the notebook.
# This is only done "OnCreate" - when the sagemaker instance is first deployed.
# You can also have the script run "OnStart" (when a sagemaker instance changes
# from a stopped state to a running state). This would automatically update the files
# to the latest from source, but could overwrite changes applied during
# your testing
  SageMakerNotebookInstanceLifecycleConfig:
    Type: 'AWS::SageMaker::NotebookInstanceLifecycleConfig'
    Properties:
  #    OnStart:
  #      - Content:
  #          Fn::Base64:
  #            #!/bin/bash
  #            !Sub |
  #            cd SageMaker
  #            chown ec2-user:ec2-user -R /home/ec2-user/SageMaker

      OnCreate:
        - Content:
            Fn::Base64:
              !Sub |
              cd SageMaker
              curl -O https://us-west-2-${S3ResourceBucket}.s3.amazonaws.com/${S3PathPrefix}/scripts/rl_deepracer_robomaker_coach.ipynb
              curl -O https://us-west-2-${S3ResourceBucket}.s3.amazonaws.com/${S3PathPrefix}/scripts/rl_deepracer_robomaker_coach.zip
              unzip rl_deepracer_robomaker_coach.zip
              chown ec2-user:ec2-user -R /home/ec2-user/SageMaker
# Security Group for sagemaker instance running in this VPC
  SagemakerInstanceSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Sagemaker Security Group
      VpcId: !Ref VPC
      SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 1
        ToPort: 65535
        CidrIp: !Ref VPCCIDR
      - IpProtocol: udp
        FromPort: 1
        ToPort: 65535
        CidrIp: !Ref VPCCIDR
      SecurityGroupEgress:
      - IpProtocol: tcp
        FromPort: 1
        ToPort: 65535
        CidrIp: !Ref VPCCIDR
      - IpProtocol: udp
        FromPort: 1
        ToPort: 65535
        CidrIp: !Ref VPCCIDR
# Creating the Sagemaker Notebook Instance
  SageMakerNotebookInstance:
    Type: 'AWS::SageMaker::NotebookInstance'
    Properties:
      #NotebookInstanceName: 'DeepracerSagemakerSandbox'
      NotebookInstanceName: !Join ["-", ["DeepRacerSagemakerSandbox", !Ref "AWS::StackName"]]
      SecurityGroupIds:
        - !GetAtt
          - SagemakerInstanceSecurityGroup
          - GroupId
      InstanceType: !Ref SagemakerInstanceType
      SubnetId: !Ref PublicSubnetA
      Tags:
        - Key: Name
          Value: 'DeepRacer Sandbox'
      LifecycleConfigName: !GetAtt
          - SageMakerNotebookInstanceLifecycleConfig
          - NotebookInstanceLifecycleConfigName
      RoleArn: !GetAtt
          - SageMakerNotebookInstanceRole
          - Arn
Outputs:
  # Display the name of the bucket that was created from this CFN Stack
    ModelBucket:
      Condition: CreateS3Bucket
      Value: !Ref SandboxBucket
  # URL to get to the Sagemaker UI, and find the Jupyter button. 
    SagemakerNotebook:
      Value:
        !Sub |
          https://console.aws.amazon.com/sagemaker/home?region=${AWS::Region}#/notebook-instances/${SageMakerNotebookInstance.NotebookInstanceName}
