This blog is a place to organize the new technologies and information I come across while working in the field as a developer. I am fortunate to work as a consultant on projects for large companies in the US, so I have many opportunities to encounter new technologies. I would like to share information with many people about the tools used in US IT projects.

 

https://www.carnival.com/itinerary/7-day-western-caribbean-cruise/miami/magic/7-days/zw0

 


Miami - Grand Cayman (British territory) - Mahogany Bay (Honduras) - Belize - Cozumel (Mexico) - Miami

 


Udemy - AWS Machine Learning, AI, SageMaker - With Python




Summary


Section 3 Linear Regression


23. Summary


The Squared Loss Function is parabolic in nature. It has the important property of not only telling us the loss at a given weight, but also which way to move the weight to reduce the loss.


The Gradient Descent optimization algorithm uses the loss function to move the weights of all the features, iteratively adjusting the weights until an optimal value is reached.


Batch Gradient Descent predicts the y value for all training examples and then adjusts the weights based on the loss. It can converge much more slowly when the training set is very large. Training-set order does not matter, since every example in the training set is considered before an adjustment is made.


Stochastic Gradient Descent predicts the y value for the next training example and immediately adjusts the weights. It can converge faster when the training set is very large. The training set should be in random order, otherwise the model will not learn correctly. AWS ML uses Stochastic Gradient Descent.
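Below is a minimal sketch (not the course's code) contrasting the two update strategies for linear regression with squared loss; the data, function names, and learning rate are illustrative assumptions.

import numpy as np

def batch_gradient_descent(X, y, lr=0.1, epochs=500):
    # One weight update per pass, using the average gradient over ALL examples
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        error = X @ w - y                   # predictions minus targets
        w -= lr * (X.T @ error) / len(y)    # gradient of the average squared loss
    return w

def stochastic_gradient_descent(X, y, lr=0.1, epochs=500, seed=0):
    # One weight update per EXAMPLE; random order matters so the model learns correctly
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            error = X[i] @ w - y[i]
            w -= lr * error * X[i]
    return w

# Toy data: y = 2 + 3x plus noise; a column of ones handles the intercept
rng = np.random.default_rng(42)
x = rng.uniform(0, 1, 200)
X = np.column_stack([np.ones_like(x), x])
y = 2 + 3 * x + rng.normal(0, 0.1, 200)
print(batch_gradient_descent(X, y))        # both should land near the true weights [2, 3]
print(stochastic_gradient_descent(X, y))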



Section 4 AWS - Linear Regression Models


27. Concept - How to evaluate regression model accuracy?


Linear Regression - Residuals


- AWS ML Console provides a histogram that shows the distribution of examples that were overestimated and underestimated, and to what extent

- Available as "explore model performance" option under Evaluation -> Summary

- Ideal: Over/Under estimation should be a normal curve centered at 0.

- Structural Issue: When you observe the vast majority of examples falling on one side. Adding more relevant features can help remedy the situation.


31. Model Performance Summary and Conclusion

RMSE (Root Mean Square Error) is the evaluation metric for Linear Regression. The smaller the RMSE, the better the predictive accuracy of the model. A perfect model would have an RMSE of 0.
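As a quick illustration of the metric (a NumPy sketch, not AWS ML code; the numbers are made up):

import numpy as np

def rmse(y_true, y_pred):
    # Root Mean Square Error: square root of the average squared prediction error
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

print(rmse([100, 200, 300], [110, 190, 300]))   # about 8.16; a perfect model would give 0.0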


To prepare data for AWS ML, the data is required to be in one of the following:

1. CSV file available in S3

2. AWS Redshift Datawarehouse

3. AWS Relational Database Service (RDS) MySQL DB


Batch Prediction results are stored by AWS ML to S3 in the specified bucket


We pulled the data from S3 to local folder and plotted them


Based on the distribution of data, AWS ML suggests a recipe for processing data.

In the case of numeric features, it may suggest binning the data instead of treating it as a raw numeric value

For this example, treating x as numeric provided the best results






Section 5 Adding Features To Improve Model


35. Summary

1. Underfitting occurs when the model does not accurately capture the relationship between the features and the target

2. Underfitting would cause large training errors and evaluation errors   

  Training RMSE: 385.1816, Evaluation RMSE: 257.8979, Baseline RMSE: 437.311

3. Evaluation Summary - The prediction overestimation and underestimation histogram provided by the AWS ML console gives important clues about how the model is behaving; under-estimation and over-estimation need to be balanced and centered around 0

4. A box plot also highlights distribution differences between predicted and actual values

5. To address underfitting, add higher-order polynomials or more relevant features to capture the complex relationship

  Training RMSE: 132.2032, Evaluation RMSE: 63.6847, Baseline RMSE: 437.311

6. When working with datasets containing 100s or even 1000s of features, it is important to rely on these metrics and distributions to gain insight into model performance



Section 6 Normalization


37. Concept: Normalization to smoothen magnitude differences


Normalization Transformation (Numeric)

- When there are very large differences in the magnitude of features, features with a large magnitude can dominate the model

- Example: We saw this in the Quadratic Extra Features dataset

- Normalization is a process of transforming features to have a mean of 0 and variance of 1. This will ensure all features have a similar scale

  : Feature_normalized = (feature - mean) / sigma

    where mean = mean of feature x and sigma = standard deviation of feature x

  : Usage: normalize(numericFeature)

- Optimization algorithm may also converge faster with normalized features compared to features that have very large scale differences
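A minimal NumPy sketch of the (feature - mean) / sigma transformation described above, using hypothetical feature values:

import numpy as np

feature = np.array([5.0, 120.0, 30000.0, 45000.0, 150.0])   # very different magnitudes
normalized = (feature - feature.mean()) / feature.std()

print(round(normalized.mean(), 6), round(normalized.std(), 6))   # ~0.0 and ~1.0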


39. Summary

1. Having a lot of features, including complex features, can help improve prediction accuracy

2. When feature ranges are orders of magnitude different, the larger features can dominate the outcome. Normalization is a process of transforming features to have a mean of 0 and variance of 1. This will ensure all features have a similar scale.

3. Without Normalization:

  Training RMSE: 83973.66, Evaluation RMSE: 158260.62, Baseline RMSE: 437.31

4. With Normalization:

  Training RMSE: 72.35, Evaluation RMSE: 51.7387, Baseline RMSE: 437.31

5. Normalization can be easily enabled using AWS ML Transformation recipes



Section 7 Adding Complex Features

46. Summary

Adding polynomial features allows us to fit more complex shapes


To add polynomial features that combine all input features, use the scikit-learn library. Anaconda includes these modules by default; a short sketch follows.
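The sketch below uses scikit-learn's PolynomialFeatures to generate higher-order and combination features; the input values are illustrative.

import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.array([[2.0, 3.0]])                    # two raw input features
poly = PolynomialFeatures(degree=4, include_bias=False)
X_poly = poly.fit_transform(X)                # x1, x2, x1^2, x1*x2, ..., up to degree 4

print(poly.get_feature_names_out())           # names of the generated features (scikit-learn >= 1.0)
print(X_poly.shape)                           # (1, 14) for a degree-4 expansion of 2 features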


We saw good performance with degree 4; additional features may bring incremental improvement, but with the added complexity of managing more features.


1. Model Degree 1 Training RMSE:0.5063, Evaluation RMSE:0.4308, Baseline RMSE:0.689

2. Model Degree 4 Training RMSE:0.2563, Evaluation RMSE:0.1493, Baseline RMSE:0.689

3. Model Degree 15 Training RMSE:0.2984, Evaluation RMSE:0.1222, Baseline RMSE:0.689




Section 8 Kaggle Bike Hourly Rental Prediction


50. Linear Regression Wrapup and Summary


AWS ML - Linear Regression

* Linear Model

* Gradient Descent and Stochastic Gradient Descent

* Squared Error Loss Function

* AWS ML Training, Evaluation, Interactive Prediction, Batch Prediction

* Prediction Quality

  - RMSE

  - Residual Histograms

* Data visualization

* Normalization

* Higher order polynomials



Section 9 - Logistic Regression Models



In short: Linear Regression gives a continuous output, i.e. any value within a range of values. ... A GLM (Generalized Linear Model) does not assume a linear relationship between the dependent and independent variables. However, it assumes a linear relationship between the link function and the independent variables in the logit model.


https://stackoverflow.com/questions/12146914/what-is-the-difference-between-linear-regression-and-logistic-regression


https://techdifferences.com/difference-between-linear-and-logistic-regression.html


58. Summary

Binary Classifier : Predicts positive class probability of an observation

The Logistic or Sigmoid function has an important property: its output is between 0 and 1 for any input. This output is used by binary classifiers as the probability of the positive class.
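A tiny sketch of that property (plain NumPy, illustrative only):

import numpy as np

def sigmoid(z):
    # Squashes any real-valued score into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

for z in (-10, -1, 0, 1, 10):
    print(z, round(float(sigmoid(z)), 4))   # approaches 0 for large negative z, 0.5 at 0, 1 for large positive z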

True Positive - Samples that are actual-positives correctly predicted as positive

True Negative - Samples that are actual-negatives correctly predicted as negative

False Negative - Samples that are actual-positives incorrectly predicted as negative

False Positive - Samples that are actual-negatives incorrectly predicted as positive

The Logistic Loss Function is convex in nature. It has the important property of not only telling us the loss at a given weight, but also which way to move the weight to reduce the loss.

The Gradient Descent optimization algorithm uses the loss function to move the weights of all the features, iteratively adjusting the weights until an optimal value is reached.

Batch Gradient Descent predicts the y value for all training examples and then adjusts the weights based on the loss. It can converge much more slowly when the training set is very large. Training-set order does not matter, since every example in the training set is considered before an adjustment is made.

Stochastic Gradient Descent predicts the y value for the next training example and immediately adjusts the weights. It can converge faster when the training set is very large. The training set should be in random order, otherwise the model will not learn correctly. AWS ML uses Stochastic Gradient Descent.




Section 10 


62

Classification Metrics

True Positive = count(model correctly predicted positives). Students who passed the exam correctly classified as pass.

True Negative = count(model correctly predicted negatives). Students who failed the exam correctly classified as fail.

False Positive = count(model misclassified negative as positive). Students who failed the exam incorrectly classified as pass.

False Negative = count(model misclassified positive as negative). Students who passed the exam incorrectly classified as fail.


* True Positive Rate, Recall, Probability of detection - Fraction of actual positives predicted correctly. A larger value indicates better predictive accuracy.


TPR = True Positive / Actual Positive


* False Positive Rate, probability of false alarm - Fraction of actual negatives predicted as positive. A smaller value indicates better predictive accuracy.


FPR = False Positive / Actual Negative


* Precision - Fraction of true positives among all predicted positives. A larger value indicates better predictive accuracy.


Precision = True Positive / Predicted Positive


* Accuracy - Fraction of correct predictions. Larger value indicates better predictive accuracy

Accuracy = (True Positives + True Negatives) / n

where n is the total number of examples
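A small sketch computing these metrics from raw confusion counts (the counts are hypothetical, for the pass/fail exam example above):

tp, tn, fp, fn = 40, 45, 5, 10            # hypothetical counts

tpr       = tp / (tp + fn)                # recall / probability of detection
fpr       = fp / (fp + tn)                # probability of false alarm
precision = tp / (tp + fp)
accuracy  = (tp + tn) / (tp + tn + fp + fn)

print(f"TPR={tpr:.2f} FPR={fpr:.2f} Precision={precision:.2f} Accuracy={accuracy:.2f}")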


63. 

Classification Insights with AWS Histograms


Histogram - Binary Classifier


* Positive and Negative histograms

* Interactive tool to test effect of various cut-off thresholds

* Ability to save a threshold for the model

* Available under :

Model -> Evaluation Summary -> Explore Performance

https://docs.aws.amazon.com/machine-learning/latest/dg/binary-model-insights.html


64

Concept: AUC Metric


AUC - Binary Classifier

* Area Under Curve (AUC) metric - ranges from 0 to 1. A larger value indicates better predictive accuracy

* AUC is the area of the curve formed by plotting the True Positive Rate against the False Positive Rate at different cut-off thresholds

* AUC value of 0.5 is baseline and it is considered random-guess

* AUC closer to 1 indicates better predictive accuracy

* AUC closer to 0 indicates the model has learned correct patterns, but is flipping its predictions (0's are predicted as 1's and vice versa).
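A minimal sketch of computing AUC locally with scikit-learn (the labels and scores are made up; AWS ML reports this metric for you in the console):

from sklearn.metrics import roc_auc_score

y_true  = [0, 0, 1, 1, 0, 1, 1, 0]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.7, 0.3]   # predicted positive-class probabilities

print(roc_auc_score(y_true, y_score))   # 1.0 = perfect, 0.5 = random guess, near 0 = flipped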


69 Summary

For Binary Classification, Area Under Curve (AUC) is the evaluation metric to assess the quality of model


AUC is the area of a curve formed by plotting True Positive Rate against False Positive Rate at different cut-off thresholds.

* AUC metric closer to 1 indicates highly accurate prediction

* AUC metric 0.5 indicates random guess - Baseline AUC

* AUC metric closer to 0 indicates model has learned from the features, but predictions are flipped


Advanced Metrics

* Accuracy - Fraction of correct predictions. Larger value indicates better predictive accuracy

* True Positive Rate - Probability of detection. Out of all positives, how many were correctly predicted as positive. A larger value indicates better predictive accuracy

* False Positive Rate - Probability of false alarm. Out of all negatives, how many were incorrectly predicted as positive. A smaller value indicates better predictive accuracy

* Precision - Out of all predicted as positive, how many are true positives? A larger value indicates better predictive accuracy.



Section 11

72 Concept: Evaluating Predictive Quality of Multiclass Classifiers


Multi-class metrics


* F1 Score - Harmonic mean of Recall and Precision. A larger F1 Score indicates better predictive accuracy. Binary metric


F1 Score = 2 × Precision × Recall / (Precision + Recall)


* Average F1 Score - For multi-class problems, the average of the class-wise F1 scores is used for assessing predictive quality


* Baseline F1 Score - Hypothetical model that predicts only the most frequent class as the answer (a short sketch of F1 and Average F1 follows)
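A short sketch of class-wise F1 and its average using scikit-learn (the labels are hypothetical):

from sklearn.metrics import f1_score

y_true = ["setosa", "setosa", "versicolor", "versicolor", "virginica", "virginica"]
y_pred = ["setosa", "setosa", "versicolor", "virginica", "virginica", "virginica"]

print(f1_score(y_true, y_pred, average=None))      # per-class F1 scores
print(f1_score(y_true, y_pred, average="macro"))   # average F1 across classes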


Concept: Confusion Matrix to Evaluate Predictive Quality


Multiclass - Metrics - Confusion Matrix


* Accessible from Model -> Evaluation Summary -> Explore Model performance


* Concise table that shows percentage and count of correct classification and incorrect classifications


* Visual look at model performance


* Up to 10 classes are shown - listed from most frequent to least frequent


* For more than 10 classes, the first 9 most frequent classes are shown and the 10th collapses the remaining classes into an "others" category

* Option to download confusion matrix


* https://docs.aws.amazon.com/machine-learning/latest/dg/multiclass-model-insights.html



77. Summary


Multi-Class Evaluation Metric

1. F1 Score is a binary classification metric. It is the harmonic mean of precision and recall

F1 Score = 2 X Precision X Recall / (Precision + Recall)

Higher F1 Score reflects better predictive accuracy

2. Multi-Class Evaluation

Average of class wise F1 Score

3. Baseline F1 Score = Hypothetical model that predicts only most frequent class as the answer

4. Visualization - Confusion Matrix - Available on AWS ML Console

Matrix. Rows = true class. Columns = predicted class

Cell color - diagonal indicates true class prediction %

Cell color - non-diagonal indicates incorrect prediction %

The last column is the F1 score for that class; the second-to-last column is the true class distribution

The last row is the predicted class distribution

Up to 10 classes are shown - listed from most frequent to least frequent

For more than 10 classes, the first 9 most frequent classes are shown and the 10th collapses the remaining classes into an "others" category

You can download the confusion matrix through the Explore Performance page under Evaluations (a sketch of building one locally follows)
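For comparison with the console view, a confusion matrix can be reproduced locally (scikit-learn sketch with hypothetical labels; rows = true class, columns = predicted class):

from sklearn.metrics import confusion_matrix

y_true = ["setosa", "setosa", "versicolor", "versicolor", "virginica", "virginica"]
y_pred = ["setosa", "setosa", "versicolor", "virginica", "virginica", "virginica"]

labels = ["setosa", "versicolor", "virginica"]
print(confusion_matrix(y_true, y_pred, labels=labels))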


Prediction Summary

1. Eval with default recipe settings. Average F1 score: 0.905

2. Eval with numeric recipe settings: Average F1 score: 0.827

3. Batch prediction Results (predict all 150 example outcome)

  a. With default recipe settings: Average F1 Score: 0.973

  b. With numeric recipe settings: Average F1 Score: 0.78

4. Classification was better with binning. Versicolor classification was impacted when numeric setting was used

5. Higher F1 Score implies better prediction accuracy



Section 12 Text Based Classification with AWS Twitter Dataset


78. AWS Twitter Feed Classification for Customer Service

https://github.com/aws-samples/machine-learning-samples/tree/master/social-media 


79. Lab: Train, Evaluate Model and Assess Predictive Quality, 80. Lab: Interactive Prediction with AWS

- Practice


81. Logistic Regression Summary


AWS ML - Logistic Regression

- Linear Model

- Logistic/Sigmoid Function to produce a probability

- Stochastic Gradient Descent

- Logistic Loss function

- AWS ML Training, Evaluation, Interactive Prediction, Batch Prediction

- Prediction Quality 

  : TPR

  : FPR

  : Accuracy

  : Precision

  : AUC Metrics

  : F1 Score

  : Average F1 Score for multi-class

- Data visualization

- Text Processing

- Normalization

- Higher order polynomials




Section 13


82. Recipe Overview


Recipe

- Recipe is a set of instructions for pre-processing data

- Recipe is a JSON like document

- Consists of three parts: Groups, Assignments, Outputs

- Groups - Groups are collection of features for which similar transformation needs to be applied

  : Built-in Group : ALL_TEXT, ALL_NUMERIC, ALL_CATEGORICAL, ALL_BINARY

  : Define your own groups

- Assignments - Enable creation of new features derived from existing ones

- Outputs - Lists the features used for the learning process and optionally applies transformations


Recipe is automatically applied to training data, evaluation data and to data submitted through real-time and batch prediction APIs


83. Recipe Example


84. Text Transformation


* N-gram Text Transformation

- Tokenizes the input text and combines tokens into a sliding window of n words, where n is specified in the recipe

- Usage: ngram(textFeature, n), where n is the size

- By default all text data is tokenized with n=1

  : Example: "Customer requests urgent response" text is tokenized as {"Customer", "requests", "urgent", "response"}

- With n=2, it generates one word and two word combinations

  : {"Customer requests", "requests urgent", "urgent response", "Customer", "requests", "urgent", "response"}

- N-grams of size up to 10 are supported

- N-grams break text at whitespace; punctuation is considered part of the word

- You can remove punctuation using the no_punct transformation (a rough sketch of n-gram generation follows)
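The following is a rough plain-Python sketch of what the ngram transformation produces (illustrative only, not the AWS ML implementation):

def ngrams(text, n):
    tokens = text.split()                  # breaks at whitespace only; punctuation stays attached
    grams = list(tokens)                   # size-1 tokens are always included
    for size in range(2, n + 1):
        grams += [" ".join(tokens[i:i + size]) for i in range(len(tokens) - size + 1)]
    return grams

print(ngrams("Customer requests urgent response", 2))
# ['Customer', 'requests', 'urgent', 'response', 'Customer requests', 'requests urgent', 'urgent response']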


* OSB Text Transformation

- Orthogonal Sparse Bigram (OSB) Transformation provides more word combinations compared to n-gram

- Usage: osb(textFeature, size)

- Puts one underscore to indicate the word boundary, plus an additional underscore for every word skipped

- For example (AWS Document provided sample).

https://docs.aws.amazon.com/ko_kr/machine-learning/latest/dg/data-transformations-reference.html

Text: "The quick brown fox jumps over the lazy dog". osb(text,4)

WINDOW,{OSB GENERATED}

"The quick brown fox", {The_quick, The__brown, The___fox}

"quick brown fox jumps", {quick_brown, quick__fox, quick___jumps}

"brown fox jumps over", {brown_fox, brown__jumps, brown___over}

"fox jumps over the", {fox_jumps, fox__over, fox___the}

"jumps over the lazy", {jumps_over, jumps__the, jumps___lazy}

"over the lazy dog", {over_the, over__lazy, over___dog}

"the lazy dog", {the_lazy, the__dog}

"lazy dog", {lazy_dog}


* Lowercase and Punctuation


- Lower Case Transformation converts text to lowercase

  : Usage : lowercase(textFeature)

  : Example: "The Quick Brown Fox Jumps Over the Lazy Dog" ->  "the quick brown fox jumps over the lazy dog"

- Remove punctuation Transformation - removes punctuations at word boundaries

  : Usage: nopunct(textFeature)

  : Example: "Customer Number: 123. Ord-No: AB1235" will be by default tokenized as

    {"Customer","Number:","123.","Ord-No:","AB1235"}

  : With the nopunct transformation -> {"Customer","Number","123","Ord-No","AB1235"}

  : Note: only prefix and suffix punctuation is removed. Embedded punctuation is not removed (e.g. the hyphen in "Ord-No")

  

85. Numeric Transformation - Quantile Binning


* Quantile Binning Transformation (Numeric)

- Used for converting a numeric value into a categorical bin number

- Usage: quantile_bin(numericFeature, n), where n is the number of bins

- AWS ML uses this information to establish n bins of equal size based on the distribution of all values of the specified numeric feature.

- It then maps incoming numericFeature value to corresponding bin and outputs bin number as categorical value

- AWS ML Recommendation: In some cases, the relationship between a numeric variable and the target is not linear ... binning might be useful in those scenarios

- We actually saw binning improve predictive accuracy with the Iris dataset (a small sketch of quantile binning follows)
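A pandas sketch of the idea behind quantile binning (the values are made up; AWS ML chooses the bin boundaries for you from the data distribution):

import pandas as pd

values = pd.Series([3, 7, 12, 18, 25, 31, 44, 58, 72, 95])
bins = pd.qcut(values, q=4, labels=False)   # 4 bins of roughly equal size

print(bins.tolist())                        # each numeric value becomes a categorical bin index 0..3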


86. Numeric Transformation - Normalization


Normalization Transformation (Numeric)


- When there are very large differences in the magnitude of features, features with a large magnitude can dominate the model

- Example: We saw this in the Quadratic Extra Features dataset

- Normalization is a process of transforming features to have a mean of 0 and variance of 1. This will ensure all features have a similar scale.

  : Example: Feature_normalized = (feature - mean) / sigma

    where mean = mean of feature x and sigma = standard deviation of feature x

  : Usage: normalize(numericFeature)

- Optimization algorithm may also converge faster with normalized features compared to features that have very large scale differences


87. Cartesian Product Transformation - Categorical and Text


* Cartesian Product Transformation (Categorical, Text)

- Cartesian transformation generates permutations of two or more text and categorical input variables

- For example: Season and Hour combined may have stronger influence on bike rentals. Instead of treating these two as separate features, we can create a new feature Season_Hour that will combine these values.

- Usage: cartesian(feature1, feature2)

- Combined features may be able to more accurately relate to the target attribute (a small sketch follows)
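A small pandas sketch of the idea (the feature values are hypothetical):

import pandas as pd

df = pd.DataFrame({"season": ["summer", "summer", "winter"],
                   "hour": ["08", "17", "08"]})
df["season_hour"] = df["season"] + "_" + df["hour"]   # combined categorical feature, e.g. "summer_08"

print(df)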



88. Summary

Data Transformation



Section 14 Hyper Parameters, Model Optimization and Lifecycle

Hyper Parameters allow you to control the model building process and quality


90. Data Rearrangement, Maximum model Size, passes, Shuffle Type




93. Improving Model Quality

Optimizing Model

- To improve a model, the following are some options:

  : Add more training examples

  : Add more relevant features

  : Model hyperparameter tuning

- Quality Metrics of Training Data and Evaluation Data can provide important clues to improve model performance



94. Model Maintenance


- Models may need to be periodically rebuilt or updated to 

  : Keep in-sync with new patterns

  : Support new more relevant features

  : Support new classes - in multi-class problems

  : Changes in assumptions or distribution of data that was used to train model

  : Changes to cut-off threshold

  Example: Home price changes month to month depending on several factors

- Have a plan to evaluate the model with new data periodically. Example: Weekly, Monthly, Quarterly

- Models are probabilistic in nature...

  : Binary Class - Provides bestAnswer(1 or 0) and a raw prediction score. Cut-off score is configurable

  : Multi Class - Provides prediction score for each class. It can be interpreted as probability of observation belonging to the class. Class with highest score is the best answer

  : Regression : Provides a score that contains raw numeric prediction of the target attribute.

- When models are changed, predicted results would also change - Quality metrics like AUC, F1 Score, RMSE can be used to determine whether to go ahead with proposed model change


95. AWS Machine Learning System Limits

- AWS ML imposes certain limits to ensure robust and reliable service

- Some are soft limits and can be increased by contacting AWS Customer Service

- Size of each observation: 100KB

- Size of training data: 100GB

- Size of batch prediction input: 1TB (single-file limit; can be overcome by splitting the input into more batch files)

- No. of records per batch file: 100 million

- No. of variables/features: 10,000

- Throughput per second for realtime prediction: 200 requests/second

- Max Number of classes per multi-class model: 100


96. AWS Machine Learning Pricing


- Data Analysis and Model Building Fee - $0.42 per Hour of building time

  : Number of compute hours required for data analysis, model training and evaluation

  : Depends on size of input data, attributes, types of transformations applied

- Predictions Fees

  : Batch predictions - $0.10 per 1,000 predictions, rounded to the nearest 1,000

  : Real-time predictions - $0.0001 per prediction + Capacity reservation charge of $0.001 per hour for each 10MB provisioned for your model
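A back-of-the-envelope example using the prices listed above (illustrative only; always check current AWS pricing):

build_hours = 3
batch_predictions = 250_000

build_cost = build_hours * 0.42                     # data analysis and model building
batch_cost = (batch_predictions / 1_000) * 0.10     # rounded to the nearest 1,000 predictions

print(f"Model building: ${build_cost:.2f}, batch predictions: ${batch_cost:.2f}")   # $1.26 and $25.00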

  

Section 15 Integration of AWS Machine Learning With Your Application


98. Introduction


AWS ML Integration


- Speed!

  : Turn your ideas into cool products in a matter of days

  : Traditional approach would require months

  

- Highly scalable, secure service with redundancy built-in

  : Scale automatically to train model with very large datasets

  : Scale automatically to support high volume prediction needs

  : Real-time prediction with capacity reservation

  : Secure - Limit access to Authenticated and Authorized services and users

  

- Serverless!


- Software Integration

  : AWS Machine Learning - Complete functionality is accessible through SDK and Command Line Interfaces

  : Model building and Prediction can be fully automated using SDK

  : AWS SDKs in multiple languages - Python, Java, .NET, JavaScript, Ruby, C++, ....

  : Complete list of languages: https://aws.amazon.com/tools/ (a minimal Python example follows)
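Below is a minimal boto3 sketch of calling the Amazon Machine Learning API from Python; the model ID, endpoint, and feature names are placeholders, not real resources:

import boto3

client = boto3.client("machinelearning", region_name="us-east-1")

response = client.predict(
    MLModelId="ml-EXAMPLEMODELID",                # hypothetical model ID
    Record={"season": "summer", "hour": "08"},    # feature name/value pairs, all as strings
    PredictEndpoint="https://realtime.machinelearning.us-east-1.amazonaws.com",
)
print(response["Prediction"])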

  

99. Integration Scenarios


Connectivity and Security Options


- Your Data Center -> AWS ML Cloud Service

  : Security: Key Based Authentication + IAM Policy + SSL

- AWS Hosted Application -> AWS ML Cloud Service

  : Security : IAM Role + SSL

- Browser, Apps on Phone -> AWS ML Cloud Service

  : Option 1: AWS Cognito Based Authentication + IAM Role + SSL

  : Choice of authentication providers: Cognito, Google, Amazon, Facebook, Twitter, OpenID, custom

  : Option 2 : Key Based Authentication + IAM Policy + SSL


100. Security using IAM


Users belong to the AWS root account. Cognito users are application-level users. The application belongs to the AWS root account.




Troubleshooting









Troubleshooting AWS DeepRacer Issues


Here you'll find troubleshooting tips for frequently asked questions as well as late-coming bug fixes.








How to Switch AWS DeepRacer Compute Module Power Source from Battery to Power Outlet?


If the compute module battery level is low when you set up your AWS DeepRacer for the first time, follow the steps below to switch the compute power supply from the battery to a power outlet:




  1. Unplug the USB-C cable from the vehicle's compute power port.





2. Attach the AC power cord and the USB-C cable to the compute module power adapter (A). Plug the power cord into a power outlet (C) and plug the USB-C cable into the vehicle's compute module power port (B).









How to Connect Your AWS DeepRacer to Your Wi-Fi Network?


To use your AWS DeepRacer, you must connect the vehicle to your home or office Wi-Fi network. To connect the vehicle to your Wi-Fi network, follow the steps below:



  1. Have a USB flash drive on hand.

  2. Plug in the USB flash drive to your computer.

  3. Open a web browser on your computer and navigate to https://d1.awsstatic.com/deepracer/wifi-creds.txt to download the Wi-Fi configuration file and copy it to the USB drive.

  4. Open the Wi-Fi configuration file in a text editor and type the name (SSID) and password of your Wi-Fi network in the corresponding fields.

  5. Eject the USB drive from your computer and then plug it into the USB port on the back of the vehicle.





6. Watch for the Wi-Fi LED on the vehicle to blink and then turn blue. The vehicle is now connected to the Wi-Fi network. Unplug the USB drive and skip the next step.


7. If the Wi-Fi LED turns red after blinking, unplug the USB drive from the vehicle. Plug the USB drive back into your computer, verify that the configuration contains the correct network name and password, correct any mistakes or typos, and save the file. Then repeat Step 5.



How to Charge the AWS DeepRacer Drive Module Battery?


Follow the steps below to charge your AWS DeepRacer drive module battery:



  1. Optionally, remove the drive module battery from the vehicle.

  2. Attach the battery charger to the battery, as depicted below:


3. Plug the power cord of battery charger into a power outlet.



How to Charge the AWS DeepRacer Compute Module Battery?


Follow the steps below to charge your AWS DeepRacer compute module battery:




  1. Optionally remove the compute module battery from the vehicle.

  2. Attach the compute power charger to the compute module battery.

  3. Plug the power cord of the compute battery charger into a power outlet.




How to Maintain Vehicle's Wi-Fi Connection?


The following troubleshooting guide provides you tips for maintaining your vehicle's connection.





How to Troubleshoot the Wi-Fi Connection if the Vehicle's Wi-Fi LED Indicator Flashes Blue, Then Turns Red for Two Seconds, and Finally Turns Off?



Check the following to verify you have the valid Wi-Fi connection settings.




  • Verify that the USB drive has only one disk partition with only one wifi-creds.txt file on it. If multiple wifi-creds.txt files are found, all of them will be processed in the order they were found, which may lead to unpredictable behavior.

  • Verify the Wi-Fi network's SSID and password are correctly specified in wifi-creds.txt file. An example of this file is shown as follows:



###################################################################################
#                                   AWS DeepRacer                                 #
# File name: wifi-creds.txt                                                       #
#                                                                                 # 
# ...                                                                             #
###################################################################################

# Provide your SSID and password below
ssid: ' MyHomeWi-Fi'
password: myWiFiPassword


  • Verify both the field names of ssid and password in the wifi-creds.txt file are in lower case.

  • Verify that each field name and value are separated by one colon (:). For example: ssid: ' MyHomeWi-Fi'

  • Verify that a field value containing a space is enclosed in a pair of single quotes. On Mac, TextEdit or some other text editors may display curly quotes; make sure the quotes appear in the straight '...' form, not the curly ‘...’ form. If the field value does not contain spaces, the value can be given without single quotes.

What Does It Mean When the Vehicle's Wi-Fi or Power LED Indicator Flashes Blue?



If the USB drive contains wifi-creds.txt file, the Wi-Fi LED indicator flashes blue while the vehicle is attempting to connect to the Wi-Fi network specified in the file.




If the USB drive has the models directory, the Power LED flashes blue while the vehicle is attempting to load the model files inside the directory.




If the USB drive has both the wifi-creds.txt file and the models directory, the vehicle will process the two sequentially, starting with an attempt to connect to Wi-Fi and then loading models.




The Wi-Fi LED might also turn red for two seconds if the Wi-Fi connection attempt fails.




How Can I Connect to Vehicle's Device Console Using its Hostname?



When connecting to the vehicle's device console using its hostname, make sure you type https://hostname.local in the browser, where the hostname value (of the AMSS-1234 format) is printed on the bottom of the AWS DeepRacer vehicle.




How to Connect to Vehicle's Device Console Using its IP Address?



To connect to the device console using IP address as shown in the device-status.txt file (found on the USB drive), make sure the following conditions are met.




  • Check your laptop or mobile devices are in the same network as the AWS DeepRacer vehicle.

  • Check whether you are connected to any VPN; if so, disconnect first.

  • Try a different Wi-Fi network. For example, turn on personal hotspot on your phone.





Document History for AWS DeepRacer Developer Guide

  • API version: latest

  • Latest documentation update: November 28, 2018

Change: AWS DeepRacer Developer Guide
Description: Initial release of the documentation to help the AWS DeepRacer user learn reinforcement learning and explore its applications for autonomous racing, using the AWS DeepRacer console, the AWS RoboMaker simulator, and an AWS DeepRacer scale model vehicle.
Date: November 28, 2018










