This blog is where I, as a working developer, organize the new technologies and information I come across in the field. I have been fortunate to work as a consultant on projects for large companies in the US, so I get many opportunities to encounter new technologies. I would like to share information about the tools used in US IT projects with many of you.
솔웅



Udemy - AWS Machine Learning, AI, SageMaker - With Python




Summary


Section 3 Linear Regression


23. Summary


The Squared Loss Function is parabolic in nature. It has the important property of not only telling us the loss at a given weight, but also which way to move to minimize the loss


The Gradient Descent optimization algorithm uses the loss function to move the weights of all the features, iteratively adjusting them until the optimal value is reached


Batch Gradient Descent predicts the y value for all training examples and then adjusts the weights based on the loss. It can converge much more slowly when the training set is very large. Training set order does not matter, as every single example in the training set is considered before adjustments are made.


Stochastic Gradient Descent predicts the y value for the next training example and immediately adjusts the weights. It can converge faster when the training set is very large. The training set should be in random order, otherwise the model will not learn correctly. AWS ML uses Stochastic Gradient Descent
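The two descent strategies above can be sketched with a toy one-feature, noise-free dataset (plain Python, not AWS ML's actual implementation; the learning rate and epoch count are arbitrary choices):

```python
import random

# Toy data: y = 2x exactly, single feature, bias omitted for brevity
data = [(x, 2.0 * x) for x in range(1, 11)]

def batch_gradient_descent(data, lr=0.01, epochs=100):
    # Looks at ALL examples before each weight adjustment
    w = 0.0
    for _ in range(epochs):
        grad = sum((w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def stochastic_gradient_descent(data, lr=0.01, epochs=100, seed=0):
    # Adjusts the weight immediately after EACH example;
    # shuffling matters, otherwise ordering bias can hurt learning
    w = 0.0
    rng = random.Random(seed)
    examples = list(data)
    for _ in range(epochs):
        rng.shuffle(examples)
        for x, y in examples:
            w -= lr * (w * x - y) * x
    return w

print(round(batch_gradient_descent(data), 3))       # converges toward 2.0
print(round(stochastic_gradient_descent(data), 3))  # converges toward 2.0
```

Both recover the true weight here; the difference shows up in how often the weight is updated per pass over the data.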



Section 4 AWS - Linear Regression Models


27. Concept - How to evaluate regression model accuracy?


Linear Regression - Residuals


- AWS ML Console provides a histogram that shows the distribution of examples that were overestimated and underestimated, and to what extent

- Available as "explore model performance" option under Evaluation -> Summary

- Ideal: Over/Under estimation should be a normal curve centered at 0.

- Structural Issue: When you observe the vast majority of examples falling on one side. Adding more relevant features can help remedy the situation.


31. Model Performance Summary and Conclusion

RMSE (Root Mean Square Error) is the evaluation metric for Linear Regression. The smaller the RMSE, the better the predictive accuracy of the model. A perfect model would have an RMSE of 0.
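RMSE is simple to compute by hand (a minimal sketch with made-up values, not tied to AWS ML's evaluation internals):

```python
import math

def rmse(actual, predicted):
    # Root Mean Square Error: mean of the squared residuals, then square root
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

print(rmse([3.0, 5.0, 7.0], [3.0, 5.0, 7.0]))  # 0.0 -- a perfect model
print(rmse([3.0, 5.0, 7.0], [2.0, 5.0, 9.0]))  # ≈ 1.291
```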


To prepare data for AWS ML, the data must be in one of:

1. CSV file available in S3

2. AWS Redshift Datawarehouse

3. AWS Relational Database Service (RDS) MySQL DB


Batch Prediction results are stored by AWS ML to S3 in the specified bucket


We pulled the data from S3 to local folder and plotted them


Based on the distribution of data, AWS ML suggests a recipe for processing data.

In case of numeric features, it may suggest binning the data instead of treating it as raw numeric

For this example, treating x as numeric provided best results






Section 5 Adding Features To Improve Model


35. Summary

1. Underfitting occurs when the model does not accurately capture the relationship between features and target

2. Underfitting would cause large training errors and evaluation errors   

  Training RMSE: 385.1816, Evaluation RMSE: 257.8979, Baseline RMSE: 437.311

3. Evaluation Summary - The prediction overestimation/underestimation histogram in the AWS ML console provides important clues about how the model is behaving; under-estimation and over-estimation need to be balanced and centered around 0

4. Box plot also highlights distribution differences between predicted and actual values

5. To address underfitting, add higher order polynomials or more relevant features to capture complex relationship

  Training RMSE: 132.2032, Evaluation RMSE: 63.6847, Baseline RMSE: 437.311

6. When working with datasets containing 100s or even 1000s of features, it is important to rely on these metrics and distributions to gain insight into model performance



Section 6 Normalization


37. Concept: Normalization to smoothen magnitude differences


Normalization Transformation (Numeric)

- When there are very large differences in magnitude of features, features that have large magnitude can dominate the model

- Example : We saw this in Quadratic Extra Features dataset

- Normalization is a process of transforming features to have a mean of 0 and variance of 1. This will ensure all features have similar scale

  : Feature normalized = (feature - mean) / (sigma)

    where,

mean = mean of feature x

sigma = standard deviation of feature x

  : Usage : normalize(numericFeature)

- Optimization algorithm may also converge faster with normalized features compared to features that have very large scale differences
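The transformation above is easy to verify in plain Python (illustrative sketch; the feature values are made up):

```python
def normalize(values):
    # z-score: subtract the mean, divide by the standard deviation
    mean = sum(values) / len(values)
    sigma = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / sigma for v in values]

# A feature with large magnitude is rescaled to mean 0, variance 1
scaled = normalize([100.0, 200.0, 300.0, 400.0])
print(scaled)  # symmetric around 0, on a unit scale
```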


39. Summary

1. Having a lot of features and complex features can help improve prediction accuracy

2. When feature ranges are orders of magnitude different, they can dominate the outcome. Normalization is a process of transforming features to have a mean of 0 and variance of 1. This will ensure all features have similar scale.

3. Without Normalization:

  Training RMSE: 83973.66, Evaluation RMSE: 158260.62, Baseline RMSE: 437.31

4. With Normalization:

  Training RMSE: 72.35, Evaluation RMSE: 51.7387, Baseline RMSE: 437.31

5. Normalization can be easily enabled using AWS ML Transformation recipes



Section 7 Adding Complex Features

46. Summary

Adding polynomial features allows us to fit more complex shapes


To add polynomial features that combine all input features, use the scikit-learn library. Anaconda includes these modules by default
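What that expansion produces can be sketched in plain Python (this mirrors scikit-learn's PolynomialFeatures output order with the bias column dropped; the two-feature row is made up):

```python
from itertools import combinations_with_replacement

def polynomial_features(row, degree):
    # Every product of input features up to the given degree,
    # in the same order scikit-learn emits them (without the bias column)
    out = []
    for d in range(1, degree + 1):
        for combo in combinations_with_replacement(range(len(row)), d):
            term = 1.0
            for i in combo:
                term *= row[i]
            out.append(term)
    return out

# Two features x1=2, x2=3 at degree 2 -> [x1, x2, x1^2, x1*x2, x2^2]
print(polynomial_features([2.0, 3.0], 2))  # [2.0, 3.0, 4.0, 6.0, 9.0]
```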


We saw good performance with degree 4; additional features may bring incremental improvement, but with the added complexity of managing more features.


1. Model Degree 1 Training RMSE:0.5063, Evaluation RMSE:0.4308, Baseline RMSE:0.689

2. Model Degree 4 Training RMSE:0.2563, Evaluation RMSE:0.1493, Baseline RMSE:0.689

3. Model Degree 15 Training RMSE:0.2984, Evaluation RMSE:0.1222, Baseline RMSE:0.689




Section 8 Kaggle Bike Hourly Rental Prediction


50. Linear Regression Wrapup and Summary


AWS ML - Linear Regression

* Linear Model

* Gradient Descent and Stochastic Gradient Descent

* Squared Error Loss Function

* AWS ML Training, Evaluation, Interactive Prediction, Batch Prediction

* Prediction Quality

  - RMSE

  - Residual Histograms

* Data visualization

* Normalization

* Higher order polynomials



Section 9 - Logistic Regression Models



In short: Linear Regression gives continuous output, i.e. any value within a range of values. ... GLMs (Generalized Linear Models) do not assume a linear relationship between dependent and independent variables. However, in the logit model they assume a linear relationship between the link function and the independent variables.


https://stackoverflow.com/questions/12146914/what-is-the-difference-between-linear-regression-and-logistic-regression


https://techdifferences.com/difference-between-linear-and-logistic-regression.html


58. Summary

Binary Classifier : Predicts positive class probability of an observation

The Logistic or Sigmoid function has an important property: its output is between 0 and 1 for any input. This output is used by binary classifiers as the probability of the positive class.
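A quick sketch of the sigmoid and its squashing property (plain Python):

```python
import math

def sigmoid(z):
    # Squashes any real input into (0, 1): usable as a positive-class probability
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(0))   # 0.5 -- the usual 0.5 cut-off sits at z = 0
print(sigmoid(6))   # ≈ 0.9975, confidently positive
print(sigmoid(-6))  # ≈ 0.0025, confidently negative
```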

True Positive - Samples that are actual-positives correctly predicted as positive

True Negative - Samples that are actual-negatives correctly predicted as negative

False Negative - Samples that are actual-positives incorrectly predicted as negative

False Positive - Samples that are actual-negatives incorrectly predicted as positive

The Logistic Loss Function is convex in nature. It has the important property of not only telling us the loss at a given weight, but also which way to move to minimize the loss

The Gradient Descent optimization algorithm uses the loss function to move the weights of all the features, iteratively adjusting them until the optimal value is reached

Batch Gradient Descent predicts the y value for all training examples and then adjusts the weights based on the loss. It can converge much more slowly when the training set is very large. Training set order does not matter, as every single example in the training set is considered before adjustments are made.

Stochastic Gradient Descent predicts the y value for the next training example and immediately adjusts the weights. It can converge faster when the training set is very large. The training set should be in random order, otherwise the model will not learn correctly. AWS ML uses Stochastic Gradient Descent




Section 10 


62. Classification Metrics

True Positive = count(model correctly predicted positives). Students who passed exam correctly classified as pass.

True Negative = count (model correctly predicted negatives). Students who failed exam correctly classified as fail.

False Positive = count (model misclassified negative as positive). Students who failed exam incorrectly classified as pass.

False Negative = count (model misclassified positive as negative). Students who passed exam incorrectly classified as fail.


* True Positive Rate, Recall, Probability of detection - Fraction of positives predicted correctly. Larger value indicates better predictive accuracy.


TPR = True Positive / Actual Positive


* False Positive Rate, probability of false alarm - Fraction of negatives predicted as positive. Smaller value indicates better predictive accuracy


FPR = False Positive / Actual Negative


* Precision - Fraction of true positive among all predicted positive. Larger value indicates better predictive accuracy


Precision = True Positive / Predicted Positive


* Accuracy - Fraction of correct predictions. Larger value indicates better predictive accuracy

Accuracy = (True Positive + True Negative) / n

where n is the total number of examples
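The four metrics follow directly from the confusion-matrix counts (illustrative sketch; the student pass/fail numbers are invented):

```python
def classification_metrics(tp, tn, fp, fn):
    # Derives the standard metrics from the four confusion-matrix counts
    return {
        "tpr":       tp / (tp + fn),            # recall / probability of detection
        "fpr":       fp / (fp + tn),            # probability of false alarm
        "precision": tp / (tp + fp),
        "accuracy":  (tp + tn) / (tp + tn + fp + fn),
    }

# 40 passes correctly flagged, 50 fails correctly flagged,
# 5 fails flagged as pass, 5 passes flagged as fail
m = classification_metrics(tp=40, tn=50, fp=5, fn=5)
print(m)  # tpr ≈ 0.889, fpr ≈ 0.091, precision ≈ 0.889, accuracy = 0.9
```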


63. Classification Insights with AWS Histograms


Histogram - Binary Classifier


* Positive and Negative histograms

* Interactive tool to test effect of various cut-off thresholds

* Ability to save a threshold for the model

* Available under :

Model -> Evaluation Summary -> Explore Performance

https://docs.aws.amazon.com/machine-learning/latest/dg/binary-model-insights.html


64. Concept: AUC Metric


AUC - Binary Classifier

* Area Under Curve(AUC) metric - 0 to 1. Larger Value indicates better predictive accuracy

* AUC is the area of a curve formed by plotting True Positive Rate against False positive Rate at different cut-off thresholds

* AUC value of 0.5 is baseline and it is considered random-guess

* AUC closer to 1 indicates better predictive accuracy

* AUC closer to 0 indicates the model has learned correct patterns but is flipping predictions (0's are predicted as 1's and vice versa).
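AUC can be sketched via its equivalent rank interpretation: the probability that a randomly chosen positive scores above a randomly chosen negative (plain Python; the labels and scores are made up):

```python
def auc(labels, scores):
    # Rank-based AUC: fraction of positive/negative pairs ranked correctly,
    # ties counted as half a win
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
print(auc(labels, [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]))  # 1.0 -- perfect ranking
print(auc(labels, [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]))  # 0.0 -- flipped predictions
```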


69. Summary

For Binary Classification, Area Under Curve (AUC) is the evaluation metric to assess the quality of model


AUC is the area of a curve formed by plotting True Positive Rate against False Positive Rate at different cut-off thresholds.

* AUC metric closer to 1 indicates highly accurate prediction

* AUC metric 0.5 indicates random guess - Baseline AUC

* AUC metric closer to 0 indicates model has learned from the features, but predictions are flipped


Advanced Metrics

* Accuracy - Fraction of correct predictions. Larger value indicates better predictive accuracy

* True Positive Rate - Probability of detection. Out of all positive, how many were correctly predicted as positive. Larger value indicates better predictive accuracy

* False Positive Rate - Probability of false alarm. Out of all negatives, how many were incorrectly predicted as positive. Smaller value indicates better predictive accuracy.

* Precision - Out of all predicted as positive, how many are true positives? Larger value indicates better predictive accuracy.



Section 11

72 Concept: Evaluating Predictive Quality of Multiclass Classifiers


Multi-class metrics


* F1 Score - Harmonic mean of Recall and Precision. Larger F1 Score indicates better predictive accuracy. Binary metric


F1 Score = (2 × Precision × Recall) / (Precision + Recall)


* Average F1 Score - For multi-class problems, the average of class-wise F1 scores is used for assessing predictive quality


* Baseline F1 Score - Hypothetical model that predicts only the most frequent class as the answer
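A quick sketch of the F1 computation and the multi-class average (the per-class scores are invented):

```python
def f1_score(precision, recall):
    # Harmonic mean: punishes imbalance between precision and recall
    return 2 * precision * recall / (precision + recall)

print(f1_score(0.9, 0.9))  # 0.9 -- balanced precision and recall
print(f1_score(1.0, 0.5))  # ≈ 0.667 -- dragged down by the weaker of the two

# Multi-class: average the per-class F1 scores
per_class = [0.95, 0.90, 0.85]
print(sum(per_class) / len(per_class))  # ≈ 0.9
```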


Concept: Confusion Matrix to Evaluate Predictive Quality


Multiclass - Metrics - Confusion Matrix


* Accessible from Model -> Evaluation Summary -> Explore Model performance


* Concise table that shows percentage and count of correct classification and incorrect classifications


* Visual look at model performance


* Up to 10 classes are shown - listed from most frequent to least frequent


* For more than 10 classes, the first 9 most frequent classes are shown and the 10th class collapses the rest, marked as 'others'

* Option to download confusion matrix


* https://docs.aws.amazon.com/machine-learning/latest/dg/multiclass-model-insights.html



77. Summary


Multi-Class Evaluation Metric

1. F1 Score is a binary classification metric. It is the harmonic mean of precision and recall

F1 Score = (2 × Precision × Recall) / (Precision + Recall)

Higher F1 Score reflects better predictive accuracy

2. Multi-Class Evaluation

Average of class wise F1 Score

3. Baseline F1 Score = Hypothetical model that predicts only most frequent class as the answer

4. Visualization - Confusion Matrix - Available on AWS ML Console

Matrix. Rows = true class. Columns = predicted class

Cell color - diagonal indicates true class prediction %

Cell color - non-diagonal indicates incorrect prediction %

The last column is the F1 score for that class; the second-to-last column is the true class distribution

Last row is predicted class distribution

Up to 10 classes are shown - listed from most frequent to least frequent

For more than 10 classes, the first 9 most frequent classes are shown and the 10th class collapses the rest, marked as 'others'

You can download the confusion matrix through the Explore Performance page under Evaluations


Prediction Summary

1. Eval with default recipe settings. Average F1 score: 0.905

2. Eval with numeric recipe settings: Average F1 score: 0.827

3. Batch prediction Results (predict all 150 example outcome)

  a. With default recipe settings: Average F1 Score: 0.973

  b. With numeric recipe settings: Average F1 Score: 0.78

4. Classification was better with binning. Versicolor classification was impacted when numeric setting was used

5. Higher F1 Score implies better prediction accuracy



Section 12 Text Based Classification with AWS Twitter Dataset


78. AWS Twitter Feed Classification for Customer Service

https://github.com/aws-samples/machine-learning-samples/tree/master/social-media 


79. Lab: Train, Evaluate Model and Assess Predictive Quality, 80. Lab: Interactive Prediction with AWS

- Practice


81. Logistic Regression Summary


AWS ML - Logistic Regression

- Linear Model

- Logistic/Sigmoid Function to produce a probability

- Stochastic Gradient Descent

- Logistic Loss function

- AWS ML Training, Evaluation, Interactive Prediction, Batch Prediction

- Prediction Quality 

  : TPR

  : FPR

  : Accuracy

  : Precision

  : AUC Metrics

  : F1 Score

  : Average F1 Score for multi-class

- Data visualization

- Text Processing

- Normalization

- Higher order polynomials




Section 13


82. Recipe Overview


Recipe

- Recipe is a set of instructions for pre-processing data

- Recipe is a JSON-like document

- Consists of three parts: Groups, Assignments, Outputs

- Groups - Groups are collections of features to which similar transformations need to be applied

  : Built-in Groups: ALL_TEXT, ALL_NUMERIC, ALL_CATEGORICAL, ALL_BINARY

  : Define your own groups

- Assignments - Enable creation of new features derived from existing ones

- Outputs - List features used for learning process and optionally apply transformation


Recipe is automatically applied to training data, evaluation data and to data submitted through real-time and batch prediction APIs


83. Recipe Example
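The course slide isn't reproduced here, but a recipe document generally has this three-part shape (an illustrative sketch; the group, assignment, and feature names are invented):

```json
{
  "groups": {
    "NUMERIC_TO_BIN": "group('rawNumericFeature')",
    "TEXT_FIELDS": "group(ALL_TEXT)"
  },
  "assignments": {
    "binned_feature": "quantile_bin('rawNumericFeature', 200)"
  },
  "outputs": [
    "ALL_CATEGORICAL",
    "normalize(ALL_NUMERIC)",
    "ngram(TEXT_FIELDS, 2)",
    "binned_feature"
  ]
}
```

Groups name feature collections, assignments derive new features, and outputs list what (optionally transformed) actually feeds the learner.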


84. Text Transformation


* N-gram Text Transformation

- Tokenizes input text and combines the tokens into a sliding window of n words, where n is specified in the recipe

- Usage: ngram(textFeature, n), where n is the size

- By default all text data is tokenized with n=1

  : Example: "Customer requests urgent response" text is tokenized as {"Customer", "requests", "urgent", "response"}

- With n=2, it generates one word and two word combinations

  : {"Customer requests", "requests urgent", "urgent response", "Customer", "requests", "urgent", "response"}

- N-grams of sizes up to 10 are supported

- N-gram breaks text at whitespace; punctuation is considered part of the word

- You can remove punctuation using the no_punct transformation
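A plain-Python sketch of the n-gram transformation described above (tokenizing at whitespace and emitting windows of 1..n words; the exact token format AWS uses internally may differ):

```python
def ngram(text, n):
    # Tokenize at whitespace (punctuation stays attached), then emit every
    # contiguous window of 1..n tokens, joined with a space
    tokens = text.split()
    grams = []
    for size in range(1, n + 1):
        for i in range(len(tokens) - size + 1):
            grams.append(" ".join(tokens[i:i + size]))
    return grams

print(ngram("Customer requests urgent response", 2))
# ['Customer', 'requests', 'urgent', 'response',
#  'Customer requests', 'requests urgent', 'urgent response']
```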


* OSB Text Transformation

- Orthogonal Sparse Bigram (OSB) Transformation provides more word combinations compared to n-gram

- Usage: osb(textFeature, size)

- Inserts one underscore for the word boundary, plus one additional underscore for every word skipped

- For example (AWS Document provided sample).

https://docs.aws.amazon.com/ko_kr/machine-learning/latest/dg/data-transformations-reference.html

Text: "The quick brown fox jumps over the lazy dog". osb(text,4)

WINDOW,{OSB GENERATED}

"The quick brown fox", {The_quick, The__brown, The___fox}

"quick brown fox jumps", {quick_brown, quick__fox, quick___jumps}

"brown fox jumps over", {brown_fox, brown__jumps, brown___over}

"fox jumps over the", {fox_jumps, fox__over, fox___the}

"jumps over the lazy", {jumps_over, jumps__the, jumps___lazy}

"over the lazy dog", {over_the, over__lazy, over___dog}

"the lazy dog", {the_lazy, the__dog}

"lazy dog", {lazy_dog}
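The table above can be reproduced with a short sketch (plain Python; window handling follows the AWS-documented example):

```python
def osb(text, size):
    # Orthogonal Sparse Bigram: slide a window over the tokens; pair the first
    # word of each window with every later word, underscores marking the gap
    tokens = text.split()
    pairs = []
    for i in range(len(tokens) - 1):
        window = tokens[i:i + size]
        for j in range(1, len(window)):
            pairs.append(window[0] + "_" * j + window[j])
    return pairs

pairs = osb("The quick brown fox jumps over the lazy dog", 4)
print(pairs[:3])  # ['The_quick', 'The__brown', 'The___fox']
print(pairs[-1])  # 'lazy_dog' -- windows shrink at the end of the text
```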


* Lowercase and Punctuation


- Lower Case Transformation converts text to lowercase

  : Usage : lowercase(textFeature)

  : Example: "The Quick Brown Fox Jumps Over the Lazy Dog" ->  "the quick brown fox jumps over the lazy dog"

- Remove punctuation Transformation - removes punctuations at word boundaries

  : Usage: nopunct(textFeature)

  : Example: "Customer Number: 123. Ord-No: AB1235" will be by default tokenized as

    {"Customer","Number:","123.","Ord-No:","AB1235"}

  : With nopunct transformation -> {"Customer","Number","123","Ord-No","AB1235"}

  : Note: only prefix and suffix punctuation is removed. Embedded punctuation is not removed ("Ord-No")
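Both behaviors are easy to sketch (Python's string.punctuation stands in for whatever punctuation set AWS actually uses):

```python
import string

def lowercase(text):
    # Lower Case Transformation: plain lowercasing
    return text.lower()

def no_punct(text):
    # Strip punctuation only at token boundaries; embedded punctuation survives
    return [tok.strip(string.punctuation) for tok in text.split()]

print(lowercase("The Quick Brown Fox Jumps Over the Lazy Dog"))
# 'the quick brown fox jumps over the lazy dog'

print(no_punct("Customer Number: 123. Ord-No: AB1235"))
# ['Customer', 'Number', '123', 'Ord-No', 'AB1235'] -- the hyphen survives
```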

  

85. Numeric Transformation - Quantile Binning


* Quantile Binning Transformation (Numeric)

- Used for converting a numeric value into a categorical bin number

- Usage: quantile_bin(numericFeature, n), where n is the number of bins

- AWS ML uses this information to establish n bins of equal size based on the distribution of all values of the specified numeric feature.

- It then maps incoming numericFeature value to corresponding bin and outputs bin number as categorical value

- AWS ML Recommendation: In some cases, the relationship between a numeric variable and the target is not linear... binning might be useful in those scenarios

- We actually saw binning improve predictive accuracy with the Iris Dataset
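A minimal sketch of equal-size quantile binning (plain Python; AWS ML's actual bin-boundary algorithm may differ):

```python
def quantile_bin(values, n):
    # Build n bins with roughly equal numbers of examples, then
    # map each value to its bin number -- a categorical output
    ranked = sorted(values)
    cuts = [ranked[int(len(ranked) * k / n)] for k in range(1, n)]
    def bin_of(v):
        for b, cut in enumerate(cuts):
            if v < cut:
                return b
        return n - 1
    return [bin_of(v) for v in values]

values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(quantile_bin(values, 2))  # [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```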


86. Numeric Transformation - Normalization


Normalization Transformation (Numeric)


- When there are very large differences in magnitude of features, features that have large magnitude can dominate Model

- Example: We saw this in Quadratic Extra Features dataset

- Normalization is a process of transforming features to have a mean of 0 and variance of 1. This will ensure all features have similar scale.

  : Example Feature normalized = (feature - mean)/(sigma)

    where,

mean = mean of feature x

sigma = standard deviation of feature x

  : Usage: normalize(numericFeature)

- Optimization algorithm may also converge faster with normalized features compared to features that have very large scale differences


87. Cartesian Product Transformation - Categorical and Text


* Cartesian Product Transformation (Categorical, Text)

- Cartesian transformation generates permutations of two or more text and categorical input variables

- For example: Season and Hour combined may have stronger influence on bike rentals. Instead of treating these two as separate features, we can create a new feature Season_Hour that will combine these values.

- Usage: cartesian(feature1, feature2)

- Combined features may relate to the target attribute more accurately

Table
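The combination itself is trivial to sketch (illustrative; the season/hour values are made up):

```python
def cartesian(feature1, feature2):
    # Combine two categorical values into a single joint feature
    return f"{feature1}_{feature2}"

# Season and hour jointly may predict bike rentals better than either alone:
# 8am in summer and 8am in winter become distinct categories
print(cartesian("summer", "08"))  # 'summer_08'
print(cartesian("winter", "08"))  # 'winter_08'
```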


88. Summary

Data Transformation



Section: 14 Hyper Parameters, Model Optimization and Lifecycle

Hyper Parameters allow you to control the model building process and quality


90. Data Rearrangement, Maximum model Size, passes, Shuffle Type

Table



93. Improving Model Quality

Optimizing Model

- To improve a model, the following are some options:

  : Add more training examples

  : Add more relevant features

  : Model hyperparameter tuning

- Quality Metrics of Training Data and Evaluation Data can provide important clues to improve model performance



94. Model Maintenance


- Models may need to be periodically rebuilt or updated to 

  : Keep in-sync with new patterns

  : Support new more relevant features

  : Support new class - in multi - class problems

  : Changes in assumptions or distribution of data that was used to train model

  : Changes to cut-off threshold

  Example: Home price changes month to month depending on several factors

- Have a plan to evaluate the model with new data periodically. Example: Weekly, Monthly, Quarterly

- Models are probabilistic in nature...

  : Binary Class - Provides bestAnswer(1 or 0) and a raw prediction score. Cut-off score is configurable

  : Multi Class - Provides prediction score for each class. It can be interpreted as probability of observation belonging to the class. Class with highest score is the best answer

  : Regression : Provides a score that contains raw numeric prediction of the target attribute.

- When models are changed, predicted results would also change - Quality metrics like AUC, F1 Score, RMSE can be used to determine whether to go ahead with proposed model change


95. AWS Machine Learning System Limits

- AWS ML imposes certain limits to ensure robust and reliable service

- Some are soft limits and can be increased by contacting AWS Customer Service

- Size of each observation: 100KB

- Size of training data: 100GB

- Size of batch prediction input: 1TB (single file limit. can be overcome by creating more batch files)

- No. of records per batch file: 100 million

- No. of variables/features: 10,000

- Throughput per second for realtime prediction: 200 requests/second

- Max Number of classes per multi-class model: 100


96. AWS Machine Learning Pricing


- Data Analysis and Model Building Fee - $0.42 per Hour of building time

  : Number of computer hours required for data analysis, model training and evaluation

  : Depends on size of input data, attributes, types of transformations applied

- Predictions Fees

  : Batch predictions - $0.10 per 1,000 predictions, rounded up to the nearest 1,000

  : Real-time predictions - $0.0001 per prediction + Capacity reservation charge of $0.001 per hour for each 10MB provisioned for your model

  

Section 15 Integration of AWS Machine Learning With Your Application


98. Introduction


AWS ML Integration


- Speed!

  : Turn your ideas into cool products in a matter of days

  : Traditional approach would require months

  

- Highly scalable, secure service with redundancy built-in

  : Scale automatically to train model with very large datasets

  : Scale automatically to support high volume prediction needs

  : Real-time prediction with capacity reservation

  : Secure - Limit access to Authenticated and Authorized services and users

  

- Serverless!


- Software Integration

  : AWS Machine Learning - Complete functionality is accessible through SDK and Command Line Interfaces

  : Model building and Prediction can be fully automated using SDK

  : AWS SDKs in multiple languages - Python, Java, .NET, Javascript, Ruby, C++, ....

  : Complete list of languages: https://aws.amazon.com/tools/

  

99. Integration Scenarios


Connectivity and Security Options


- Your Data Center -> AWS ML Cloud Service

  : Security: Key Based Authentication + IAM Policy + SSL

- AWS Hosted Application -> AWS ML Cloud Service

  : Security : IAM Role + SSL

- Browser, Apps on Phone -> AWS ML Cloud Service

  : Option 1: AWS Cognito Based Authentication + IAM Role + SSL

  : Choice of authentication providers: Cognito, Google, Amazon, Facebook, Twitter, OpenID, Custom

  : Option 2 : Key Based Authentication + IAM Policy + SSL


100. Security using IAM


Users belong to AWS root account. Cognito Users are application level users. Application belongs to AWS root account.





















* Bash Script


Auto execute scripts when create instance

- Enter script in Advanced Details text box when you create an instance



In this case, the system will execute all of these scripts when the instance is created.

#!/bin/bash

yum update -y

yum install httpd -y

service httpd start

chkconfig httpd on

cd /var/www/html

aws s3 cp s3://mywebsitebucket-changsoo/index.html /var/www/html


: Updates the system, installs Apache, starts the httpd server, and copies index.html from S3 to the instance's /var/www/html folder



* Install PHP and create php page


Enter the scripts below in Advanced Details when you create an instance


#!/bin/bash

yum update -y

yum install httpd24 php56 git -y

service httpd start

chkconfig httpd on

cd /var/www/html

echo "<?php phpinfo();?>" > test.php

git clone https://github.com/acloudguru/s3


Navigate to 'public IP address'/test.php in your browser; you will then see the PHP INFO page




Access to the server through Terminal


1178578-C02NW6G1G3QD:AWS_SSH changsoopark$ ssh ec2-user@54.89.219.112 -i EC2KeyPair.pem.txt 


.....................


[root@ip-172-31-80-161 ec2-user]# cd /var/www/html

[root@ip-172-31-80-161 html]# ls -l

total 8

drwxr-xr-x 3 root root 4096 Oct 12 23:52 s3

-rw-r--r-- 1 root root   19 Oct 12 23:52 test.php

[root@ip-172-31-80-161 html]# 


==> there is a test.php, along with the s3 folder downloaded from A Cloud Guru's GitHub repository



https://docs.aws.amazon.com/aws-sdk-php/v3/guide/getting-started/installation.html



Installing via Composer

Using Composer is the recommended way to install the AWS SDK for PHP. Composer is a dependency management tool for PHP that allows you to declare the dependencies your project needs and installs them into your project.

  1. Install Composer

    curl -sS https://getcomposer.org/installer | php
    
  2. Run the Composer command to install the latest stable version of the SDK:

    php composer.phar require aws/aws-sdk-php
    
  3. Require Composer's autoloader:

    <?php
    require 'vendor/autoload.php';
    

You can find out more on how to install Composer, configure autoloading, and other best-practices for defining dependencies at getcomposer.org.


[root@ip-172-31-80-161 html]# pwd

/var/www/html

[root@ip-172-31-80-161 html]# curl -sS https://getcomposer.org/installer | php

curl: (35) Network file descriptor is not connected

[root@ip-172-31-80-161 html]# curl -sS https://getcomposer.org/installer | php

All settings correct for using Composer

Downloading...


Composer (version 1.5.2) successfully installed to: /var/www/html/composer.phar

Use it: php composer.phar


[root@ip-172-31-80-161 html]# php composer.phar require aws/aws-sdk-php

Do not run Composer as root/super user! See https://getcomposer.org/root for details

Using version ^3.36 for aws/aws-sdk-php

./composer.json has been created

Loading composer repositories with package information

Updating dependencies (including require-dev)

Package operations: 6 installs, 0 updates, 0 removals

  - Installing mtdowling/jmespath.php (2.4.0): Downloading (100%)         

  - Installing psr/http-message (1.0.1): Downloading (100%)         

  - Installing guzzlehttp/psr7 (1.4.2): Downloading (100%)         

  - Installing guzzlehttp/promises (v1.3.1): Downloading (100%)         

  - Installing guzzlehttp/guzzle (6.3.0): Downloading (100%)         

  - Installing aws/aws-sdk-php (3.36.26): Downloading (100%)         

guzzlehttp/guzzle suggests installing psr/log (Required for using the Log middleware)

aws/aws-sdk-php suggests installing aws/aws-php-sns-message-validator (To validate incoming SNS notifications)

aws/aws-sdk-php suggests installing doctrine/cache (To use the DoctrineCacheAdapter)

Writing lock file

Generating autoload files

[root@ip-172-31-80-161 html]# ls -l

total 1844

-rw-r--r-- 1 root root      62 Oct 13 00:04 composer.json

-rw-r--r-- 1 root root   12973 Oct 13 00:04 composer.lock

-rwxr-xr-x 1 root root 1852323 Oct 13 00:04 composer.phar

drwxr-xr-x 3 root root    4096 Oct 12 23:52 s3

-rw-r--r-- 1 root root      19 Oct 12 23:52 test.php

drwxr-xr-x 8 root root    4096 Oct 13 00:04 vendor

[root@ip-172-31-80-161 html]# cd vendor

[root@ip-172-31-80-161 vendor]# ls -l

total 28

-rw-r--r-- 1 root root  178 Oct 13 00:04 autoload.php

drwxr-xr-x 3 root root 4096 Oct 13 00:04 aws

drwxr-xr-x 2 root root 4096 Oct 13 00:04 bin

drwxr-xr-x 2 root root 4096 Oct 13 00:04 composer

drwxr-xr-x 5 root root 4096 Oct 13 00:04 guzzlehttp

drwxr-xr-x 3 root root 4096 Oct 13 00:04 mtdowling

drwxr-xr-x 3 root root 4096 Oct 13 00:04 psr

[root@ip-172-31-80-161 vendor]# vi autoload.php


<?php


// autoload.php @generated by Composer


require_once __DIR__ . '/composer/autoload_real.php';


return ComposerAutoloaderInit818e4cd87569a511144599b49f2b1fed::getLoader();






* Using the PHP to access to S3



[root@ip-172-31-80-161 s3]# ls -l

total 24

-rw-r--r-- 1 root root 796 Oct 12 23:52 cleanup.php

-rw-r--r-- 1 root root 195 Oct 12 23:52 connecttoaws.php

-rw-r--r-- 1 root root 666 Oct 12 23:52 createbucket.php

-rw-r--r-- 1 root root 993 Oct 12 23:52 createfile.php

-rw-r--r-- 1 root root 735 Oct 12 23:52 readfile.php

-rw-r--r-- 1 root root 193 Oct 12 23:52 README.md

[root@ip-172-31-80-161 s3]# vi createbucket.php 


<?php

//copyright 2015 - A Cloud Guru.


//connection string

include 'connecttoaws.php';


// Create a unique bucket name

$bucket = uniqid("acloudguru", true);


// Create our bucket using our unique bucket name

$result = $client->createBucket(array(

    'Bucket' => $bucket

));


//HTML to Create our webpage

echo "<h1 align=\"center\">Hello Cloud Guru!</h1>";

echo "<div align = \"center\"><img src=\"https://acloud.guru/images/logo-small-optimised.png\"></img></div>";

echo "<h2 align=\"center\">You have successfully created a bucket called {$bucket}</h2>";

echo "<div align=\"center\"><a href=\"createfile.php?bucket=$bucket\">Click Here to Continue</a></div>";

?>


[root@ip-172-31-80-161 s3]# vi connecttoaws.php 


<?php

// Include the SDK using the Composer autoloader

require '/var/www/html/vendor/autoload.php';

$client = new Aws\S3\S3Client([

    'version' => 'latest',

    'region'  => 'us-east-1'

]);

?>


[root@ip-172-31-80-161 s3]# vi createfile.php 


<?php

//Copyright 2015 A Cloud Guru


//Connection string

include 'connecttoaws.php';


/*

Files in Amazon S3 are called "objects" and are stored in buckets. A specific

object is referred to by its key (or name) and holds data. In this file

we create an object called acloudguru.txt that contains the data

'Hello Cloud Gurus!'

and we upload/put it into our newly created bucket.

*/


//get the bucket name

$bucket = $_GET["bucket"];


//create the file name

$key = 'cloudguru.txt';


//put the file and data in our bucket

$result = $client->putObject(array(

    'Bucket' => $bucket,

    'Key'    => $key,

    'Body'   => "Hello Cloud Gurus!"

));


//HTML to create our webpage

echo "<h2 align=\"center\">File - $key has been successfully uploaded to $bucket</h2>";

echo "<div align = \"center\"><img src=\"https://acloud.guru/images/logo-small-optimised.png\"></img></div>";

echo "<div align = \"center\"><a href=\"readfile.php?bucket=$bucket&key=$key\">Click Here To Read Your File</a></div>";

?>


[root@ip-172-31-80-161 s3]# vi readfile.php 


<?php

//connection string

include 'connecttoaws.php';


//code to get our bucket and key names

$bucket = $_GET["bucket"];

$key = $_GET["key"];


//code to read the file on S3

$result = $client->getObject(array(

    'Bucket' => $bucket,

    'Key'    => $key

));

$data = $result['Body'];


//HTML to create our webpage

echo "<h2 align=\"center\">The Bucket is $bucket</h2>";

echo "<h2 align=\"center\">The Object's name is $key</h2>";

echo "<h2 align=\"center\">The Data in the object is $data</h2>";

echo "<div align = \"center\"><img src=\"https://acloud.guru/images/logo-small-optimised.png\"></img></div>";

echo "<div align = \"center\"><a href=\"cleanup.php?bucket=$bucket&key=$key\">Click Here To Remove Files & Bucket</a></div>";

?>

                      

[root@ip-172-31-80-161 s3]# vi cleanup.php 


<?php

//Connection String

include 'connecttoaws.php';


//Code to get our bucketname and file name

$bucket = $_GET["bucket"];

$key = $_GET["key"];


//buckets cannot be deleted unless they are empty

//Code to delete our object

$result = $client->deleteObject(array(

    'Bucket' => $bucket,

    'Key'    => $key

));


//code to tell user the file has been deleted.

echo "<h2 align=\"center\">Object $key successfully deleted.</h2>";


//Code to delete our bucket

$result = $client->deleteBucket(array(

    'Bucket' => $bucket

));


//code to create our webpage.

echo "<h2 align=\"center\">Bucket $bucket successfully deleted.</h2>";

echo "<div align = \"center\"><img src=\"https://acloud.guru/images/logo-small-optimised.png\"></img></div>";

echo "<h2 align=\"center\">Good Bye Cloud Gurus!</h2>";

?>


http://54.89.219.112/s3/createbucket.php




A bucket named acloudguru.... has been created in my S3. 

Click on the Link.





cloudguru.txt file has been uploaded to the bucket in S3.

Click on the Link.




Click on the Link.







The bucket has been removed.










* Instance Metadata and User Data


curl http://169.254.169.254/latest/meta-data/ (*****)


How to get the public IP address (Exam *****)


[root@ip-172-31-80-161 s3]# curl http://169.254.169.254/latest/meta-data/

ami-id

ami-launch-index

ami-manifest-path

block-device-mapping/

hostname

iam/

instance-action

instance-id

instance-type

local-hostname

local-ipv4

mac

metrics/

network/

placement/

profile

public-hostname

public-ipv4

public-keys/

reservation-id

security-groups

[root@ip-172-31-80-161 s3]# curl http://169.254.169.254/latest/meta-data/public-ipv4

54.89.219.112[root@ip-172-31-80-161 s3]# 
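All of these lookups hit the same link-local base URL. A tiny JavaScript helper (the naming is mine) that builds a metadata URL for a given path:

```javascript
// EC2 instance metadata lives at a fixed link-local address.
// This helper only builds the lookup URL; actually fetching it
// works only from inside an EC2 instance.
const METADATA_BASE = "http://169.254.169.254/latest/meta-data/";

function metadataUrl(path) {
  return METADATA_BASE + path;
}

console.log(metadataUrl("public-ipv4"));
// → http://169.254.169.254/latest/meta-data/public-ipv4
```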


[root@ip-172-31-80-161 s3]# yum install httpd php php-mysql


[root@ip-172-31-80-161 s3]# service httpd start

Starting httpd: 

[root@ip-172-31-80-161 s3]# yum install git


[root@ip-172-31-80-161 s3]# cd /var/www/html

[root@ip-172-31-80-161 html]# git clone https://github.com/acloudguru/metadata

Cloning into 'metadata'...

remote: Counting objects: 9, done.

remote: Total 9 (delta 0), reused 0 (delta 0), pack-reused 9

Unpacking objects: 100% (9/9), done.




[root@ip-172-31-80-161 html]# ls -l

total 1848

-rw-r--r-- 1 root root      62 Oct 13 00:04 composer.json

-rw-r--r-- 1 root root   12973 Oct 13 00:04 composer.lock

-rwxr-xr-x 1 root root 1852323 Oct 13 00:04 composer.phar

drwxr-xr-x 3 root root    4096 Oct 13 00:34 metadata

drwxr-xr-x 3 root root    4096 Oct 13 00:15 s3

-rw-r--r-- 1 root root      19 Oct 12 23:52 test.php

drwxr-xr-x 8 root root    4096 Oct 13 00:08 vendor

[root@ip-172-31-80-161 html]# cd metadata

[root@ip-172-31-80-161 metadata]# ls -l

total 8

-rw-r--r-- 1 root root 676 Oct 13 00:34 curlexample.php

-rw-r--r-- 1 root root  11 Oct 13 00:34 README.md

[root@ip-172-31-80-161 metadata]# vi curlexample.php


<?php

        // create curl resource

        $ch = curl_init();

        $publicip = "http://169.254.169.254/latest/meta-data/public-ipv4";


        // set url

        curl_setopt($ch, CURLOPT_URL, "$publicip");


        //return the transfer as a string

        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);


        // $output contains the output string

        $output = curl_exec($ch);


        // close curl resource to free up system resources

        curl_close($ch);




        //Get the public IP address

        echo "The public IP address for your EC2 instance is $output";

?>


Open a Web Browser
http://54.89.219.112/metadata/curlexample.php






Copied from alexa.design/standout

PDF file : 

GuideStandoutSkillFinal.pdf








Seven Qualities of Top-Performing Alexa Skills



Browse through the Alexa Skills Store, and you’ll see the innovations of our developer community on display. Our public catalog features more than 25,000 skills that enable a rich variety of scenarios including hands-free smart device control, on-demand content delivery, immersive adventure games, and more. You’ve created natural and engaging voice experiences that are delighting customers. And you’ve pushed the boundaries of what’s possible with voice to redefine how your customers interact with technology.



Now that you can earn money for eligible skills that drive the highest customer engagement, we know engagement is top of mind for many of you. To help you maximize the impact of your work, we analyzed our skill selection from the customers’ perspective. What makes a skill engaging for customers? And what keeps customers coming back over time? To find out, we examined the skills that see the highest consistent customer engagement. And we learned that these top performers share seven common qualities:



1. The skill makes a task faster and easier with voice
2. The skill has an intuitive and memorable name
3. The skill sets clear expectations on what it can do
4. The skill minimizes friction
5. The skill surprises and delights customers
6. The skill delivers fresh content
7. The skill is consistently reliable



In this guide, we will dive deeper into each quality and provide guidance on how you can incorporate it into your skill. We will also share exemplary skills that you can explore and model after. Leverage these insights to build standout skills that your customers will love.





1 The Skill Makes a Task Faster and Easier with Voice



When designing a new skill, make sure it has a clear customer benefit. Your skill should make a task faster and easier with voice. The skill should offer a more convenient user experience than existing methods, be it a light switch or a smartphone app.



Smart home skills, especially those that control multiple smart devices, are a great example of an existing experience made better with voice. They take a known workflow that involves multiple applications and simplify the steps into a single voice command, making the tasks both faster and easier. These skills offer a clear value to the customer.



When choosing your voice project, start with the purpose, or what customers want to accomplish. Then determine the capabilities of your skill and the benefits of using the skill over other options. Make sure your skill has a clear purpose before you start building. Skills that seamlessly integrate into a customer’s routine and provide value are especially popular.



Customers love The Dog Feeder skill because it helps simplify a daily task. Customers simply say, 

“Alexa, ask the dog if we fed her,” and Alexa shares when the dog last ate, giving families an easy way to manage a shared task. The skill addresses a need in the customers’ daily routine and provides value.



If you’re adapting an existing experience for voice, take a voice-first approach to designing and building your skill. In other words, avoid taking a visual experience or an app-first experience and simply adding voice to it. Instead, reimagine the interaction and figure out how to make it faster and easier with voice. Unless you offer an option that is twice as easy as what’s already available, customers don’t have an incentive to leave the UX they already know and adopt a new habit.





2 The Skill Has an Intuitive and Memorable Name



Once you’ve determined your skill’s purpose, give it a great name. Your skill’s name should help customers easily discover, understand, and remember your skill. If your skill name is longer and more difficult to say than a similar skill, you’ll risk losing customers—even if your skill offers more functionality. Remember, customers prefer voice because it’s our most natural form of interaction. So be sure to give your skill a name that’s natural to say and easy to grasp.



Take, for example, Daily Affirmation. The skill provides a new uplifting thought every day—just as the name suggests. For skills that deliver fresh content, specifying how often you’ll update the content tells the customers when to come back for more.

 

Even skills with more complex customer offerings can have a simple and memorable name.

The Magic Door is an interactive adventure game that takes customers through a magic door and into an enchanted forest. The name hints at many aspects of this sophisticated skill and is also easy to remember.


Once you’ve got an idea for your skill’s name, say the invocation name out loud, just as a customer would. See if it’s intuitive and easy to say. Let’s take the example of the Sleep and Relaxation Sounds skill. The customer will say something like:



You can see that the invocation name speaks to the value of the skill, flows within the context, and will be easy to remember at bedtime.


Beta testers (or even friends or colleagues) can also help grade the strength of your skill’s name. Ask them what they expect the skill to do based on the name alone. Use their responses to determine whether your skill name clearly articulates your skill’s capabilities and value. After your skill is published, read the customer reviews to identify any gaps between the skill name and the skill experience.



3 The Skill Sets Clear Expectations on What It Can Do



When customers first invoke your skill, aim to provide just the right amount of information so customers know how to move forward. Provide too little information, and customers won’t know what to do. Provide too much, and customers will get overwhelmed and leave. Finding the right balance is key to enabling your customers to seamlessly interact with your skill.



Then, when your users come back for a second visit, offer a different, abbreviated welcome. Since you’ve already introduced yourself, you can dive right in and pick up where you left off, just like you would with another person. When we talk to each other, our first conversation and our tenth conversation are quite different. That’s because we grow more familiar with each other, and our conversations gain context from previous talks. The same should hold true for your skill’s interaction with your customers.



For every interaction, keep Alexa’s responses concise so that your users stay engaged and can easily follow along. Put your skill’s responses to the one-breath test. Read aloud what you’ve written at a conversational pace. If you can say it all in one breath, the length is likely good. If you need to take a breath, consider reducing the length. For a response that includes successive ideas such as steps in a task, read each idea separately. While the entire response may require more than one breath, make sure your response requires breaths between, not during, ideas.
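As a rough automated stand-in for the one-breath test, you could flag responses above a word budget. The 25-words-per-breath threshold below is my assumption, not an official figure:

```javascript
// Heuristic "one-breath test": flag responses longer than a word budget.
// ~25 words at a conversational pace is an assumed threshold.
function passesOneBreathTest(response, maxWords = 25) {
  const words = response.trim().split(/\s+/).filter(Boolean);
  return words.length <= maxWords;
}

console.log(passesOneBreathTest("Welcome back. Want to pick up where you left off?")); // true
```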



Once you’ve designed your skill, test your skill to make sure it works as you intended. Watch beta testers and customers try to use your skill and see whether you’ve presented the right amount of information to successfully guide them through the interaction.



 Learn more: Voice Design Guide: What Alexa Says

Try: Set Clear Expectations Using Our Code Sample





4 The Skill Minimizes Friction


As you add capabilities to your skill, make sure you don’t introduce unnecessary pain points or friction. Think through the entire interaction flow, and ensure your customers will know how to navigate from one step to the next. Remove any ambiguity that may hinder your customers from moving forward and getting what they’re looking for.



One way to minimize friction is to only add account linking when you truly need it. Account linking provides authentication when you need to associate the identity of the Alexa user with a user in your system. It’s a useful way to collect information that is very difficult to accurately recognize via voice, like email addresses (which often contain homophones like “one” and “1”). But account linking can also introduce friction for customers when they enable a skill as it prevents the process from being completed seamlessly via voice. Therefore, it should only be used when necessary, specifically when the resulting customer value offsets the risk of friction.



If your skill simply needs to persist data between sessions, account linking is not strictly required. The userID attribute provided with the request will identify the same user across sessions unless the customer disables and re-enables your skill. 


Some information, like physical address, is now available via the permissions framework. As the permissions framework grows, account-linking flows should be limited to authentication scenarios only, not personalization. If you use account linking in your skill, be sure to follow best practices to minimize friction and ensure a smooth customer experience.



Learn more : 10 Tips for Successfully Adding Account Linking to Your Alexa Skill




5 The Skill Surprises and Delights Customers


In mobile and web design, it’s important to provide a consistent customer experience every time. Layout, color schemes, and names always stay the same so users don’t have to relearn the UI with each visit. But with voice, it’s important to have variety. People may not mind scanning the same web page many times over, but no one wants to have the same conversation time and again.



You can introduce variety throughout your skill to keep the interaction fresh. Think of all the different ways Alexa can welcome your customers, or the many ways Alexa can say “OK” (think: “Got it,” “Thanks,” “Sounds good,” “Great,” and so on). You can use these opportunities to inject variety, color, and humor to your skill. You can even prepare clever responses to customers’ requests for features your skill doesn’t yet support. By seizing these opportunities, you can make your interactions feel more natural, conversational, and even memorable. 
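A minimal sketch of that variety, rotating through the acknowledgment phrases given above:

```javascript
// Rotate through several ways of saying "OK" so the skill doesn't
// repeat itself; the phrases are the examples from the text.
const ACKS = ["Got it", "Thanks", "Sounds good", "Great"];

function randomAck() {
  return ACKS[Math.floor(Math.random() * ACKS.length)];
}

console.log(randomAck()); // one of the four phrases, varying per call
```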



You can also build engagement over time by remembering what your users were doing last.

Storing data in Amazon DynamoDB allows you to add this memory and context to your skill.

Persistence allows you to pause games or guide users through a step-by-step process like creating a recipe, tackling a DIY project, or playing a game. For example, a game skill with memory enables customers to pause, come back, and pick up right where they left off.
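A runnable sketch of that kind of memory. A real skill would back the store with DynamoDB; here an in-memory Map keyed by the request's userId shows the shape (the function names are mine):

```javascript
// Persist per-user state between sessions, keyed by the Alexa userId.
// DynamoDB would replace this Map in a real skill.
const sessionStore = new Map();

function saveAttributes(userId, attributes) {
  sessionStore.set(userId, attributes);
}

function loadAttributes(userId) {
  // unknown users start fresh with empty state
  return sessionStore.get(userId) || {};
}

saveAttributes("amzn1.ask.account.EXAMPLE", { pausedAtLevel: 3 });
console.log(loadAttributes("amzn1.ask.account.EXAMPLE")); // { pausedAtLevel: 3 }
```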






6 The Skill Regularly Provides Fresh Content


As we’ve mentioned, customers expect variety in voice interactions. So it’s no surprise that skills that provide fresh content drive more regular usage over time. Fresh content gives customers a reason to return to your skill over time, and when they do, they are rewarded with something new.



This is especially true of flash briefing skills, which are built around the premise of delivering fresh content. When flash briefing skills don’t update as promised, customers tend to leave negative reviews.



However, the value of this quality doesn’t just apply to flash briefing skills; other skills should also get regular content updates. For example, fact skills and trivia skills that don’t evolve over time to offer new facts or questions don’t tend to see consistent engagement. Users may love the experience you’ve built, but if your skill never evolves beyond a set of limited choices, they won’t have reason to keep coming back.


The Jeopardy! skill is a model example of a skill that entices customers with fresh content. The skill serves up six new clues every weekday, giving fans reason to return five times a week.

When building your skill, establish a content workflow that enables you to quickly and easily add new content to your skill. One way to do this is to house your content in a database instead of hardcoding it into your skill to enable fast updates. Once you’ve set up a workflow, adhere to a schedule to make continued updates to your skill. Find ways to add fresh content and continue delighting your customers over time.
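One way to decouple content from code is to select it by date, so adding rows to the content store refreshes the skill without redeploying logic. A small sketch; the clue list and the daily rotation scheme are placeholders:

```javascript
// Select content by date instead of hardcoding one fixed response.
// New entries added to CLUES (or a database table) show up automatically.
const CLUES = ["clue for day one", "clue for day two", "clue for day three"];

function clueForDate(date) {
  // days since the Unix epoch, used to rotate through available clues
  const daysSinceEpoch = Math.floor(date.getTime() / 86400000);
  return CLUES[daysSinceEpoch % CLUES.length];
}

console.log(clueForDate(new Date(0))); // "clue for day one"
```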



Try: Keep Your Customers Engaged with Dynamic Content





7 The Skill Is Consistently Reliable

Even the most compelling and delightful voice experience won’t gain traction if it isn’t available whenever customers ask. To ensure your skill is consistently reliable, configure a professional-grade backend for your skill.


Amazon Web Services offers several solutions that will help you improve the user experience and ensure the dependability of your skill as it gains users and handles more intricate content. Try Amazon CloudFront to cache dynamic content and files that require heavy lifting. This will improve your response time and provide better deliverability.

 

If you’ve built a top-notch skill, it will likely get noticed and highlighted in the Alexa Skills Store. So be sure your backend can support your skill’s moment in the spotlight. Your backend should be able to scale properly to ensure high availability during high-traffic scenarios. If you’re using Amazon DynamoDB, set your tables’ read and write capacity per second much higher than your expected peak throughput. If your skill launches multiple AWS Lambda functions per skill invocation, check to see whether you are nearing the limits for function invocations. If you’re getting close, you can request a limit increase to ensure scalability. To set alarms for unforeseen scenarios, you can use Lambda’s built-in functionality to output logs to Amazon CloudWatch and trigger alarms based on the events in those logs. 

 


 Once your skill is live, you can use Amazon QuickSight to visualize analytics you track in Amazon Redshift. You can see how your skill is performing, fix user experiences that don't resonate, and double down on what works to make your skill even more impactful.


AWS Promo Credits: If you incur AWS charges related to your skill, you can apply for AWS promotional credits. Once you’ve published a skill, apply to receive a $100 AWS promotional credit and an additional $100 per month in credit.

Apply now. 



Learn more: 5 Ways to Level Up Your Skill with AWS




Build Engaging Skills Your Customers Will Love


Whether you’re building a new skill or upgrading an existing skill, follow these tips to put your best skill forward. By building engaging voice experiences, you can reach and delight customers through tens of millions of devices with Alexa. And you can also enrich your skills over time to grow your expertise in voice design and evolve from a hobbyist to a professional.



It also pays to build highly engaging skills. Every month, developers can earn money for eligible skills that drive the highest customer engagement in seven eligible skill categories. 


Learn more and start building your next skill today.

 






Alexa Skills Kit


The Alexa Skills Kit is a collection of self-service APIs, tools, documentation, and code samples that makes it fast and easy for you to add skills to Alexa. With ASK, you can leverage Amazon’s knowledge and pioneering work in the field of voice design.



Additional Resources


Voice Design Guide

Documentation

Shortcut Start a Skill Now









It is Free !!!!!!


Download 'Lottery Numbers' Mobile app from Amazon App Store.


Or you can use the app on your PC or laptop.


Go to amazon.com and search by 'Lottery Numbers'.






Get Jackpot numbers for Mega Million and/or Powerball.




It's Free !!!!!!!!!


Enable the Yut nori game helper skill on your Alexa devices


Direct URL to go to Yut nori game helper page in Amazon : Yut nori game helper

Then enable it by clicking the Enable button on the page.


or you can navigate to the page from amazon.com as below.


https://www.amazon.com/








I've published my first Alexa Skill. (Yut Nori Game Helper)



I've referred to these materials to develop it.


New Alexa Skills Kit Template for Developers: GameHelper

- Let's Build A Skill - Game Helper -Skills Service

- https://en.wikipedia.org/wiki/Yut

- https://modernseoul.org/2011/11/26/how-to-play-yut-nori-윷놀이/

- http://owlworksllc.com/featured-game/yut-nori-the-korean-game-of-sticks/



I've modified Game Helper Template to develop this skill.

Please see the first article above. 

GameHelper Template



Note: Details on creating the Skill Service and Skill Interface are in my previous article


[Alexa Skill] How to develop Alexa Skill. - A place where Changsoo lived.


These are the steps I took to create the Alexa Skill 'Yut Nori game helper'.


1. Develop the skill


: Modify index.js and intent.json file




: Zip all files except speechAssets folder






2. Create Skill Service


: Navigate to aws.amazon.com

: Lambda -> Create New Lambda function -> Alexa Skills Kit -> Configure function

: In configure function page - Enter Name and Description -> Upload the zip file - Choose existing Role (myAlexaRole)


Now Skill Service has been created. You can get ARN for the function now.



3. Create Skill Interface


: Navigate to developer.amazon.com

: ALEXA -> Alexa Skills Kit -> Add a New Skill -> Create a New Alexa Skill

: Enter Name and Invocation Name in Create a New Alexa Skill page -> Click on Next button

: Copy and paste your intent.json into Intent Schema 

: Create Custom Slot Types

: Enter Custom Utterances -> Click on Next button




: Build the Skill

: Copy the ARN from Lambda function in aws.amazon.com and paste it to Endpoint section in  developer.amazon.com

: Test the skill





4. Publish the Skill


Submitting an Alexa Skill for Certification


: Publishing Information


-> Enter Category, Sub Category, Testing Instructions, Descriptions, Example Phrases, Keywords and Icons.

: Privacy & Compliance


-> Submit for Certification





5. Reviewing by Alexa team


- You will get this kind of feedback when your skill submission fails the certification process.


You need to fix the issues and re-submit for the certification.

You will get a congratulations email once it passes.

That email is the first image of this article.




This post collects the images to be used in an Alexa Skill introducing the Korean traditional board game "Yutnori" (윷놀이).




  

              Malpan (말판)





  • 1 flat side up = 1 piece moves 1 space. This is called a pig or Doh.





  • 2 flat sides up = 1 piece moves 2 spaces. This is called a dog or Ge.





  • 3 flat sides up = 1 piece moves 3 spaces. This is called a sheep or Girl.


  • 4 flat sides up = 1 piece moves 4 spaces. This is called a cow or Yut.


  • 0 flat sides up (all round sides up) = 1 piece moves 5 spaces. This is called a horse or Mo.



Descriptions used in the Alexa Skill


// Used when user asks for help.

var HelpMessage = "Here are some things you can say to learn more about Yutnori - the Korean traditional board game: How do I play? Tell me about the Mo. How do I win? Give me a tip!";


// Used when the skill is opened.

var welcomeMessage = "Yutnori Helper. You can ask how to set up the game, about an individual piece, what the rules are, how to win, or for a Yutnori tip. Which will it be?";


// We are using this after every piece of information has been read out by Alexa; it is also used as a reprompt string.

var repromtMessage = "Here are some things you can ask for: ";


// Describing the game overview.

var overviewMessage = "The word Yut translates to sticks or lots and Nori means game. Four tokens or Mal are moved around a cloth game board called a Malpan. Instead of modern dice, 4 Yut sticks, each having one flat side and one round side are cast to control the movement of the tokens. Grab the 4 sticks in one hand and let them fall onto the table. Once you get the hang of it, reading the throw of the sticks to determine your move is easy. Here’s how it works.";


// Describing how to set the game up.

var setupMessage = "The game board \"Malpan\" is made of cloth. The layout of the 29 spaces or stations on the board can be either in a square or round configuration. The round geometry of the game board is symbolic of the cosmos. The large station in the center denotes the North Star with the 28 stations around it signifying the constellations. At path intersections, there are 5 stations that are larger than the rest. When a token lands on one of these stations, a player may take a short cut through the center of the board. ";


// Describing how to win the game.

var goalMessage = "The game is played between two partners or two teams who play in turns, sometimes it is played with more teams. There is no limit in the number of participants in a game, which means that the game can be played by a considerable group. When played with large groups it is not uncommon for some group members never to cast the sticks: they still participate discussing the strategy. The game is won by the team who brings all their mals home first, that is complete the course with all their mals. A course is completed if a mal passes the station where the game is started (cham-meoki). Landing on cham-meoki is no finish, but any score going \"beyond\" this station completes a home run. Yut is often played for three or more wins. ";


var repromts = [

    "Which would you like to know: a Yutnori tip or how to win?",

    "Tell me the name of an individual Mal to learn more about it.",

    "Which would you like to know: how to set up the game or what the rules are?"];


var pieces = [

    { key: "doh", imageLarge: "", imageSmall: "", value: "Doh. 1 flat side up. 1 piece moves 1 space. This is called a pig or Do." },

    { key: "ge", imageLarge: "", imageSmall: "", value: "Ge. 2 flat sides up. 1 piece moves 2 spaces. This is called a dog or Gae." },

    { key: "girl", imageLarge: "", imageSmall: "", value: "Girl. 3 flat sides up. 1 piece moves 3 spaces. This is called a sheep or Girl." },

    { key: "yut", imageLarge: "", imageSmall: "", value: "Yut. 4 flat sides up. 1 piece moves 4 spaces. This is called a cow or Yut." },

    { key: "mo", imageLarge: "", imageSmall: "", value: "Mo. 4 round sides up. 1 piece moves 5 spaces. This is called a horse or Mo." },

];



// To handle different names of the Mals
function GetPiece(value) {
    //console.log("in GetPiece");
    // The input is lowercased, so every case label must be lowercase too;
    // mixed-case labels like "Ge" or "Geol" would never match.
    switch (value.toLowerCase().trim()) {
        case "do":
        case "doh":
        case "pig":
            return "Doh";
        case "gae":
        case "ge":
        case "ke":
        case "dog":
            return "Ge";
        case "geol":
        case "geul":
        case "girl":
        case "sheep":
            return "Girl";
        case "yut":
        case "cow":
            return "Yut";
        case "horse":
        case "mo":
        default:
            return "Mo";
    }
}








Here I am going to develop my own Alexa skill, customized based on the 3 articles below.  


1. Fact Skill Tutorial

2. Fact Skill Blog

3. Section 2 of Alexa - A Free Introduction in Cloud Guru


Alexa will answer about the place(s) where I lived.


You can get the source codes of Alexa - A Free Introduction here.

https://github.com/ACloudGuru/alexacourse


I am going to modify index.js file of 1_spaceGeek.


* Modify var FACTS


* Modify AMAZON.HelpIntent


* Modify handleNewFactRequest() function


* zip the AlexaSkill.js and index.js as placeLived.zip



We are now starting to develop an Alexa skill in earnest.

Alexa skill development can be divided into two parts: the skill service and the skill interface.







* Skill Service



First, go to aws.amazon.com to develop the Skill Service.




1. Create a Role


IAM -> Roles -> Create New Role 



Select AWS Lambda.

Select AWSLambdaBasicExecutionRole and Click on Next Step button.




Enter any name for the Role and Click on Create Role button.




Now you have created a new role.




Now I am going to create a Lambda function.


Click on the cube image at top left to go to the AWS services landing page, then click on Lambda -> Create function button.



Click on Author from scratch button.




Select Alexa Skills Kit for the Lambda



Click on Next button.



Enter Name and Description. And then select Upload a .ZIP file from Code entry type dropdown menu.




Upload the placeLived.zip file.

Next, select Choose an existing role from the Role dropdown menu.

And select the created Role (placeLivedRole) from Existing role dropdown menu.



And then Click on Next button -> Create function button.



I've created a Lambda function for my skill.


Now it's done for Skill Service development.

I am going to develop the Skill Interface part.

The ARN will be used when developing the Skill Interface.





* Skill Interface


To develop the Skill Interface, go to developer.amazon.com.


Go to ALEXA tab and Select Get Started button in Alexa Skills Kit.




Click on Add a New Skill button.





Enter Name and Invocation and Click on Next button.




Copy and paste this code to Intent Schema.

(This code is in IntentSchema.json of the source files.)


{

  "intents": [

    {

      "intent": "GetNewFactIntent"

    },

    {

      "intent": "AMAZON.HelpIntent"

    },

    {

      "intent": "AMAZON.StopIntent"

    },

    {

      "intent": "AMAZON.CancelIntent"

    }

  ]

 }


Enter Sample Utterances
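Sample utterances map spoken phrases to the intents in the schema above. The course's source includes the actual utterances file; the entries below are illustrative of the format, not the course's exact content:

```
GetNewFactIntent tell me a fact
GetNewFactIntent give me a fact
GetNewFactIntent tell me something
```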




Click on Save button. - It will build this skill.

Click on Next button once it is built.


Now I need to copy ARN from aws.amazon.com and paste it in to Global Fields in developer.amazon.com.

So I can make a connection between Skill Service and Skill Interface of mine.


Select AWS Lambda ARN -> North America and paste the ARN into the North America text field.



Click on Save and Next button.


Now it is completed.


To test, enter any utterance and click on Ask Places Lived button.














Create ARN (Amazon Resource Name)


aws.amazon.com -> Go to Lambda service -> Create a Lambda function -> Select the Region as N. Virginia






Write the code in the Code tab

Configure Programming Language, Role etc. in Configuration tab

Set Alexa Skills Kit in Triggers tab





The ARN on top right of the screen will be used to configure the Alexa Skill in developer.amazon.com





developer.amazon.com - ALEXA - Alexa Skills Kit




Add a New Skill Button




Complete all steps in the left menu panel


1. Skill Information

2. Interaction Model (Builder BETA)

3. Configuration

4. Test

5. Publishing Information

6. Privacy & Compliance





1. Skill Information 

  : Enter Name and Invocation Name

2. Interaction Model 

  : Set Customer Intents - Sample Utterances

  : Build the App (Save and Build)

3. Configuration

  : Service Endpoint Type - AWS Lambda ARN, Select a geographical region

  : Copy the ARN from aws.amazon.com and paste it into the text field under North America on the Configuration page

4. Test

  : Alexa Start Session

  : Type command in Enter Utterance text field and Click on the Button

  : Click on Listen button




  


  

You can test it at https://echosim.io.

Get more sample skills from github.com/alexa
