This blog is where I organize the new technologies and information I come across while working as a developer in the field. I've been lucky enough to work as a consultant on projects for large companies in the US, so I get many chances to encounter new technologies. I'd like to share information about the tools used in US IT projects with as many people as possible.


Guides - Safety best practices

2023. 1. 10. 22:38 | Posted by 솔웅







Safety best practices

Use our free Moderation API

OpenAI's Moderation API is free-to-use and can help reduce the frequency of unsafe content in your completions. Alternatively, you may wish to develop your own content filtration system tailored to your use case.


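As a sketch of wiring a moderation check into a pipeline: the helper below parses a response in the Moderation endpoint's documented shape (a `results` list whose entries carry a boolean `flagged` field) and can gate a completion on it. The `sample` dict is illustrative rather than a live API response; the live call (shown in a comment) would require an API key.

```python
def is_flagged(moderation_response: dict) -> bool:
    """True if any result in a Moderation API response is flagged."""
    return any(r.get("flagged", False)
               for r in moderation_response.get("results", []))

# A live check would look like:
#   resp = openai.Moderation.create(input=user_text)
#   if is_flagged(resp): reject the request before generating a completion
# Illustrative response in the endpoint's documented shape:
sample = {"results": [{"flagged": True, "categories": {"hate": True}}]}
print(is_flagged(sample))  # True
```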


Adversarial testing

We recommend “red-teaming” your application to ensure it's robust to adversarial input. Test your product over a wide range of inputs and user behaviors, both a representative set and those reflective of someone trying to “break” your application. Does it wander off topic? Can someone easily redirect the feature via prompt injections, e.g. “ignore the previous instructions and do this instead”?




Human in the loop (HITL)

Wherever possible, we recommend having a human review outputs before they are used in practice. This is especially critical in high-stakes domains, and for code generation. Humans should be aware of the limitations of the system, and have access to any information needed to verify the outputs (for example, if the application summarizes notes, a human should have easy access to the original notes to refer back).




Rate limits

Limiting the rate of API requests can help prevent automated and high-volume misuse. Consider a maximum amount of usage by one user in a given time period (day, week, month), with either a hard cap or a manual review checkpoint. You may wish to set this substantially above the bounds of normal use, so that only misusers are likely to hit it.




Consider implementing a minimum amount of time that must elapse between API calls by a particular user to reduce chance of automated usage, and limiting the number of IP addresses that can use a single user account concurrently or within a particular time period.


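Both ideas above (a minimum interval between calls and a hard cap per user) can be sketched in a few lines. The thresholds here are illustrative, not recommendations:

```python
import time
from collections import defaultdict

# Thresholds are illustrative; tune them to your application.
MIN_INTERVAL_SECONDS = 1.0   # minimum gap between calls per user
DAILY_CAP = 1000             # hard cap per user per day

_last_call = {}                  # user_id -> timestamp of last allowed call
_daily_count = defaultdict(int)  # user_id -> allowed calls so far today

def allow_request(user_id, now=None):
    """Return True if this user's request should be allowed."""
    now = time.time() if now is None else now
    if _daily_count[user_id] >= DAILY_CAP:
        return False  # hard cap reached; a manual review could start here
    last = _last_call.get(user_id)
    if last is not None and now - last < MIN_INTERVAL_SECONDS:
        return False  # calls arriving too close together
    _last_call[user_id] = now
    _daily_count[user_id] += 1
    return True

print(allow_request("demo-user"))  # True on a user's first request
```

A production version would persist the counters (e.g. in Redis) and reset them daily, but the gating logic is the same.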


You should exercise caution when providing programmatic access, bulk processing features, and automated social media posting - consider only enabling these for trusted customers.




Prompt engineering

“Prompt engineering” can help constrain the topic and tone of output text. This reduces the chance of producing undesired content, even if a user tries to produce it. Providing additional context to the model (such as by giving a few high-quality examples of desired behavior prior to the new input) can make it easier to steer model outputs in desired directions.


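A minimal sketch of the few-shot approach described above: prepend high-quality examples to the user's input so the output stays within the intended topic and tone. The assistant persona and examples here are illustrative.

```python
# Persona and examples are illustrative, not from the OpenAI docs.
FEW_SHOT_PREFIX = """You are a polite cooking assistant. Answer only cooking questions.

Q: How long should I boil an egg for a soft yolk?
A: About six minutes in gently boiling water, then cool it quickly.

Q: What can I substitute for buttermilk?
A: Stir a tablespoon of lemon juice into a cup of milk and let it rest five minutes.
"""

def build_prompt(user_input: str) -> str:
    """Prepend the few-shot examples so outputs stay on topic and tone."""
    return f"{FEW_SHOT_PREFIX}\nQ: {user_input.strip()}\nA:"

print(build_prompt("How do I keep rice from sticking?").endswith("A:"))  # True
```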


“Know your customer” (KYC)

Users should generally need to register and log in to access your service. Linking this service to an existing account, such as a Gmail, LinkedIn, or Facebook log-in, may help, though it may not be appropriate for all use cases. Requiring a credit card or ID card reduces risk further.




Constrain user input and limit output tokens

Limiting the amount of text a user can input into the prompt helps avoid prompt injection. Limiting the number of output tokens helps reduce the chance of misuse.


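Both limits can be applied before the request ever reaches the API. The character and token limits below are illustrative; the `model` name matches the example elsewhere on this page.

```python
# Limits are illustrative; tune them to your application.
MAX_INPUT_CHARS = 500
MAX_OUTPUT_TOKENS = 100

def prepare_request(user_text: str) -> dict:
    """Truncate the prompt and cap output tokens before calling the API."""
    return {
        "model": "text-davinci-003",
        "prompt": user_text[:MAX_INPUT_CHARS],
        "max_tokens": MAX_OUTPUT_TOKENS,
    }

req = prepare_request("x" * 10_000)
print(len(req["prompt"]), req["max_tokens"])  # 500 100
```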


Narrowing the ranges of inputs or outputs, especially drawn from trusted sources, reduces the extent of misuse possible within an application.




Allowing user inputs through validated dropdown fields (e.g., a list of movies on Wikipedia) can be more secure than allowing open-ended text inputs.


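Server-side, a dropdown amounts to an allowlist check: reject anything that is not in the validated set, never pass free-form text through. The entries below are illustrative.

```python
# The allowlist entries are illustrative.
ALLOWED_MOVIES = {"The Matrix", "Spirited Away", "Parasite"}

def validate_choice(choice: str) -> str:
    """Accept only values that appear in the validated set."""
    if choice not in ALLOWED_MOVIES:
        raise ValueError(f"unknown selection: {choice!r}")
    return choice

print(validate_choice("Parasite"))  # Parasite
```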


Returning outputs from a validated set of materials on the backend, where possible, can be safer than returning novel generated content (for instance, routing a customer query to the best-matching existing customer support article, rather than attempting to answer the query from scratch).


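As a self-contained sketch of that routing idea: match the query against existing article titles and return the best hit instead of generating a novel answer. The titles and slugs are hypothetical, and a production system would more likely match with embeddings; `difflib` keeps the example dependency-free.

```python
import difflib

# Article titles and slugs are hypothetical examples.
ARTICLES = {
    "how to reset your password": "reset-password",
    "how to update billing information": "update-billing",
    "how to cancel your subscription": "cancel-subscription",
}

def route_query(query):
    """Return the slug of the closest existing article, or None."""
    matches = difflib.get_close_matches(query.lower(), list(ARTICLES),
                                        n=1, cutoff=0.3)
    return ARTICLES[matches[0]] if matches else None

print(route_query("How do I reset my password?"))  # reset-password
```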


Allow users to report issues

Users should generally have an easily-available method for reporting improper functionality or other concerns about application behavior (listed email address, ticket submission method, etc). This method should be monitored by a human and responded to as appropriate.




Understand and communicate limitations

From hallucinating inaccurate information, to offensive outputs, to bias, and much more, language models may not be suitable for every use case without significant modifications. Consider whether the model is fit for your purpose, and evaluate the performance of the API on a wide range of potential inputs in order to identify cases where the API's performance might drop. Consider your customer base and the range of inputs that they will be using, and ensure their expectations are calibrated appropriately.




Safety and security are very important to us at OpenAI.

If in the course of your development you do notice any safety or security issues with the API or anything else related to OpenAI, please submit these through our Coordinated Vulnerability Disclosure Program.




End-user IDs

Sending end-user IDs in your requests can be a useful tool to help OpenAI monitor and detect abuse. This allows OpenAI to provide your team with more actionable feedback in the event that we detect any policy violations in your application.




The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. If you offer a preview of your product to non-logged in users, you can send a session ID instead.


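One way to implement the hashing recommendation is SHA-256 from the standard library, normalizing the address first so the same mailbox always maps to the same opaque ID. The email address below is hypothetical.

```python
import hashlib

def end_user_id(email: str) -> str:
    """Stable, non-identifying ID: SHA-256 of the normalized email."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same mailbox always maps to the same opaque ID:
print(end_user_id("Jane.Doe@example.com") == end_user_id(" jane.doe@example.com"))  # True
```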


You can include end-user IDs in your API requests via the user parameter as follows:





import openai

response = openai.Completion.create(
  model="text-davinci-003",
  prompt="This is a test",
  max_tokens=5,
  user="user123456"
)



curl https://api.openai.com/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
  "model": "text-davinci-003",
  "prompt": "This is a test",
  "max_tokens": 5,
  "user": "user123456"
}'




