
https://openai.com/blog/new-models-and-developer-products-announced-at-devday


New models and developer products announced at DevDay

 

GPT-4 Turbo with 128K context and lower prices, the new Assistants API, GPT-4 Turbo with Vision, DALL·E 3 API, and more.


November 6, 2023


Today, we shared dozens of new additions and improvements, and reduced pricing across many parts of our platform. These include:

  • New GPT-4 Turbo model that is more capable, cheaper and supports a 128K context window
  • New Assistants API that makes it easier for developers to build their own assistive AI apps that have goals and can call models and tools
  • New multimodal capabilities in the platform, including vision, image creation (DALL·E 3), and text-to-speech (TTS)

We’ll begin rolling out new features to OpenAI customers starting at 1pm PT today.

Learn more about OpenAI DevDay announcements for ChatGPT.

GPT-4 Turbo with 128K context

We released the first version of GPT-4 in March and made GPT-4 generally available to all developers in July. Today we’re launching a preview of the next generation of this model, GPT-4 Turbo.

GPT-4 Turbo is more capable and has knowledge of world events up to April 2023. It has a 128k context window so it can fit the equivalent of more than 300 pages of text in a single prompt. We also optimized its performance so we are able to offer GPT-4 Turbo at a 3x cheaper price for input tokens and a 2x cheaper price for output tokens compared to GPT-4.

GPT-4 Turbo is available for all paying developers to try by passing gpt-4-1106-preview in the API and we plan to release the stable production-ready model in the coming weeks.
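
For example, a minimal request against the preview model might look like the sketch below, assuming the v1 OpenAI Python SDK (`pip install openai`) and an `OPENAI_API_KEY` environment variable; the prompt is illustrative.

```python
# A minimal sketch: calling the GPT-4 Turbo preview model.
# Assumes the v1 OpenAI Python SDK and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview, as named above
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the DevDay announcements in one sentence."},
    ],
)
print(response.choices[0].message.content)
```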

Function calling updates

Function calling lets you describe functions of your app or external APIs to models, and have the model intelligently choose to output a JSON object containing arguments to call those functions. We’re releasing several improvements today, including the ability to call multiple functions in a single message: users can send one message requesting multiple actions, such as “open the car window and turn off the A/C”, which would previously require multiple roundtrips with the model (learn more). We are also improving function calling accuracy: GPT-4 Turbo is more likely to return the right function parameters.
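
To illustrate the parallel flow, here is a hedged sketch (same SDK assumptions as above; the two tool definitions, open_car_window and set_ac, are hypothetical app functions, not part of the API):

```python
# A sketch of one message triggering multiple function calls in a single turn.
# The tool names and schemas below are illustrative, not OpenAI APIs.
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "open_car_window",  # hypothetical app function
            "description": "Open one of the car's windows",
            "parameters": {
                "type": "object",
                "properties": {"window": {"type": "string", "enum": ["driver", "passenger"]}},
                "required": ["window"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "set_ac",  # hypothetical app function
            "description": "Turn the air conditioning on or off",
            "parameters": {
                "type": "object",
                "properties": {"on": {"type": "boolean"}},
                "required": ["on"],
            },
        },
    },
]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Open the car window and turn off the A/C"}],
    tools=tools,
)

# Both requested actions can come back as tool calls in one response,
# instead of requiring one roundtrip per function.
for call in response.choices[0].message.tool_calls:
    print(call.function.name, call.function.arguments)
```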

Improved instruction following and JSON mode

GPT-4 Turbo performs better than our previous models on tasks that require the careful following of instructions, such as generating specific formats (e.g., “always respond in XML”). It also supports our new JSON mode, which ensures the model will respond with valid JSON. The new API parameter response_format enables the model to constrain its output to generate a syntactically correct JSON object. JSON mode is useful for developers generating JSON in the Chat Completions API outside of function calling.
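
A minimal JSON-mode sketch (same SDK assumptions; note that the messages must mention JSON for the API to accept json_object mode):

```python
# A sketch of JSON mode via the response_format parameter.
import json

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},  # constrain output to valid JSON
    messages=[
        {"role": "system", "content": "You are a helpful assistant. Reply in JSON."},
        {"role": "user", "content": "List three primary colors under a 'colors' key."},
    ],
)
data = json.loads(response.choices[0].message.content)  # parses cleanly in JSON mode
print(data)
```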

Reproducible outputs and log probabilities

The new seed parameter enables reproducible outputs by making the model return consistent completions most of the time. This beta feature is useful for use cases such as replaying requests for debugging, writing more comprehensive unit tests, and generally having a higher degree of control over the model behavior. We at OpenAI have been using this feature internally for our own unit tests and have found it invaluable. We’re excited to see how developers will use it. Learn more.
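
A sketch of the beta seed parameter (same SDK assumptions; the repeated-call comparison is illustrative):

```python
# A sketch of reproducible outputs with the beta seed parameter.
from openai import OpenAI

client = OpenAI()

def sample(seed: int) -> str:
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        seed=seed,        # same seed + same request -> consistent completions, most of the time
        temperature=0.7,
        messages=[{"role": "user", "content": "Name one random fruit."}],
    )
    # system_fingerprint identifies the backend configuration; if it changes
    # between calls, determinism should not be expected even with a fixed seed.
    print(response.system_fingerprint)
    return response.choices[0].message.content

print(sample(42))
print(sample(42))  # usually matches the first call
```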

We’re also launching a feature to return the log probabilities for the most likely output tokens generated by GPT-4 Turbo and GPT-3.5 Turbo in the next few weeks, which will be useful for building features such as autocomplete in a search experience.

Updated GPT-3.5 Turbo

In addition to GPT-4 Turbo, we are also releasing a new version of GPT-3.5 Turbo that supports a 16K context window by default. The new 3.5 Turbo supports improved instruction following, JSON mode, and parallel function calling. For instance, our internal evals show a 38% improvement on format following tasks such as generating JSON, XML and YAML. Developers can access this new model by calling gpt-3.5-turbo-1106 in the API. Applications using the gpt-3.5-turbo name will automatically be upgraded to the new model on December 11. Older models will continue to be accessible by passing gpt-3.5-turbo-0613 in the API until June 13, 2024. Learn more.
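
In code, pinning the new snapshot versus staying on the rolling alias looks like the sketch below (same SDK assumptions as above):

```python
# A sketch contrasting the pinned snapshot with the rolling alias.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "Say hello in YAML."}]

# Opt in to the new model explicitly today:
client.chat.completions.create(model="gpt-3.5-turbo-1106", messages=messages)

# Or keep the alias, which upgrades to the new model automatically on December 11:
client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)

# The older snapshot stays reachable until June 13, 2024:
client.chat.completions.create(model="gpt-3.5-turbo-0613", messages=messages)
```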

Assistants API, Retrieval, and Code Interpreter

Today, we’re releasing the Assistants API, our first step towards helping developers build agent-like experiences within their own applications. An assistant is a purpose-built AI that has specific instructions, leverages extra knowledge, and can call models and tools to perform tasks. The new Assistants API provides new capabilities such as Code Interpreter and Retrieval as well as function calling to handle a lot of the heavy lifting that you previously had to do yourself and enable you to build high-quality AI apps.

This API is designed for flexibility; use cases range from a natural language-based data analysis app, a coding assistant, an AI-powered vacation planner, a voice-controlled DJ, a smart visual canvas—the list goes on. The Assistants API is built on the same capabilities that enable our new GPTs product: custom instructions and tools such as Code interpreter, Retrieval, and function calling.

A key change introduced by this API is persistent and infinitely long threads, which allow developers to hand off thread state management to OpenAI and work around context window constraints. With the Assistants API, you simply add each new message to an existing thread.
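
An end-to-end sketch of the beta flow (same SDK assumptions; the assistant's name, instructions, and the question are illustrative):

```python
# A sketch of the Assistants API beta: create an assistant, a persistent
# thread, append a message, run it, then read the reply.
import time

from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Data helper",  # illustrative
    instructions="You are a data analysis assistant. Use code when helpful.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

# Threads persist on OpenAI's side; just keep adding messages to them.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is the standard deviation of 2, 4, 4, 4, 5, 5, 7, 9?",
)

run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):  # poll until the run finishes
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

for message in client.beta.threads.messages.list(thread_id=thread.id):  # newest first
    print(message.role, message.content[0].text.value)
```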

Assistants also have access to call new tools as needed, including:

  • Code Interpreter: writes and runs Python code in a sandboxed execution environment, and can generate graphs and charts, and process files with diverse data and formatting. It allows your assistants to run code iteratively to solve challenging code and math problems, and more.
  • Retrieval: augments the assistant with knowledge from outside our models, such as proprietary domain data, product information or documents provided by your users. This means you don’t need to compute and store embeddings for your documents, or implement chunking and search algorithms. The Assistants API optimizes what retrieval technique to use based on our experience building knowledge retrieval in ChatGPT.
  • Function calling: enables assistants to invoke functions you define and incorporate the function response in their messages.

As with the rest of the platform, data and files passed to the OpenAI API are never used to train our models and developers can delete the data when they see fit.

You can try the Assistants API beta without writing any code by heading to the Assistants playground.

[Video: assistants-playground.mp4]

The Assistants API is in beta and available to all developers starting today. Please share what you build with us (@OpenAI) along with your feedback which we will incorporate as we continue building over the coming weeks. Pricing for the Assistants APIs and its tools is available on our pricing page.

New modalities in the API

GPT-4 Turbo with vision

GPT-4 Turbo can accept images as inputs in the Chat Completions API, enabling use cases such as generating captions, analyzing real world images in detail, and reading documents with figures. For example, BeMyEyes uses this technology to help people who are blind or have low vision with daily tasks like identifying a product or navigating a store. Developers can access this feature by using gpt-4-vision-preview in the API. We plan to roll out vision support to the main GPT-4 Turbo model as part of its stable release. Pricing depends on the input image size. For instance, passing an image with 1080×1080 pixels to GPT-4 Turbo costs $0.00765. Check out our vision guide.
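
For instance, a caption request with an image input might look like this sketch (same SDK assumptions; the image URL is a placeholder):

```python
# A sketch of passing an image to GPT-4 Turbo with vision.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    max_tokens=300,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this image."},
                # Placeholder URL; a real request needs a reachable image.
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```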

DALL·E 3

Developers can integrate DALL·E 3, which we recently launched to ChatGPT Plus and Enterprise users, directly into their apps and products through our Images API by specifying dall-e-3 as the model. Companies like Snap, Coca-Cola, and Shutterstock have used DALL·E 3 to programmatically generate images and designs for their customers and campaigns. Similar to the previous version of DALL·E, the API incorporates built-in moderation to help developers protect their applications against misuse. We offer different format and quality options, with prices starting at $0.04 per image generated. Check out our guide to getting started with DALL·E 3 in the API.
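
A minimal generation sketch (same SDK assumptions; the prompt and options are illustrative):

```python
# A sketch of image generation with DALL·E 3 via the Images API.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of a lighthouse at dawn",  # illustrative
    size="1024x1024",
    quality="standard",  # "hd" is the higher-quality (and pricier) option
    n=1,
)
print(result.data[0].url)  # URL of the generated image
```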

Text-to-speech (TTS)

Developers can now generate human-quality speech from text via the text-to-speech API. Our new TTS model offers six preset voices to choose from and two model variants, tts-1 and tts-1-hd. tts-1 is optimized for real-time use cases and tts-1-hd is optimized for quality. Pricing starts at $0.015 per 1,000 input characters. Check out our TTS guide to get started.
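
A minimal sketch (same SDK assumptions; the voice, input text, and output path are illustrative):

```python
# A sketch of generating speech with the text-to-speech API.
from openai import OpenAI

client = OpenAI()

speech = client.audio.speech.create(
    model="tts-1",   # optimized for real-time use; "tts-1-hd" for quality
    voice="alloy",   # one of the six preset voices
    input="As the golden sun dips below the horizon...",
)
speech.stream_to_file("speech.mp3")  # save the audio locally
```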

Listen to voice samples

[Audio sample (scenic-alloy.mp3, voice "alloy"): “As the golden sun dips below the horizon, casting long shadows across the tranquil meadow, the world seems to hush, and a sense of calmness envelops the Earth, promising a peaceful night’s rest for all living beings.”]

Model customization

GPT-4 fine tuning experimental access

We’re creating an experimental access program for GPT-4 fine-tuning. Preliminary results indicate that GPT-4 fine-tuning requires more work to achieve meaningful improvements over the base model compared to the substantial gains realized with GPT-3.5 fine-tuning. As quality and safety for GPT-4 fine-tuning improves, developers actively using GPT-3.5 fine-tuning will be presented with an option to apply to the GPT-4 program within their fine-tuning console.

Custom models

For organizations that need even more customization than fine-tuning can provide (particularly applicable to domains with extremely large proprietary datasets—billions of tokens at minimum), we’re also launching a Custom Models program, giving selected organizations an opportunity to work with a dedicated group of OpenAI researchers to train custom GPT-4 to their specific domain. This includes modifying every step of the model training process, from doing additional domain specific pre-training, to running a custom RL post-training process tailored for the specific domain. Organizations will have exclusive access to their custom models. In keeping with our existing enterprise privacy policies, custom models will not be served to or shared with other customers or used to train other models. Also, proprietary data provided to OpenAI to train custom models will not be reused in any other context. This will be a very limited (and expensive) program to start—interested orgs can apply here.

Lower prices and higher rate limits

Lower prices

We’re decreasing several prices across the platform to pass on savings to developers (all prices below are expressed per 1,000 tokens):

  • GPT-4 Turbo input tokens are 3x cheaper than GPT-4 at $0.01 and output tokens are 2x cheaper at $0.03 (a rough cost sketch follows this list).
  • GPT-3.5 Turbo input tokens are 3x cheaper than the previous 16K model at $0.001 and output tokens are 2x cheaper at $0.002. Developers previously using GPT-3.5 Turbo 4K benefit from a 33% reduction on input tokens at $0.001. Those lower prices only apply to the new GPT-3.5 Turbo introduced today.
  • Fine-tuned GPT-3.5 Turbo 4K model input tokens are reduced by 4x at $0.003 and output tokens are 2.7x cheaper at $0.006. Fine-tuning also supports 16K context at the same price as 4K with the new GPT-3.5 Turbo model. These new prices also apply to fine-tuned gpt-3.5-turbo-0613 models.
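
As a back-of-the-envelope illustration of these rates (prices hard-coded from the list above; the estimator itself is purely illustrative, not an official tool):

```python
# A rough cost sketch using the per-1,000-token prices listed above.
PRICES = {  # model: (input, output) in USD per 1,000 tokens
    "gpt-4-turbo": (0.01, 0.03),
    "gpt-3.5-turbo-1106": (0.001, 0.002),
    "ft:gpt-3.5-turbo": (0.003, 0.006),  # fine-tuned GPT-3.5 Turbo
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    input_price, output_price = PRICES[model]
    return input_tokens / 1000 * input_price + output_tokens / 1000 * output_price

# e.g. a 100K-token prompt with a 1K-token answer on GPT-4 Turbo:
print(f"${cost('gpt-4-turbo', 100_000, 1_000):.2f}")  # -> $1.03
```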


Higher rate limits

To help you scale your applications, we’re doubling the tokens per minute limit for all our paying GPT-4 customers. You can view your new rate limits in your rate limit page. We’ve also published our usage tiers that determine automatic rate limits increases, so you know what to expect in how your usage limits will automatically scale. You can now request increases to usage limits from your account settings.

Copyright Shield

OpenAI is committed to protecting our customers with built-in copyright safeguards in our systems. Today, we’re going one step further and introducing Copyright Shield—we will now step in and defend our customers, and pay the costs incurred, if you face legal claims around copyright infringement. This applies to generally available features of ChatGPT Enterprise and our developer platform.

Whisper v3 and Consistency Decoder

We are releasing Whisper large-v3, the next version of our open-source automatic speech recognition (ASR) model, which features improved performance across languages. We also plan to support Whisper v3 in our API in the near future.

We are also open sourcing the Consistency Decoder, a drop-in replacement for the Stable Diffusion VAE decoder. This decoder improves all images compatible with the Stable Diffusion 1.0+ VAE, with significant improvements in text, faces and straight lines.

Learn more about OpenAI DevDay announcements for ChatGPT.