This blog is where I organize new technologies and information I come across while working in the field as a developer. I have been fortunate to work as a consultant on projects for large companies in the US, so I get many opportunities to encounter new technologies. I would like to share information about the tools used in US IT projects with as many people as possible.
솔웅


https://openai.com/blog/new-embedding-models-and-api-updates

 

New embedding models and API updates

We are launching a new generation of embedding models, new GPT-4 Turbo and moderation models, new API usage management tools, and soon, lower pricing on GPT-3.5 Turbo.


 

 

On January 25, 2024, OpenAI announced a new generation of embedding models, a new GPT-4 Turbo, an updated moderation model, and new API usage management tools.

 

They also plan to release a cheaper GPT-3.5 Turbo soon.

 

The new releases include the following:

 

  • Two new embedding models
  • An updated GPT-4 Turbo preview model 
  • An updated GPT-3.5 Turbo model
  • An updated text moderation model

 

The new embedding models are available at lower prices.

The new embedding models are:

 

* Small embedding model

 

- text-embedding-3-small: a highly efficient embedding model, and an upgrade over its predecessor, text-embedding-ada-002

 

Stronger performance. Comparing text-embedding-ada-002 to text-embedding-3-small, the average score on a commonly used multilingual retrieval benchmark (MIRACL) has increased from 31.4% to 44.0%, while the average score on a benchmark of English tasks (MTEB) has increased from 61.0% to 62.3%.

 

Reduced price. text-embedding-3-small is also substantially more efficient than the previous-generation text-embedding-ada-002 model. Its price has been cut 5x, from $0.0001 to $0.00002 per 1,000 tokens.

 

* Large text embedding model

 

- text-embedding-3-large: our best-performing model.

text-embedding-3-large is the new next-generation large embedding model and creates embeddings with up to 3072 dimensions.

 

Stronger performance. text-embedding-3-large is the new best-performing model. Comparing text-embedding-ada-002 to text-embedding-3-large: on MIRACL the average score has increased from 31.4% to 54.9%, while on MTEB it has increased from 61.0% to 64.6%.

 

 

text-embedding-3-large is priced at $0.00013 per 1,000 tokens.
You can learn more about using the new embedding models in the Embeddings guide.

 

Native support for shortening embeddings

Using larger embeddings, for example storing them in a vector store for retrieval, generally costs more and consumes more compute, memory and storage than using smaller embeddings.

 

예를 들어 검색을 위해 벡터 저장소에 저장하는 등 더 큰 임베딩을 사용하면 더 작은 임베딩을 사용하는 것보다 일반적으로 더 많은 비용이 들고 더 많은 컴퓨팅, 메모리 및 스토리지를 사용합니다.

 

Both of our new embeddings models were trained with a technique that allows developers to trade-off performance and cost of using embeddings. Specifically, developers can shorten embeddings (i.e. remove some numbers from the end of the sequence) without the embedding losing its concept-representing properties by passing in the dimensions API parameter. For example, on the MTEB benchmark, a text-embedding-3-large embedding can be shortened to a size of 256 while still outperforming an unshortened text-embedding-ada-002 embedding with a size of 1536.
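Conceptually, "shortening" here just means truncating the vector and re-normalizing it to unit length. A minimal sketch of that idea (using numpy; the helper name is my own, not part of any SDK):

```python
import numpy as np

def shorten_embedding(embedding, dim):
    """Keep the first `dim` components of an embedding and re-normalize to unit length."""
    v = np.asarray(embedding[:dim], dtype=np.float32)
    return v / np.linalg.norm(v)

# e.g. cut a 3072-dimension text-embedding-3-large vector down to 256 dimensions:
# short = shorten_embedding(full_vector, 256)
```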

 


 

 

 

This enables very flexible usage. For example, when using a vector data store that only supports embeddings up to 1024 dimensions long, developers can now still use our best embeddings model text-embedding-3-large and specify a value of 1024 for the dimensions API parameter, which will shorten the embedding down from 3072 dimensions, trading off some accuracy in exchange for the smaller vector size.
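A minimal sketch of that workflow with the openai Python SDK (v1.x), assuming an OPENAI_API_KEY is set in the environment:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask for a 1024-dimensional embedding directly; the API shortens the
# native 3072-dimensional text-embedding-3-large output for you.
response = client.embeddings.create(
    model="text-embedding-3-large",
    input="The food was delicious and the waiter was friendly.",
    dimensions=1024,
)

embedding = response.data[0].embedding
print(len(embedding))  # 1024
```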

 


 

Other new models and lower pricing

Updated GPT-3.5 Turbo model and lower pricing

 

Next week we are introducing a new GPT-3.5 Turbo model, gpt-3.5-turbo-0125, and for the third time in the past year, we will be decreasing prices on GPT-3.5 Turbo to help our customers scale. Input prices for the new model are reduced by 50% to $0.0005 /1K tokens and output prices are reduced by 25% to $0.0015 /1K tokens. This model will also have various improvements including higher accuracy at responding in requested formats and a fix for a bug which caused a text encoding issue for non-English language function calls.

 


 

Customers using the pinned gpt-3.5-turbo model alias will be automatically upgraded from gpt-3.5-turbo-0613 to gpt-3.5-turbo-0125 two weeks after this model launches.
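In practice, pinning to a snapshot versus floating on the alias is just a choice of model string; a small sketch with the openai Python SDK:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",  # dated snapshot: behavior stays fixed
    # model="gpt-3.5-turbo",     # pinned alias: auto-upgrades to the latest snapshot
    messages=[{"role": "user", "content": "Say hello in French."}],
)
print(response.choices[0].message.content)
```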

 


 

 

Updated GPT-4 Turbo preview

Over 70% of requests from GPT-4 API customers have transitioned to GPT-4 Turbo since its release, as developers take advantage of its updated knowledge cutoff, larger 128k context windows, and lower prices. 

 


 

Today, we are releasing an updated GPT-4 Turbo preview model, gpt-4-0125-preview. This model completes tasks like code generation more thoroughly than the previous preview model and is intended to reduce cases of “laziness” where the model doesn’t complete a task. The new model also includes the fix for the bug impacting non-English UTF-8 generations.

 


 

For those who want to be automatically upgraded to new GPT-4 Turbo preview versions, we are also introducing a new gpt-4-turbo-preview model name alias, which will always point to our latest GPT-4 Turbo preview model. 

 


 

We plan to launch GPT-4 Turbo with vision in general availability in the coming months.

 


 

 

Updated moderation model

 

The free Moderation API allows developers to identify potentially harmful text. As part of our ongoing safety work, we are releasing text-moderation-007, our most robust moderation model to-date.
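For reference, a quick sketch of calling the Moderation endpoint with the openai Python SDK (the model parameter is optional; the default alias resolves to the newest model):

```python
from openai import OpenAI

client = OpenAI()

result = client.moderations.create(
    model="text-moderation-latest",  # now points at text-moderation-007
    input="Sample text to screen for potentially harmful content.",
)

moderation = result.results[0]
print(moderation.flagged)          # True if any category is flagged
print(moderation.category_scores)  # per-category confidence scores
```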

 


 

The text-moderation-latest and text-moderation-stable aliases have been updated to point to it. You can learn more about building safe AI systems through our safety best practices guide.

 


 

New ways to understand API usage and manage API keys

 

We are launching two platform improvements to give developers both more visibility into their usage and control over API keys.

 


 

First, developers can now assign permissions to API keys from the API keys page. For example, a key could be assigned read-only access to power an internal tracking dashboard, or restricted to only access certain endpoints.

 


 

Second, the usage dashboard and usage export function now expose metrics on an API key level after turning on tracking. This makes it simple to view usage on a per feature, team, product, or project level, simply by having separate API keys for each.

 


 

 

In the coming months, we plan to further improve the ability for developers to view their API usage and manage API keys, especially in larger organizations.

 


 

For the latest updates on OpenAI's APIs, follow us on X at @OpenAIDevs.

 



https://openai.com/blog/how-openai-is-approaching-2024-worldwide-elections

 


 

How OpenAI is approaching 2024 worldwide elections

We’re working to prevent abuse, provide transparency on AI-generated content, and improve access to accurate voting information.

 

 

 

Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process. 

 


 

Our tools empower people to improve their daily lives and solve complex problems  – from using AI to enhance state services to simplifying medical forms for patients.

 


 

We want to make sure that our AI systems are built, deployed, and used safely. Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.

 


 

As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency. We have a cross-functional effort dedicated to election work, bringing together expertise from our safety systems, threat intelligence, legal, engineering, and policy teams to quickly investigate and address potential abuse. 

 


 

The following are key initiatives our teams are investing in to prepare for elections this year:

 


 

Preventing abuse

We expect and aim for people to use our tools safely and responsibly, and elections are no different. We work to anticipate and prevent relevant abuse—such as misleading “deepfakes”, scaled influence operations, or chatbots impersonating candidates. Prior to releasing new systems, we red team them, engage users and external partners for feedback, and build safety mitigations to reduce the potential for harm. For years, we’ve been iterating on tools to improve factual accuracy, reduce bias, and decline certain requests. These tools provide a strong foundation for our work around election integrity. For instance, DALL·E has guardrails to decline requests that ask for image generation of real people, including candidates.

 


 

We regularly refine our Usage Policies for ChatGPT and the API as we learn more about how people use or attempt to abuse our technology. A few to highlight for elections: 

 


 

  • We’re still working to understand how effective our tools might be for personalized persuasion. Until we know more, we don’t allow people to build applications for political campaigning and lobbying. 


  • People want to know and trust that they are interacting with a real person, business, or government. For that reason, we don’t allow builders to create chatbots that pretend to be real people (e.g., candidates) or institutions (e.g., local government). 


  • We don’t allow applications that deter people from participation in democratic processes—for example, misrepresenting voting processes and qualifications (e.g., when, where, or who is eligible to vote) or that discourage voting (e.g., claiming a vote is meaningless).


  • With our new GPTs, users can report potential violations to us.

 

 

 

Transparency around AI-generated content

 

Better transparency around image provenance—including the ability to detect which tools were used to produce an image—can empower voters to assess an image with trust and confidence in how it was made. We’re working on several provenance efforts. Early this year, we will implement the Coalition for Content Provenance and Authenticity’s digital credentials—an approach that encodes details about the content’s provenance using cryptography—for images generated by DALL·E 3. 

 


 

We are also experimenting with a provenance classifier, a new tool for detecting images generated by DALL·E. Our internal testing has shown promising early results, even where images have been subject to common types of modifications. We plan to soon make it available to our first group of testers—including journalists, platforms, and researchers—for feedback.

 


 

Finally, ChatGPT is increasingly integrating with existing sources of information—for example, users will start to get access to real-time news reporting globally, including attribution and links. Transparency around the origin of information and balance in news sources can help voters better assess information and decide for themselves what they can trust.

 


 

Improving access to authoritative voting information

 

In the United States, we are working with the National Association of Secretaries of State (NASS), the nation's oldest nonpartisan professional organization for public officials. ChatGPT will direct users to CanIVote.org, the authoritative website on US voting information, when asked certain procedural election related questions—for example, where to vote. Lessons from this work will inform our approach in other countries and regions. 

 


 

We’ll have more to share in the coming months. We look forward to continuing to work with and learn from partners to anticipate and prevent potential abuse of our tools in the lead up to this year’s global elections.

 


 


Jan 10 2024 Introducing ChatGPT Team

2024-01-12 09:45 | Posted by 솔웅



https://openai.com/blog/introducing-chatgpt-team

 


 

 

Introducing ChatGPT Team

We’re launching a new ChatGPT plan for teams of all sizes, which provides a secure, collaborative workspace to get the most out of ChatGPT at work.

 

 

 

We launched ChatGPT Enterprise a few months ago and industry leaders like Block, Canva, Carlyle, The Estée Lauder Companies, PwC, and Zapier are already using it to redefine how their organizations operate. Today, we’re adding a new self-serve plan: ChatGPT Team.

 


 

 

ChatGPT Team offers access to our advanced models like GPT-4 and DALL·E 3, and tools like Advanced Data Analysis. It additionally includes a dedicated collaborative workspace for your team and admin tools for team management. As with ChatGPT Enterprise, you own and control your business data—we do not train on your business data or conversations, and our models don’t learn from your usage. More details on our data privacy practices can be found on our privacy page and Trust Portal.

 


 

 

ChatGPT Team includes:

  • Access to GPT-4 with 32K context window
  • Tools like DALL·E 3, GPT-4 with Vision, Browsing, Advanced Data Analysis—with higher message caps
  • No training on your business data or conversations
  • Secure workspace for your team
  • Create and share custom GPTs with your workspace
  • Admin console for workspace and team management
  • Early access to new features and improvements

 

 
 

 

Customize ChatGPT for any type of work

 

We recently announced GPTs—custom versions of ChatGPT that you can create for a specific purpose with instructions, expanded knowledge, and custom capabilities. These can be especially useful for businesses and teams. With GPTs, you can customize ChatGPT to your team’s specific needs and workflows (no code required) and publish them securely to your team’s workspace. GPTs can help with a wide range of tasks, such as assisting in project management, team onboarding, generating code, performing data analysis, securely taking action in your existing systems and tools, or creating collateral to match your brand tone and voice. Today, we announced the GPT Store where you can find useful and popular GPTs from your workspace.

 


 

 

Improve team efficiency and work quality

 

Integrating AI into everyday organizational workflows can make your team more productive. In a recent study by the Harvard Business School, employees at Boston Consulting Group who were given access to GPT-4 reported completing tasks 25% faster and achieved a 40% higher quality in their work as compared to their peers who did not have access.

 


 

 

Connor O’Brien, VP of GTM Strategy & Operations at Sourcegraph, shares, "We use ChatGPT in almost every part of our business, from financial modeling for pricing and packaging to internal and external communications to board prep to recruiting and note taking—it’s accelerated everything we do allowing us to execute at a high level."

 


 

Dr. John Brownstein, Chief Innovation Officer at Boston Children’s Hospital says, “With ChatGPT Team, we’ve been able to pilot innovative GPTs that enhance our team’s productivity and collaboration. As we integrate GPTs safely and responsibly across internal operations, we know the transformative impact this will have in strengthening the systems that enable our doctors, researchers, students, and administrative staff to provide exceptional care to every patient that walks through our doors.”

 


 

ChatGPT Team costs $25/month per user when billed annually, or $30/month per user when billed monthly. You can explore the details or get started now by upgrading in your ChatGPT settings.

 


 

 


 

 

 

 

 

 

 


Jan 10 2024 Introducing the GPT Store

2024-01-12 09:38 | Posted by 솔웅



https://openai.com/blog/introducing-the-gpt-store

 


 

Introducing the GPT Store

We’re launching the GPT Store to help you find useful and popular custom versions of ChatGPT.

 


 

 

 

 

 

It’s been two months since we announced GPTs, and users have already created over 3 million custom versions of ChatGPT. Many builders have shared their GPTs for others to use. Today, we're starting to roll out the GPT Store to ChatGPT Plus, Team and Enterprise users so you can find useful and popular GPTs. Visit chat.openai.com/gpts to explore.

 


 

 

The store features a diverse range of GPTs developed by our partners and the community. Browse popular and trending GPTs on the community leaderboard, with categories like DALL·E, writing, research, programming, education, and lifestyle.

 


 

 

 

We will also highlight useful and impactful GPTs. Some of our first featured GPTs include:

 


 

  • Personalized trail recommendations from AllTrails
  • Search and synthesize results from 200M academic papers with Consensus
  • Expand your coding skills with Khan Academy’s Code Tutor
  • Design presentations or social posts with Canva
  • Find your next read with Books
  • Learn math and science anytime, anywhere with the CK-12 Flexi AI tutor


 

 

Include your GPT in the store

 

Building your own GPT is simple and doesn't require any coding skills.

 


 

 

If you’d like to share a GPT in the store, you’ll need to:

  1. Save your GPT for Everyone (GPTs saved for “Anyone with a link” are not shown in the store).


  2. Verify your Builder Profile (Settings → Builder profile → Enable your name or a verified website).


 

 

Please review our latest usage policies and GPT brand guidelines to ensure your GPT is compliant. To help ensure GPTs adhere to our policies, we've established a new review system in addition to the existing safety measures we've built into our products. The review process includes both human and automated review. Users are also able to report GPTs.

 


 

 

Builders can earn based on GPT usage

 

In Q1 we will launch a GPT builder revenue program. As a first step, US builders will be paid based on user engagement with their GPTs. We'll provide details on the criteria for payments as we get closer.

 


 

 

Team and Enterprise customers can manage GPTs

Today, we announced our new ChatGPT Team plan for teams of all sizes. Team customers have access to a private section of the GPT Store which includes GPTs securely published to your workspace. The GPT Store will be available soon for ChatGPT Enterprise customers and will include enhanced admin controls like choosing how internal-only GPTs are shared and which external GPTs may be used inside your business. Like all usage on ChatGPT Team and Enterprise, we do not use your conversations with GPTs to improve our models.

 


 

Explore GPTs at chat.openai.com/gpts.

 
 

 

 


Jan 8 2024 OpenAI and journalism

2024-01-12 09:24 | Posted by 솔웅



https://openai.com/blog/openai-and-journalism

 


 

 

OpenAI and journalism

We support journalism, partner with news organizations, and believe The New York Times lawsuit is without merit.

 

 

 

Our goal is to develop AI tools that empower people to solve problems that are otherwise out of reach. People worldwide are already using our technology to improve their daily lives. Millions of developers and more than 92% of Fortune 500 are building on our products today.

 


 

 

While we disagree with the claims in The New York Times lawsuit, we view it as an opportunity to clarify our business, our intent, and how we build our technology. Our position can be summed up in these four points, which we flesh out below:

 


 

 

  1. We collaborate with news organizations and are creating new opportunities


  2. Training is fair use, but we provide an opt-out because it’s the right thing to do


  3. “Regurgitation” is a rare bug that we are working to drive to zero

    "Regurgitation 역류, 표"는 우리가 제로화하기 위해 노력하고 있는 희귀한 버그입니다.

  4. The New York Times is not telling the full story


1. We collaborate with news organizations and are creating new opportunities

 


 

 

We work hard in our technology design process to support news organizations. We’ve met with dozens, as well as leading industry organizations like the News/Media Alliance, to explore opportunities, discuss their concerns, and provide solutions. We aim to learn, educate, listen to feedback, and adapt.

 


 

Our goals are to support a healthy news ecosystem, be a good partner, and create mutually beneficial opportunities. With this in mind, we have pursued partnerships with news organizations to achieve these objectives:

 


 

  1. Deploy our products to benefit and support reporters and editors, by assisting with time-consuming tasks like analyzing voluminous public records and translating stories.


  2. Teach our AI models about the world by training on additional historical, non-publicly available content.


  3. Display real-time content with attribution in ChatGPT, providing new ways for news publishers to connect with readers.


Our early partnerships with the Associated Press, Axel Springer, American Journalism Project and NYU offer a glimpse into our approach.

 


 

 

2. Training is fair use, but we provide an opt-out because it’s the right thing to do

 


 

Training AI models using publicly available internet materials is fair use, as supported by long-standing and widely accepted precedents. We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness.

 


 

The principle that training AI models is permitted as a fair use is supported by a wide range of academics, library associations, civil society groups, startups, leading US companies, creators, authors, and others that recently submitted comments to the US Copyright Office. Other regions and countries, including the European Union, Japan, Singapore, and Israel also have laws that permit training models on copyrighted content—an advantage for AI innovation, advancement, and investment.

 


 

That being said, legal right is less important to us than being good citizens. We have led the AI industry in providing a simple opt-out process for publishers (which The New York Times adopted in August 2023) to prevent our tools from accessing their sites.

 


 

 

3. “Regurgitation” is a rare bug that we are working to drive to zero

 


 

Our models were designed and trained to learn concepts in order to apply them to new problems.

 


 

Memorization is a rare failure of the learning process that we are continually making progress on, but it’s more common when particular content appears more than once in training data, like if pieces of it appear on lots of different public websites. So we have measures in place to limit inadvertent memorization and prevent regurgitation in model outputs. We also expect our users to act responsibly; intentionally manipulating our models to regurgitate is not an appropriate use of our technology and is against our terms of use.

 


 

Just as humans obtain a broad education to learn how to solve new problems, we want our AI models to observe the range of the world’s information, including from every language, culture, and industry. Because models learn from the enormous aggregate of human knowledge, any one sector—including news—is a tiny slice of overall training data, and any single data source—including The New York Times—is not significant for the model’s intended learning.

 


 

4. The New York Times is not telling the full story

 


 

Our discussions with The New York Times had appeared to be progressing constructively through our last communication on December 19. The negotiations focused on a high-value partnership around real-time display with attribution in ChatGPT, in which The New York Times would gain a new way to connect with their existing and new readers, and our users would gain access to their reporting. We had explained to The New York Times that, like any single source, their content didn't meaningfully contribute to the training of our existing models and also wouldn't be sufficiently impactful for future training. Their lawsuit on December 27—which we learned about by reading The New York Times—came as a surprise and disappointment to us.

 


 

Along the way, they had mentioned seeing some regurgitation of their content but repeatedly refused to share any examples, despite our commitment to investigate and fix any issues. We’ve demonstrated how seriously we treat this as a priority, such as in July when we took down a ChatGPT feature immediately after we learned it could reproduce real-time content in unintended ways.

 


 

Interestingly, the regurgitations The New York Times induced appear to be from years-old articles that have proliferated on multiple third-party websites. It seems they intentionally manipulated prompts, often including lengthy excerpts of articles, in order to get our model to regurgitate. Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts.

 


 

Despite their claims, this misuse is not typical or allowed user activity, and is not a substitute for The New York Times. Regardless, we are continually making our systems more resistant to adversarial attacks to regurgitate training data, and have already made much progress in our recent models.

 


 

We regard The New York Times’ lawsuit to be without merit. Still, we are hopeful for a constructive partnership with The New York Times and respect its long history, which includes reporting the first working neural network over 60 years ago and championing First Amendment freedoms.

 


 

We look forward to continued collaboration with news organizations, helping elevate their ability to produce quality journalism by realizing the transformative potential of AI.

 


 

 

 

 

 

 

 

 


Dec 14, 2023 Superalignment Fast Grants

2023-12-19 03:45 | Posted by 솔웅



https://openai.com/blog/superalignment-fast-grants

 


 

Superalignment Fast Grants

We’re launching $10M in grants to support technical research towards the alignment and safety of superhuman AI systems, including weak-to-strong generalization, interpretability, scalable oversight, and more.

 


 

We believe superintelligence could arrive within the next 10 years. These AI systems would have vast capabilities—they could be hugely beneficial, but also potentially pose large risks.

 


 

Today, we align AI systems to ensure they are safe using reinforcement learning from human feedback (RLHF). However, aligning future superhuman AI systems will pose fundamentally new and qualitatively different technical challenges. 

 


 

Superhuman AI systems will be capable of complex and creative behaviors that humans cannot fully understand. For example, if a superhuman model generates a million lines of extremely complicated code, humans will not be able to reliably evaluate whether the code is safe or dangerous to execute. Existing alignment techniques like RLHF that rely on human supervision may no longer be sufficient. This leads to the fundamental challenge: how can humans steer and trust AI systems much smarter than them? 

 


 

This is one of the most important unsolved technical problems in the world. But we think it is solvable with a concerted effort. There are many promising approaches and exciting directions, with lots of low-hanging fruit. We think there is an enormous opportunity for the ML research community and individual researchers to make major progress on this problem today. 

 


 

As part of our Superalignment project, we want to rally the best researchers and engineers in the world to meet this challenge—and we’re especially excited to bring new people into the field.

 


 

Superalignment Fast Grants

In partnership with Eric Schmidt, we are launching a $10M grants program to support technical research towards ensuring superhuman AI systems are aligned and safe:

 


 

  • We are offering $100K–$2M grants for academic labs, nonprofits, and individual researchers.
  • For graduate students, we are sponsoring a one-year $150K OpenAI Superalignment Fellowship: $75K in stipend and $75K in compute and research funding.
  • No prior experience working on alignment is required; we are actively looking to support researchers who are excited to work on alignment for the first time.
  • Our application process is simple, and we’ll get back to you within four weeks of applications closing.

 

With these grants, we are particularly interested in funding the following research directions:

 


  • Weak-to-strong generalization: Humans will be weak supervisors relative to superhuman models. Can we understand and control how strong models generalize from weak supervision?
  • Interpretability: How can we understand model internals? And can we use this to, for example, build an AI lie detector?
  • Scalable oversight: How can we use AI systems to assist humans in evaluating the outputs of other AI systems on complex tasks?
  • Many other research directions, including but not limited to: honesty, chain-of-thought faithfulness, adversarial robustness, evals and testbeds, and more.

 

For more on the research directions, FAQs, and other details, see our Superalignment Fast Grants page.

 


 

Join us in this challenge

We think new researchers could make enormous contributions! This is a young field with many tractable research problems; outstanding contributions could not just help shape the field, but be critical for the future of AI. There has never been a better time to start working on alignment.

 


 

 

 


https://openai.com/blog/axel-springer-partnership

 


 

 

Partnership with Axel Springer to deepen beneficial use of AI in journalism

Axel Springer is the first publishing house globally to partner with us on a deeper integration of journalism in AI technologies.

 

 

This news was originally shared by Axel Springer and can also be read here.

 


 

Axel Springer is the first publishing house globally to partner with OpenAI on a deeper integration of journalism in AI technologies.

 


 

Axel Springer and OpenAI have announced a global partnership to strengthen independent journalism in the age of artificial intelligence (AI). The initiative will enrich users’ experience with ChatGPT by adding recent and authoritative content on a wide variety of topics, and explicitly values the publisher’s role in contributing to OpenAI’s products. This marks a significant step in both companies’ commitment to leverage AI for enhancing content experiences and creating new financial opportunities that support a sustainable future for journalism.   

 


 

With this partnership, ChatGPT users around the world will receive summaries of selected global news content from Axel Springer’s media brands including POLITICO, BUSINESS INSIDER, and European properties BILD and WELT, including otherwise paid content. ChatGPT’s answers to user queries will include attribution and links to the full articles for transparency and further information.  

 


 

In addition, the partnership supports Axel Springer’s existing AI-driven ventures that build upon OpenAI’s technology. The collaboration also involves the use of quality content from Axel Springer media brands for advancing the training of OpenAI’s sophisticated large language models.

 


 

We are excited to have shaped this global partnership between Axel Springer and OpenAI – the first of its kind. We want to explore the opportunities of AI empowered journalism – to bring quality, societal relevance and the business model of journalism to the next level.


 

Mathias Döpfner, CEO of Axel Springer

 

“This partnership with Axel Springer will help provide people with new ways to access quality, real-time news content through our AI tools. We are deeply committed to working with publishers and creators around the world and ensuring they benefit from advanced AI technology and new revenue models,” says Brad Lightcap, COO of OpenAI.

 


 

About Axel Springer

Axel Springer is a media and technology company active in more than 40 countries. By providing information across its diverse media brands (among others BILD, WELT, INSIDER, POLITICO) and classifieds portals (StepStone Group and AVIV Group) Axel Springer SE empowers people to make free decisions for their lives. Today, the transformation from a traditional print media company to Europe’s leading digital publisher has been successfully accomplished. The next goal has been identified: Axel Springer wants to become global market leader in digital content and digital classifieds through accelerated growth. The company is headquartered in Berlin and employs more than 18,000 people worldwide.

 


 

 

 

https://www.axelspringer.com/en/ax-press-release/axel-springer-and-openai-partner-to-deepen-beneficial-use-of-ai-in-journalism

 

Axel Springer and OpenAI partner to deepen beneficial use of AI in journalism


https://openai.com/blog/sam-altman-returns-as-ceo-openai-has-a-new-initial-board

 


 

 

 

Sam Altman returns as CEO, OpenAI has a new initial board

 

Mira Murati as CTO, Greg Brockman returns as President. Read messages from CEO Sam Altman and board chair Bret Taylor.

 

Below are messages CEO Sam Altman and board chair Bret Taylor shared with the company this afternoon.

 


 

Message from Sam to the company


 

I am returning to OpenAI as CEO. Mira will return to her role as CTO. The new initial board will consist of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo.

 


 

I have never been more excited about the future. I am extremely grateful for everyone’s hard work in an unclear and unprecedented situation, and I believe our resilience and spirit set us apart in the industry. I feel so, so good about our probability of success for achieving our mission.

 


 

Before getting to what comes next, I’d like to share some thanks.

 


 

I love and respect Ilya, I think he's a guiding light of the field and a gem of a human being. I harbor zero ill will towards him. While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.

 


 

I am grateful to Adam, Tasha, and Helen for working with us to come to this solution that best serves the mission. I’m excited to continue to work with Adam and am sincerely thankful to Helen and Tasha for investing a huge amount of effort in this process.

 


 

Thank you also to Emmett who had a key and constructive role in helping us reach this outcome. Emmett’s dedication to AI safety and balancing stakeholders’ interests was clear.

 


 

Mira did an amazing job throughout all of this, serving the mission, the team, and the company selflessly throughout. She is an incredible leader and OpenAI would not be OpenAI without her. Thank you.

 


 

Greg and I are partners in running this company. We have never quite figured out how to communicate that on the org chart, but we will. In the meantime, I just wanted to make it clear. Thank you for everything you have done since the very beginning, and for how you handled things from the moment this started and over the last week.

 


 

The leadership team–Mira, Brad, Jason, Che, Hannah, Diane, Anna, Bob, Srinivas, Matt, Lilian, Miles, Jan, Wojciech, John, Jonathan, Pat, and many more–is clearly ready to run the company without me. They say one way to evaluate a CEO is how you pick and train your potential successors; on that metric I am doing far better than I realized. It’s clear to me that the company is in great hands, and I hope this is abundantly clear to everyone. Thank you all.

 


 

Jakub, Szymon, and Aleksander are exceptional talents and I’m so happy they have rejoined to move us and our research forward. Thank you.

 

To all of you, our team: I am sure books are going to be written about this time period, and I hope the first thing they say is how amazing the entire team has been. Now that we’re through all of this, we didn’t lose a single employee. You stood firm for each other, this company, and our mission. One of the most important things for the team that builds AGI safely is the ability to handle stressful and uncertain situations, and maintain good judgment throughout. Top marks. Thank you all.

 

Satya, Kevin, Amy, and Brad have been incredible partners throughout this, with exactly the right priorities all the way through. They’ve had our backs and were ready to welcome all of us if we couldn’t achieve our primary goal. We clearly made the right choice to partner with Microsoft and I’m excited that our new board will include them as a non-voting observer. Thank you.

 

To our partners and users, thank you for sticking with us. We really felt the outpouring of support and love, and it helped all of us get through this. The fact that we did not lose a single customer will drive us to work even harder for you, and we are all excited to get back to work.

 

Will Hurd, Brian Chesky, Bret Taylor and Larry Summers put their lives on hold and did an incredible amount to support the mission. I don’t know how they did it so well, but they really did. Thank you.

 

Ollie also put his life on hold this entire time to just do everything he could to help out, in addition to providing his usual unconditional love and support. Thank you and I love you.

 

 

So what’s next?

 

We have three immediate priorities.

 

Advancing our research plan and further investing in our full-stack safety efforts, which have always been critical to our work. Our research roadmap is clear; this was a wonderfully focusing time. I share the excitement you all feel; we will turn this crisis into an opportunity! I’ll work with Mira on this.

 

Continuing to improve and deploy our products and serve our customers. It’s important that people get to experience the benefits and promise of AI, and have the opportunity to shape it. We continue to believe that great products are the best way to do this. I’ll work with Brad, Jason and Anna to ensure our unwavering commitment to users, customers, partners and governments around the world is clear.

 

Bret, Larry, and Adam will be working very hard on the extremely important task of building out a board of diverse perspectives, improving our governance structure and overseeing an independent review of recent events. I look forward to working closely with them on these crucial steps so everyone can be confident in the stability of OpenAI. 

 

I am so looking forward to finishing the job of building beneficial AGI with you all—best team in the world, best mission in the world.

 

Love,
Sam

 

Message from Bret to the company

 

On behalf of the OpenAI Board, I want to express our gratitude to the entire OpenAI community, especially all the OpenAI employees, who came together to help find a path forward for the company over the past week. Your efforts helped enable this incredible organization to continue to serve its mission to ensure that artificial general intelligence benefits all of humanity. We are thrilled that Sam, Mira and Greg are back together leading the company and driving it forward. We look forward to working with them and all of you. 

 

As a Board, we are focused on strengthening OpenAI’s corporate governance. Here’s how we plan to do it:

 

  • We will build a qualified, diverse Board of exceptional individuals whose collective experience represents the breadth of OpenAI’s mission – from technology to safety to policy. We are pleased that this Board will include a non-voting observer for Microsoft.
  • We will further stabilize the OpenAI organization so that we can continue to serve our mission. This will include convening an independent committee of the Board to oversee a review of the recent events.
  • We will enhance the governance structure of OpenAI so that all stakeholders – users, customers, employees, partners, and community members – can trust that OpenAI will continue to thrive.

OpenAI is a more important institution than ever before. ChatGPT has made artificial intelligence a part of daily life for hundreds of millions of people. Its popularity has made AI – its benefits and its risks – central to virtually every conversation about the future of governments, business, and society.

 

We understand the gravity of these discussions and the central role of OpenAI in the development and safety of these awe-inspiring new technologies. Each of you plays a critical part in ensuring that we effectively meet these challenges.  We are committed to listening and learning from you, and I hope to speak with you all very soon.

 

We are grateful to be a part of OpenAI, and excited to work with all of you.

 

Thank you,

Bret Taylor

Chair, OpenAI

https://openai.com/blog/openai-announces-leadership-transition

 

OpenAI announces leadership transition

Chief technology officer Mira Murati appointed interim CEO to lead OpenAI; Sam Altman departs the company. Search process underway to identify permanent successor. The board of directors of OpenAI, Inc., the 501(c)(3) that acts as the overall governing bod

openai.com

 

 

Chief technology officer Mira Murati appointed interim CEO to lead OpenAI; Sam Altman departs the company.

Search process underway to identify permanent successor.

 

The board of directors of OpenAI, Inc., the 501(c)(3) that acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company’s chief technology officer, will serve as interim CEO, effective immediately.

 

 

A member of OpenAI’s leadership team for five years, Mira has played a critical role in OpenAI’s evolution into a global AI leader. She brings a unique skill set, understanding of the company’s values, operations, and business, and already leads the company’s research, product, and safety functions. Given her long tenure and close engagement with all aspects of the company, including her experience in AI governance and policy, the board believes she is uniquely qualified for the role and anticipates a seamless transition while it conducts a formal search for a permanent CEO.

 

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

 

 

In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.”

 

OpenAI’s board of directors consists of OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner.

 

As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.

 

OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.

 


Nov. 9, 2023 - OpenAI Data Partnerships

2023. 11. 22. 11:51 | Posted by 솔웅



https://openai.com/blog/data-partnerships

 

OpenAI Data Partnerships

Working together to create open-source and private datasets for AI training.

openai.com

 

OpenAI Data Partnerships

 

Working together to create open-source and private datasets for AI training.

 

 

November 9, 2023

 

We are introducing OpenAI Data Partnerships, where we’ll work together with organizations to produce public and private datasets for training AI models.

 

Modern AI technology learns skills and aspects of our world — of people, our motivations, interactions, and the way we communicate — by making sense of the data on which it’s trained. To ultimately make AGI that is safe and beneficial to all of humanity, we’d like AI models to deeply understand all subject matters, industries, cultures, and languages, which requires as broad a training dataset as possible. 

 

Including your content can make AI models more helpful to you by increasing their understanding of your domain. We’re already working with many partners who are eager to represent data from their country or industry. For example, we recently partnered with the Icelandic Government and Miðeind ehf to improve GPT-4’s ability to speak Icelandic by integrating their curated datasets. We also partnered with non-profit organization Free Law Project, which aims to democratize access to legal understanding by including their large collection of legal documents in AI training. We know there may be many more who also want to contribute to the future of AI research while discovering the potential of their unique data.

 

Data Partnerships are intended to enable more organizations to help steer the future of AI and benefit from models that are more useful to them, by including content they care about.

 

The kinds of data we’re seeking

We’re interested in large-scale datasets that reflect human society and that are not already easily accessible online to the public today. We can work with any modality, including text, images, audio, or video. We’re particularly looking for data that expresses human intention (e.g. long-form writing or conversations rather than disconnected snippets), across any language, topic, and format. 

 

We can work with data in almost any form and can use our next-generation in-house AI technology to help you digitize and structure your data. For example, we have world-class optical character recognition (OCR) technology to digitize files like PDFs, and automatic speech recognition (ASR) to transcribe spoken words. If the data needs cleaning (e.g. has lots of auto-generated artifacts or transcription errors), we can work with your team to process it into the most useful form. We are not seeking datasets with sensitive or personal information, or information that belongs to a third party; we can work with you to remove this information if you need help.
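
As an illustration of the kind of cleanup described above, here is a minimal, hypothetical Python sketch that strips common auto-generated artifacts (timestamps and bracketed annotations) from a raw ASR transcript. The function name and regex patterns are assumptions chosen for illustration; they are not OpenAI's actual tooling or pipeline.

```python
import re

# Hypothetical cleanup sketch: clean_transcript() and its patterns are
# illustrative assumptions, not OpenAI's actual data-processing tools.
def clean_transcript(text: str) -> str:
    # Remove timestamp markers such as [00:01:23] that ASR tools often emit.
    text = re.sub(r"\[\d{2}:\d{2}:\d{2}\]", " ", text)
    # Remove bracketed annotations like [inaudible], [music], or [laughter].
    text = re.sub(r"\[(?:inaudible|music|laughter)\]", " ", text, flags=re.IGNORECASE)
    # Collapse the whitespace runs left behind by the removals.
    return re.sub(r"\s+", " ", text).strip()

print(clean_transcript("[00:01:23] So the main idea [inaudible] is to share long-form data."))
# -> "So the main idea is to share long-form data."
```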

 

 

Ways to partner with us

We currently have two ways to partner, and may expand in the future:

 

  • Open-Source Archive: We’re seeking partners to help us create an open-source dataset for training language models. This dataset would be public for anyone to use in AI model training. We would also explore using it to safely train additional open-source models ourselves. We believe open-source plays an important role in the ecosystem.
  • Private Datasets: We are also preparing private datasets for training proprietary AI models, including our foundation models and fine-tuned and custom models. If you have data you wish to keep private, but you would like our AI models to have a better understanding of your domain (or you’d even just like to gauge the potential of your data to do so), this is the optimal way to partner. We’ll treat your data with the level of sensitivity and access controls that you prefer. 

Overall, we are seeking partners who want to help us teach AI to understand our world in order to be maximally helpful to everyone. Together, we can move towards AGI that benefits all of humanity.

 
