This blog is where I, as a working developer, organize the new technologies and information I pick up in the field. I have been fortunate to work as a consultant on projects for large companies in the US, so I get many chances to encounter new technologies. I would like to share what I learn about the tools used in US IT projects with as many readers as possible.






Sam Altman returns as CEO, OpenAI has a new initial board

Mira Murati as CTO, Greg Brockman returns as President. Read messages from CEO Sam Altman and board chair Bret Taylor.







Below are messages CEO Sam Altman and board chair Bret Taylor shared with the company this afternoon.




Message from Sam to the company



I am returning to OpenAI as CEO. Mira will return to her role as CTO. The new initial board will consist of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo.




I have never been more excited about the future. I am extremely grateful for everyone’s hard work in an unclear and unprecedented situation, and I believe our resilience and spirit set us apart in the industry. I feel so, so good about our probability of success for achieving our mission.




Before getting to what comes next, I’d like to share some thanks.




I love and respect Ilya, I think he's a guiding light of the field and a gem of a human being. I harbor zero ill will towards him. While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.




I am grateful to Adam, Tasha, and Helen for working with us to come to this solution that best serves the mission. I’m excited to continue to work with Adam and am sincerely thankful to Helen and Tasha for investing a huge amount of effort in this process.




Thank you also to Emmett who had a key and constructive role in helping us reach this outcome. Emmett’s dedication to AI safety and balancing stakeholders’ interests was clear.




Mira did an amazing job throughout all of this, serving the mission, the team, and the company selflessly throughout. She is an incredible leader and OpenAI would not be OpenAI without her. Thank you.




Greg and I are partners in running this company. We have never quite figured out how to communicate that on the org chart, but we will. In the meantime, I just wanted to make it clear. Thank you for everything you have done since the very beginning, and for how you handled things from the moment this started and over the last week.




The leadership team–Mira, Brad, Jason, Che, Hannah, Diane, Anna, Bob, Srinivas, Matt, Lilian, Miles, Jan, Wojciech, John, Jonathan, Pat, and many more–is clearly ready to run the company without me. They say one way to evaluate a CEO is how you pick and train your potential successors; on that metric I am doing far better than I realized. It’s clear to me that the company is in great hands, and I hope this is abundantly clear to everyone. Thank you all.




Jakub, Szymon, and Aleksander are exceptional talents and I’m so happy they have rejoined to move us and our research forward. Thank you.




To all of you, our team: I am sure books are going to be written about this time period, and I hope the first thing they say is how amazing the entire team has been. Now that we’re through all of this, we didn’t lose a single employee. You stood firm for each other, this company, and our mission. One of the most important things for the team that builds AGI safely is the ability to handle stressful and uncertain situations, and maintain good judgment throughout. Top marks. Thank you all.




Satya, Kevin, Amy, and Brad have been incredible partners throughout this, with exactly the right priorities all the way through. They’ve had our backs and were ready to welcome all of us if we couldn’t achieve our primary goal. We clearly made the right choice to partner with Microsoft and I’m excited that our new board will include them as a non-voting observer. Thank you.




To our partners and users, thank you for sticking with us. We really felt the outpouring of support and love, and it helped all of us get through this. The fact that we did not lose a single customer will drive us to work even harder for you, and we are all excited to get back to work.




Will Hurd, Brian Chesky, Bret Taylor and Larry Summers put their lives on hold and did an incredible amount to support the mission. I don’t know how they did it so well, but they really did. Thank you.




Ollie also put his life on hold this entire time to just do everything he could to help out, in addition to providing his usual unconditional love and support. Thank you and I love you.





So what’s next?


We have three immediate priorities.




Advancing our research plan and further investing in our full-stack safety efforts, which have always been critical to our work. Our research roadmap is clear; this was a wonderfully focusing time. I share the excitement you all feel; we will turn this crisis into an opportunity! I’ll work with Mira on this.




Continuing to improve and deploy our products and serve our customers. It’s important that people get to experience the benefits and promise of AI, and have the opportunity to shape it. We continue to believe that great products are the best way to do this. I’ll work with Brad, Jason and Anna to ensure our unwavering commitment to users, customers, partners and governments around the world is clear.




Bret, Larry, and Adam will be working very hard on the extremely important task of building out a board of diverse perspectives, improving our governance structure and overseeing an independent review of recent events. I look forward to working closely with them on these crucial steps so everyone can be confident in the stability of OpenAI. 




I am so looking forward to finishing the job of building beneficial AGI with you all—best team in the world, best mission in the world.






Message from Bret to the company


On behalf of the OpenAI Board, I want to express our gratitude to the entire OpenAI community, especially all the OpenAI employees, who came together to help find a path forward for the company over the past week. Your efforts helped enable this incredible organization to continue to serve its mission to ensure that artificial general intelligence benefits all of humanity. We are thrilled that Sam, Mira and Greg are back together leading the company and driving it forward. We look forward to working with them and all of you. 




As a Board, we are focused on strengthening OpenAI’s corporate governance. Here’s how we plan to do it:




  • We will build a qualified, diverse Board of exceptional individuals whose collective experience represents the breadth of OpenAI’s mission – from technology to safety to policy. We are pleased that this Board will include a non-voting observer for Microsoft.
  • We will further stabilize the OpenAI organization so that we can continue to serve our mission.  This will include convening an independent committee of the Board to oversee a review of the recent events.
  • We will enhance the governance structure of OpenAI so that all stakeholders – users, customers, employees, partners, and community members – can trust that OpenAI will continue to thrive.

OpenAI is a more important institution than ever before. ChatGPT has made artificial intelligence a part of daily life for hundreds of millions of people. Its popularity has made AI – its benefits and its risks – central to virtually every conversation about the future of governments, business, and society.




We understand the gravity of these discussions and the central role of OpenAI in the development and safety of these awe-inspiring new technologies. Each of you plays a critical part in ensuring that we effectively meet these challenges.  We are committed to listening and learning from you, and I hope to speak with you all very soon.




We are grateful to be a part of OpenAI, and excited to work with all of you.




Thank you,

Bret Taylor

Chair, OpenAI



















OpenAI announces leadership transition





Chief technology officer Mira Murati appointed interim CEO to lead OpenAI; Sam Altman departs the company.

Search process underway to identify permanent successor.




The board of directors of OpenAI, Inc., the 501(c)(3) that acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company’s chief technology officer, will serve as interim CEO, effective immediately.





A member of OpenAI’s leadership team for five years, Mira has played a critical role in OpenAI’s evolution into a global AI leader. She brings a unique skill set, understanding of the company’s values, operations, and business, and already leads the company’s research, product, and safety functions. Given her long tenure and close engagement with all aspects of the company, including her experience in AI governance and policy, the board believes she is uniquely qualified for the role and anticipates a seamless transition while it conducts a formal search for a permanent CEO.




Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.





In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.”




OpenAI’s board of directors consists of OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner.




As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.




OpenAI was founded as a non-profit in 2015 with the core mission of ensuring that artificial general intelligence benefits all of humanity. In 2019, OpenAI restructured to ensure that the company could raise capital in pursuit of this mission, while preserving the nonprofit's mission, governance, and oversight. The majority of the board is independent, and the independent directors do not hold equity in OpenAI. While the company has experienced dramatic growth, it remains the fundamental governance responsibility of the board to advance OpenAI’s mission and preserve the principles of its Charter.



















Nov. 9, 2023 - OpenAI Data Partnerships

2023. 11. 22. 11:51 | Posted by 솔웅




OpenAI Data Partnerships

Working together to create open-source and private datasets for AI training.






November 9, 2023


We are introducing OpenAI Data Partnerships, where we’ll work together with organizations to produce public and private datasets for training AI models.




Modern AI technology learns skills and aspects of our world — of people, our motivations, interactions, and the way we communicate — by making sense of the data on which it’s trained. To ultimately make AGI that is safe and beneficial to all of humanity, we’d like AI models to deeply understand all subject matters, industries, cultures, and languages, which requires as broad a training dataset as possible. 




Including your content can make AI models more helpful to you by increasing their understanding of your domain. We’re already working with many partners who are eager to represent data from their country or industry. For example, we recently partnered with the Icelandic Government and Miðeind ehf to improve GPT-4’s ability to speak Icelandic by integrating their curated datasets. We also partnered with non-profit organization Free Law Project, which aims to democratize access to legal understanding by including their large collection of legal documents in AI training. We know there may be many more who also want to contribute to the future of AI research while discovering the potential of their unique data.




Data Partnerships are intended to enable more organizations to help steer the future of AI and benefit from models that are more useful to them, by including content they care about.




The kinds of data we’re seeking

We’re interested in large-scale datasets that reflect human society and that are not already easily accessible online to the public today. We can work with any modality, including text, images, audio, or video. We’re particularly looking for data that expresses human intention (e.g. long-form writing or conversations rather than disconnected snippets), across any language, topic, and format. 




We can work with data in almost any form and can use our next-generation in-house AI technology to help you digitize and structure your data. For example, we have world-class optical character recognition (OCR) technology to digitize files like PDFs, and automatic speech recognition (ASR) to transcribe spoken words. If the data needs cleaning (e.g. has lots of auto-generated artifacts or transcription errors), we can work with your team to process it into the most useful form. We are not seeking datasets with sensitive or personal information, or information that belongs to a third party; we can work with you to remove this information if you need help.


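To make the cleanup step concrete, here is a minimal sketch of artifact removal in Python. The patterns are hypothetical examples of transcription noise, not OpenAI's actual processing pipeline:

```python
import re

def clean_transcript(text: str) -> str:
    """Remove common auto-generated artifacts from a raw transcript.

    The patterns below are illustrative examples of transcription noise,
    not an actual OpenAI processing pipeline.
    """
    # Drop bracketed stage directions like [inaudible] or [music]
    text = re.sub(r"\[(?:inaudible|music|applause|crosstalk)\]", "", text,
                  flags=re.IGNORECASE)
    # Collapse stutter repetitions such as "the the"
    text = re.sub(r"\b(\w+)( \1\b)+", r"\1", text, flags=re.IGNORECASE)
    # Normalize runs of whitespace left behind by the removals
    return re.sub(r"\s+", " ", text).strip()
```

Real corpora would of course need rules tailored to their own artifacts; the point is only that this kind of normalization can be scripted once the noise patterns are identified.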



Ways to partner with us

We currently have two ways to partner, and may expand in the future:



  • Open-Source Archive: We’re seeking partners to help us create an open-source dataset for training language models. This dataset would be public for anyone to use in AI model training. We would also explore using it to safely train additional open-source models ourselves. We believe open-source plays an important role in the ecosystem.
  • Private Datasets: We are also preparing private datasets for training proprietary AI models, including our foundation models and fine-tuned and custom models. If you have data you wish to keep private, but you would like our AI models to have a better understanding of your domain (or you’d even just like to gauge the potential of your data to do so), this is the optimal way to partner. We’ll treat your data with the level of sensitivity and access controls that you prefer. 

Overall, we are seeking partners who want to help us teach AI to understand our world in order to be maximally helpful to everyone. Together, we can move towards AGI that benefits all of humanity.










Nov. 6, 2023 - Introducing GPTs

2023. 11. 22. 11:46 | Posted by 솔웅




Introducing GPTs

You can now create custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills.









November 6, 2023


We’re rolling out custom versions of ChatGPT that you can create for a specific purpose—called GPTs. GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home—and then share that creation with others. For example, GPTs can help you learn the rules to any board game, help teach your kids math, or design stickers.




Anyone can easily build their own GPT—no coding is required. You can make them for yourself, just for your company’s internal use, or for everyone. Creating one is as easy as starting a conversation, giving it instructions and extra knowledge, and picking what it can do, like searching the web, making images or analyzing data. Try it out at chat.openai.com/create.




Example GPTs are available today for ChatGPT Plus and Enterprise users to try out including Canva and Zapier AI Actions. We plan to offer GPTs to more users soon.




Learn more about our OpenAI DevDay announcements for new models and developer products.




GPTs let you customize ChatGPT for a specific purpose

Since launching ChatGPT people have been asking for ways to customize ChatGPT to fit specific ways that they use it. We launched Custom Instructions in July that let you set some preferences, but requests for more control kept coming. Many power users maintain a list of carefully crafted prompts and instruction sets, manually copying them into ChatGPT. GPTs now do all of that for you.




The best GPTs will be invented by the community

We believe the most incredible GPTs will come from builders in the community. Whether you’re an educator, coach, or just someone who loves to build helpful tools, you don’t need to know coding to make one and share your expertise.




The GPT Store is rolling out later this month

Starting today, you can create GPTs and share them publicly. Later this month, we’re launching the GPT Store, featuring creations by verified builders. Once in the store, GPTs become searchable and may climb the leaderboards. We will also spotlight the most useful and delightful GPTs we come across in categories like productivity, education, and “just for fun”. In the coming months, you’ll also be able to earn money based on how many people are using your GPT.




We built GPTs with privacy and safety in mind

As always, you are in control of your data with ChatGPT. Your chats with GPTs are not shared with builders. If a GPT uses third party APIs, you choose whether data can be sent to that API. When builders customize their own GPT with actions or knowledge, the builder can choose if user chats with that GPT can be used to improve and train our models. These choices build upon the existing privacy controls users have, including the option to opt your entire account out of model training. 




We’ve set up new systems to help review GPTs against our usage policies. These systems stack on top of our existing mitigations and aim to prevent users from sharing harmful GPTs, including those that involve fraudulent activity, hateful content, or adult themes. We’ve also taken steps to build user trust by allowing builders to verify their identity. We'll continue to monitor and learn how people use GPTs and update and strengthen our safety mitigations. If you have concerns with a specific GPT, you can also use our reporting feature on the GPT shared page to notify our team.




GPTs will continue to get more useful and smarter, and you’ll eventually be able to let them take on real tasks in the real world. In the field of AI, these systems are often discussed as “agents”. We think it’s important to move incrementally towards this future, as it will require careful technical and safety work—and time for society to adapt. We have been thinking deeply about the societal implications and will have more analysis to share soon.




Developers can connect GPTs to the real world

In addition to using our built-in capabilities, you can also define custom actions by making one or more APIs available to the GPT. Like plugins, actions allow GPTs to integrate external data or interact with the real-world. Connect GPTs to databases, plug them into emails, or make them your shopping assistant. For example, you could integrate a travel listings database, connect a user’s email inbox, or facilitate e-commerce orders.

The design of actions builds upon insights from our plugins beta, granting developers greater control over the model and how their APIs are called. Migrating from the plugins beta is easy with the ability to use your existing plugin manifest to define actions for your GPT.


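For a rough sense of what an action definition involves, the sketch below builds an OpenAPI-style schema for a hypothetical travel-listings endpoint. The server URL, path, and operation are illustrative placeholders, not a real API:

```python
# A minimal OpenAPI-style schema for a hypothetical custom action.
# The server URL, path, and operation are illustrative placeholders.
travel_action_schema = {
    "openapi": "3.0.0",
    "info": {"title": "Travel Listings", "version": "1.0.0"},
    "servers": [{"url": "https://example.com/api"}],
    "paths": {
        "/listings": {
            "get": {
                "operationId": "searchListings",
                "summary": "Search travel listings by destination",
                "parameters": [
                    {
                        "name": "destination",
                        "in": "query",
                        "required": True,
                        "schema": {"type": "string"},
                    }
                ],
            }
        }
    },
}
```

As the announcement notes, a schema like this can be carried over from an existing plugin manifest, since actions build on the plugins beta.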


Enterprise customers can deploy internal-only GPTs


Since we launched ChatGPT Enterprise a few months ago, early customers have expressed the desire for even more customization that aligns with their business. GPTs answer this call by allowing you to create versions of ChatGPT for specific use cases, departments, or proprietary datasets. Early customers like Amgen, Bain, and Square are already leveraging internal GPTs to do things like craft marketing materials embodying their brand, aid support staff with answering customer questions, or help new software engineers with onboarding.




Enterprises can get started with GPTs on Wednesday. You can now empower users inside your company to design internal-only GPTs without code and securely publish them to your workspace. The admin console lets you choose how GPTs are shared and whether external GPTs may be used inside your business. Like all usage on ChatGPT Enterprise, we do not use your conversations with GPTs to improve our models.




We want more people to shape how AI behaves

We designed GPTs so more people can build with us. Involving the community is critical to our mission of building safe AGI that benefits humanity. It allows everyone to see a wide and varied range of useful GPTs and get a more concrete sense of what’s ahead. And by broadening the group of people who decide 'what to build' beyond just those with access to advanced technology it's likely we'll have safer and better aligned AI. The same desire to build with people, not just for them, drove us to launch the OpenAI API and to research methods for incorporating democratic input into AI behavior, which we plan to share more about soon.


우리는 더 많은 사람들이 우리와 함께 구축할 수 있도록 GPTs를 설계했습니다. 커뮤니티의 참여는 인류에게 이익이 되는 안전한 AGI를 구축하려는 우리의 사명에 매우 중요합니다. 이를 통해 모든 사람이 광범위하고 다양한 유용한 GPTs를 확인하고 앞으로 다가올 일에 대해 보다 구체적인 감을 잡을 수 있습니다. 그리고 '무엇을 구축할지'를 결정하는 사람들의 범위를 첨단 기술에 접근할 수 있는 사람들 너머로 확대함으로써 더 안전하고 더 잘 정렬된(aligned) AI를 갖게 될 가능성이 높습니다. 단지 사람들을 위해서가 아니라 사람들과 함께 구축하려는 동일한 열망으로 우리는 OpenAI API를 출시하고 AI 행동에 민주적 의견을 반영하는 방법을 연구하게 되었으며, 이에 대해 곧 더 자세히 공유할 계획입니다.


We’ve made ChatGPT Plus fresher and simpler to use

Finally, ChatGPT Plus now includes fresh information up to April 2023. We’ve also heard your feedback about how the model picker is a pain. Starting today, no more hopping between models; everything you need is in one place. You can access DALL·E, browsing, and data analysis all without switching. You can also attach files to let ChatGPT search PDFs and other document types. Find us at chatgpt.com.


마지막으로 ChatGPT Plus에는 이제 2023년 4월까지의 최신 정보가 포함됩니다. 모델 선택기가 불편하다는 피드백도 들었습니다. 오늘부터 더 이상 모델 간을 오가지 않아도 됩니다. 필요한 모든 것이 한 곳에 있습니다. 전환 없이 DALL·E, 브라우징, 데이터 분석에 모두 액세스할 수 있습니다. ChatGPT가 PDF 및 기타 문서 유형을 검색할 수 있도록 파일을 첨부할 수도 있습니다. chatgpt.com에서 우리를 찾아보세요.








New models and developer products announced at DevDay

GPT-4 Turbo with 128K context and lower prices, the new Assistants API, GPT-4 Turbo with Vision, DALL·E 3 API, and more.








November 6, 2023



Today, we shared dozens of new additions and improvements, and reduced pricing across many parts of our platform. These include:


오늘 우리는 플랫폼의 여러 부분에 걸쳐 수십 가지의 새로운 추가 및 개선 사항과 가격 인하를 공유했습니다. 여기에는 다음이 포함됩니다.


  • New GPT-4 Turbo model that is more capable, cheaper and supports a 128K context window
  • 더 유능하고 저렴하며 128K 컨텍스트 창을 지원하는 새로운 GPT-4 Turbo 모델
  • New Assistants API that makes it easier for developers to build their own assistive AI apps that have goals and can call models and tools
  • 개발자가 목표를 갖고 모델과 도구를 호출할 수 있는 자체 보조 AI 앱을 더 쉽게 구축할 수 있게 해주는 새로운 Assistants API
  • New multimodal capabilities in the platform, including vision, image creation (DALL·E 3), and text-to-speech (TTS)
  • 비전, 이미지 생성(DALL·E 3) 및 TTS(텍스트 음성 변환)를 포함한 플랫폼의 새로운 다중 모드 기능

We’ll begin rolling out new features to OpenAI customers starting at 1pm PT today.


오늘 오후 1시(태평양 표준시)부터 OpenAI 고객에게 새로운 기능을 선보일 예정입니다.


Learn more about OpenAI DevDay announcements for ChatGPT.


ChatGPT에 대한 OpenAI DevDay 공지사항에 대해 자세히 알아보세요.


GPT-4 Turbo with 128K context

We released the first version of GPT-4 in March and made GPT-4 generally available to all developers in July. Today we’re launching a preview of the next generation of this model, GPT-4 Turbo.


우리는 3월에 GPT-4의 첫 번째 버전을 출시했으며 7월에 모든 개발자가 GPT-4를 일반적으로 사용할 수 있도록 했습니다. 오늘 우리는 이 모델의 차세대 모델인 GPT-4 Turbo의 미리보기를 출시합니다.


GPT-4 Turbo is more capable and has knowledge of world events up to April 2023. It has a 128k context window so it can fit the equivalent of more than 300 pages of text in a single prompt. We also optimized its performance so we are able to offer GPT-4 Turbo at a 3x cheaper price for input tokens and a 2x cheaper price for output tokens compared to GPT-4.


GPT-4 Turbo는 더 많은 능력을 갖추고 2023년 4월까지의 세계 사건에 대한 지식을 보유하고 있습니다. 128k 컨텍스트 창이 있으므로 단일 프롬프트에 300페이지 이상의 텍스트에 해당하는 내용을 넣을 수 있습니다. 또한 GPT-4에 비해 입력 토큰의 경우 3배, 출력 토큰의 경우 2배 저렴한 가격으로 GPT-4 Turbo를 제공할 수 있도록 성능을 최적화했습니다.


GPT-4 Turbo is available for all paying developers to try by passing gpt-4-1106-preview in the API and we plan to release the stable production-ready model in the coming weeks.


GPT-4 Turbo는 모든 유료 개발자가 API에서 gpt-4-1106-preview를 전달하여 사용해 볼 수 있으며 앞으로 몇 주 안에 안정적인 프로덕션 준비 모델을 출시할 계획입니다.
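As a concrete sketch, a minimal Chat Completions request against the preview model might look like the following, assuming the v1 OpenAI Python SDK (`pip install openai>=1.0`). Only the model name comes from the post; the prompts and the `main` helper are illustrative:

```python
# Minimal Chat Completions request targeting the GPT-4 Turbo preview.
# Only the model name is taken from the announcement; everything else is illustrative.
request = {
    "model": "gpt-4-1106-preview",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize the DevDay announcements in one sentence."},
    ],
}

def main() -> str:
    """Send the request; requires the openai package and OPENAI_API_KEY set."""
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(**request)
    return resp.choices[0].message.content
```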


Function calling updates

Function calling lets you describe functions of your app or external APIs to models, and have the model intelligently choose to output a JSON object containing arguments to call those functions. We’re releasing several improvements today, including the ability to call multiple functions in a single message: users can send one message requesting multiple actions, such as “open the car window and turn off the A/C”, which would previously require multiple roundtrips with the model (learn more). We are also improving function calling accuracy: GPT-4 Turbo is more likely to return the right function parameters.


함수 호출을 사용하면 앱이나 외부 API의 함수를 모델에 설명하고, 모델이 해당 함수를 호출하기 위한 인수가 담긴 JSON 개체를 지능적으로 출력하도록 할 수 있습니다. 오늘 우리는 단일 메시지로 여러 함수를 호출하는 기능을 포함해 몇 가지 개선 사항을 출시합니다. 사용자는 "차 창문을 열고 에어컨을 끄세요"처럼 여러 작업을 요청하는 메시지 하나를 보낼 수 있으며, 이전에는 이런 요청에 모델과 여러 번 왕복해야 했습니다(자세히 알아보기). 또한 함수 호출 정확도도 향상되었습니다. GPT-4 Turbo는 올바른 함수 매개변수를 반환할 가능성이 더 높습니다.
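The "open the car window and turn off the A/C" case above can be sketched as two tool schemas plus a local dispatcher for the calls the model returns in a single turn. The tool names and the simulated model output are illustrative, not part of the announcement:

```python
import json

# Two hypothetical tool schemas the model can choose between.
tools = [
    {"type": "function", "function": {
        "name": "open_car_window",
        "parameters": {"type": "object",
                       "properties": {"side": {"type": "string"}}}}},
    {"type": "function", "function": {
        "name": "turn_off_ac",
        "parameters": {"type": "object", "properties": {}}}},
]

def open_car_window(side="driver"):
    return f"window {side} opened"

def turn_off_ac():
    return "A/C off"

DISPATCH = {"open_car_window": open_car_window, "turn_off_ac": turn_off_ac}

def run_tool_calls(tool_calls):
    """Execute each (name, json-args) pair returned by the model in one message."""
    return [DISPATCH[name](**json.loads(args)) for name, args in tool_calls]

# Simulated model output for "open the car window and turn off the A/C":
results = run_tool_calls([("open_car_window", '{"side": "driver"}'),
                          ("turn_off_ac", "{}")])
```

With parallel function calling, both actions arrive in one model turn instead of two roundtrips.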


Improved instruction following and JSON mode

GPT-4 Turbo performs better than our previous models on tasks that require the careful following of instructions, such as generating specific formats (e.g., “always respond in XML”). It also supports our new JSON mode, which ensures the model will respond with valid JSON. The new API parameter response_format enables the model to constrain its output to generate a syntactically correct JSON object. JSON mode is useful for developers generating JSON in the Chat Completions API outside of function calling.


GPT-4 Turbo는 특정 형식 생성(예: "항상 XML로 응답")과 같이 지침을 주의 깊게 따라야 하는 작업에서 이전 모델보다 더 나은 성능을 발휘합니다. 또한 모델이 유효한 JSON으로 응답하도록 보장하는 새로운 JSON 모드도 지원합니다. 새로운 API 매개변수인 response_format을 사용하면 모델이 구문적으로 올바른 JSON 개체를 생성하도록 출력을 제한할 수 있습니다. JSON 모드는 함수 호출 외부에서 Chat Completions API에서 JSON을 생성하는 개발자에게 유용합니다.
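A hedged sketch of JSON mode via the `response_format` parameter described above; the model name and prompt are illustrative, and note the API expects the prompt itself to mention JSON when this mode is on:

```python
import json

# JSON-mode request: response_format constrains the model to valid JSON output.
request = {
    "model": "gpt-4-1106-preview",
    "response_format": {"type": "json_object"},
    "messages": [{
        "role": "user",
        "content": "Return a JSON object with keys 'city' and 'country' for Paris.",
    }],
}

def main() -> dict:
    """Send the request and parse the guaranteed-parseable JSON reply."""
    from openai import OpenAI
    resp = OpenAI().chat.completions.create(**request)
    return json.loads(resp.choices[0].message.content)
```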


Reproducible outputs and log probabilities

The new seed parameter enables reproducible outputs by making the model return consistent completions most of the time. This beta feature is useful for use cases such as replaying requests for debugging, writing more comprehensive unit tests, and generally having a higher degree of control over the model behavior. We at OpenAI have been using this feature internally for our own unit tests and have found it invaluable. We’re excited to see how developers will use it. Learn more.


새로운 시드 매개변수는 모델이 대부분의 경우 일관된 완료를 반환하도록 하여 재현 가능한 출력을 가능하게 합니다. 이 베타 기능은 디버깅 요청 재생, 보다 포괄적인 단위 테스트 작성, 일반적으로 모델 동작에 대한 더 높은 수준의 제어와 같은 사용 사례에 유용합니다. OpenAI에서는 자체 단위 테스트를 위해 이 기능을 내부적으로 사용해 왔으며 이 기능이 매우 중요하다는 것을 알았습니다. 개발자들이 이를 어떻게 사용할지 기대됩니다. 더 알아보기.
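The seed feature can be sketched as follows: pin `seed` (and a temperature of 0), then compare `system_fingerprint` across calls to detect backend changes. The prompt and helper are illustrative:

```python
# Reproducible-output request: same seed + same system_fingerprint
# should usually yield identical completions.
request = {
    "model": "gpt-4-1106-preview",
    "seed": 42,
    "temperature": 0,
    "messages": [{"role": "user", "content": "Pick a fruit at random."}],
}

def same_output_twice() -> bool:
    """Issue the request twice and check whether the outputs match."""
    from openai import OpenAI
    client = OpenAI()
    a = client.chat.completions.create(**request)
    b = client.chat.completions.create(**request)
    return (a.system_fingerprint == b.system_fingerprint
            and a.choices[0].message.content == b.choices[0].message.content)
```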


We’re also launching a feature to return the log probabilities for the most likely output tokens generated by GPT-4 Turbo and GPT-3.5 Turbo in the next few weeks, which will be useful for building features such as autocomplete in a search experience.


또한 앞으로 몇 주 안에 GPT-4 Turbo 및 GPT-3.5 Turbo에서 생성된 가장 가능성이 높은 출력 토큰에 대한 로그 확률을 반환하는 기능을 출시할 예정입니다. 이는 검색 환경에서 자동 완성과 같은 기능을 구축하는 데 유용할 것입니다.


Updated GPT-3.5 Turbo

In addition to GPT-4 Turbo, we are also releasing a new version of GPT-3.5 Turbo that supports a 16K context window by default. The new 3.5 Turbo supports improved instruction following, JSON mode, and parallel function calling. For instance, our internal evals show a 38% improvement on format following tasks such as generating JSON, XML and YAML. Developers can access this new model by calling gpt-3.5-turbo-1106 in the API. Applications using the gpt-3.5-turbo name will automatically be upgraded to the new model on December 11. Older models will continue to be accessible by passing gpt-3.5-turbo-0613 in the API until June 13, 2024. Learn more.


GPT-4 Turbo 외에도 기본적으로 16K 컨텍스트 창을 지원하는 GPT-3.5 Turbo의 새 버전도 출시하고 있습니다. 새로운 3.5 Turbo는 향상된 명령 따르기, JSON 모드 및 병렬 함수 호출을 지원합니다. 예를 들어 내부 평가에서는 JSON, XML, YAML 생성과 같은 작업에 따른 형식이 38% 개선된 것으로 나타났습니다. 개발자는 API에서 gpt-3.5-turbo-1106을 호출하여 이 새로운 모델에 액세스할 수 있습니다. gpt-3.5-turbo 이름을 사용하는 애플리케이션은 12월 11일에 새 모델로 자동 업그레이드됩니다. 이전 모델은 2024년 6월 13일까지 API에서 gpt-3.5-turbo-0613을 전달하여 계속 액세스할 수 있습니다. 자세히 알아보기


Assistants API, Retrieval, and Code Interpreter

Today, we’re releasing the Assistants API, our first step towards helping developers build agent-like experiences within their own applications. An assistant is a purpose-built AI that has specific instructions, leverages extra knowledge, and can call models and tools to perform tasks. The new Assistants API provides new capabilities such as Code Interpreter and Retrieval as well as function calling to handle a lot of the heavy lifting that you previously had to do yourself and enable you to build high-quality AI apps.


오늘 우리는 개발자가 자신의 애플리케이션 내에서 에이전트와 같은 경험을 구축할 수 있도록 돕기 위한 첫 번째 단계인 Assistants API를 출시합니다. 어시스턴트는 특정 지침이 있고, 추가 지식을 활용하며, 작업을 수행하기 위해 모델과 도구를 호출할 수 있는 특수 목적의 AI입니다. 새로운 Assistants API는 코드 해석기 및 검색과 같은 새로운 기능과 함수 호출을 제공하여 이전에 직접 수행해야 했던 많은 무거운 작업을 처리하고 고품질 AI 앱을 구축할 수 있도록 해줍니다.


This API is designed for flexibility; use cases range from a natural language-based data analysis app, a coding assistant, an AI-powered vacation planner, a voice-controlled DJ, a smart visual canvas—the list goes on. The Assistants API is built on the same capabilities that enable our new GPTs product: custom instructions and tools such as Code interpreter, Retrieval, and function calling.


이 API는 유연성을 위해 설계되었습니다. 사용 사례는 자연어 기반 데이터 분석 앱, 코딩 도우미, AI 기반 휴가 플래너, 음성 제어 DJ, 스마트 시각적 캔버스에 이르기까지 다양합니다. Assistants API는 새로운 GPT 제품을 활성화하는 것과 동일한 기능, 즉 코드 해석기, 검색, 함수 호출과 같은 맞춤 지침 및 도구를 기반으로 구축되었습니다.


A key change introduced by this API is persistent and infinitely long threads, which allow developers to hand off thread state management to OpenAI and work around context window constraints. With the Assistants API, you simply add each new message to an existing thread.


이 API에 의해 도입된 주요 변경 사항은 개발자가 스레드 상태 관리를 OpenAI에 넘겨주고 컨텍스트 창 제약 조건을 해결할 수 있도록 하는 지속적이고 무한히 긴 스레드입니다. Assistants API를 사용하면 각각의 새 메시지를 기존 스레드에 추가하기만 하면 됩니다.


Assistants also have access to call new tools as needed, including:


어시스턴트는 필요에 따라 다음과 같은 새로운 도구를 호출할 수도 있습니다.


  • Code Interpreter: writes and runs Python code in a sandboxed execution environment, and can generate graphs and charts, and process files with diverse data and formatting. It allows your assistants to run code iteratively to solve challenging code and math problems, and more.
  • 코드 해석기: 샌드박스 실행 환경에서 Python 코드를 작성 및 실행하고, 그래프와 차트를 생성하고, 다양한 데이터와 형식이 포함된 파일을 처리할 수 있습니다. 이를 통해 어시스턴트는 코드를 반복적으로 실행하여 까다로운 코드 및 수학 문제 등을 해결할 수 있습니다.
  • Retrieval: augments the assistant with knowledge from outside our models, such as proprietary domain data, product information or documents provided by your users. This means you don’t need to compute and store embeddings for your documents, or implement chunking and search algorithms. The Assistants API optimizes what retrieval technique to use based on our experience building knowledge retrieval in ChatGPT.
  • 검색: 독점 도메인 데이터, 제품 정보 또는 사용자가 제공한 문서와 같은 모델 외부의 지식으로 어시스턴트를 강화합니다. 즉, 문서에 대한 임베딩을 계산하고 저장할 필요가 없으며 청크 분할 및 검색 알고리즘을 구현할 필요가 없습니다. Assistants API는 ChatGPT에서 지식 검색을 구축한 경험을 바탕으로 사용할 검색 기술을 최적화합니다.
  • Function calling: enables assistants to invoke functions you define and incorporate the function response in their messages.
  • 함수 호출: 어시스턴트가 사용자가 정의한 함수를 호출하고 메시지에 함수 응답을 통합할 수 있습니다.
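The assistant/thread/run flow with these tools can be sketched as below, assuming the beta namespace of the v1 OpenAI Python SDK; the assistant name, instructions, and prompt are illustrative:

```python
# Assistants API beta flow: create an assistant with tools, start a thread,
# append a message, then run the thread.
TOOLS = [{"type": "code_interpreter"}, {"type": "retrieval"}]

def run_assistant():
    from openai import OpenAI
    client = OpenAI()
    assistant = client.beta.assistants.create(
        name="Data helper",
        instructions="Answer using the attached files; write Python when needed.",
        model="gpt-4-1106-preview",
        tools=TOOLS,
    )
    thread = client.beta.threads.create()        # thread state lives server-side
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user",
        content="Plot the monthly totals in the uploaded CSV.",
    )
    # The run executes the thread; poll its status until "completed",
    # then read the assistant's reply from the thread's messages.
    return client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=assistant.id)
```

Because the thread is persistent, each follow-up question is just another `messages.create` call on the same thread id.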

As with the rest of the platform, data and files passed to the OpenAI API are never used to train our models and developers can delete the data when they see fit.


플랫폼의 나머지 부분과 마찬가지로 OpenAI API에 전달된 데이터와 파일은 모델을 교육하는 데 사용되지 않으며 개발자는 적절하다고 판단되는 경우 데이터를 삭제할 수 있습니다.


You can try the Assistants API beta without writing any code by heading to the Assistants playground.


Assistants 놀이터로 이동하면 코드를 작성하지 않고도 Assistants API 베타를 사용해 볼 수 있습니다.






The Assistants API is in beta and available to all developers starting today. Please share what you build with us (@OpenAI) along with your feedback which we will incorporate as we continue building over the coming weeks. Pricing for the Assistants APIs and its tools is available on our pricing page.


Assistants API는 베타 버전이며 오늘부터 모든 개발자가 사용할 수 있습니다. 앞으로 몇 주 동안 계속해서 구축하면서 반영할 피드백과 함께 여러분이 구축한 내용을 우리(@OpenAI)와 공유해 주세요. Assistants API 및 해당 도구의 가격은 가격 페이지에서 확인할 수 있습니다.


New modalities in the API

GPT-4 Turbo with vision

GPT-4 Turbo can accept images as inputs in the Chat Completions API, enabling use cases such as generating captions, analyzing real world images in detail, and reading documents with figures. For example, BeMyEyes uses this technology to help people who are blind or have low vision with daily tasks like identifying a product or navigating a store. Developers can access this feature by using gpt-4-vision-preview in the API. We plan to roll out vision support to the main GPT-4 Turbo model as part of its stable release. Pricing depends on the input image size. For instance, passing an image with 1080×1080 pixels to GPT-4 Turbo costs $0.00765. Check out our vision guide.


GPT-4 Turbo는 이미지를 Chat Completions API의 입력으로 받아들여 캡션 생성, 실제 이미지 세부 분석, 그림이 포함된 문서 읽기 등의 사용 사례를 지원합니다. 예를 들어, BeMyEyes는 이 기술을 사용하여 시각 장애가 있거나 시력이 낮은 사람들이 제품 식별이나 매장 탐색과 같은 일상 업무를 수행할 수 있도록 돕습니다. 개발자는 API에서 gpt-4-vision-preview를 사용하여 이 기능에 액세스할 수 있습니다. 우리는 안정적인 릴리스의 일부로 주요 GPT-4 Turbo 모델에 비전 지원을 출시할 계획입니다. 가격은 입력 이미지 크기에 따라 다릅니다. 예를 들어 1080×1080 픽셀의 이미지를 GPT-4 Turbo로 전달하는 데 드는 비용은 $0.00765입니다. 비전 가이드를 확인해 보세요.



Developers can integrate DALL·E 3, which we recently launched to ChatGPT Plus and Enterprise users, directly into their apps and products through our Images API by specifying dall-e-3 as the model. Companies like Snap, Coca-Cola, and Shutterstock have used DALL·E 3 to programmatically generate images and designs for their customers and campaigns. Similar to the previous version of DALL·E, the API incorporates built-in moderation to help developers protect their applications against misuse. We offer different format and quality options, with prices starting at $0.04 per image generated. Check out our guide to getting started with DALL·E 3 in the API.


개발자는 dall-e-3를 모델로 지정하여 Images API를 통해 최근 ChatGPT Plus 및 Enterprise 사용자에게 출시된 DALL·E 3를 앱과 제품에 직접 통합할 수 있습니다. Snap, Coca-Cola, Shutterstock과 같은 회사에서는 DALL·E 3를 사용하여 고객과 캠페인을 위한 이미지와 디자인을 프로그래밍 방식으로 생성했습니다. 이전 버전의 DALL·E와 마찬가지로 API에는 개발자가 응용 프로그램을 오용으로부터 보호할 수 있도록 조정 기능이 내장되어 있습니다. 우리는 생성된 이미지당 $0.04부터 시작하는 가격으로 다양한 형식과 품질 옵션을 제공합니다. API에서 DALL·E 3을 시작하는 방법에 대한 가이드를 확인하세요.
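A minimal Images API call with `dall-e-3` as the model might look like this; the prompt and size are illustrative:

```python
# DALL·E 3 image generation request via the Images API.
request = {
    "model": "dall-e-3",
    "prompt": "A watercolor map of Seoul at dawn",
    "size": "1024x1024",
    "n": 1,
}

def main() -> str:
    """Generate one image and return its hosted URL."""
    from openai import OpenAI
    resp = OpenAI().images.generate(**request)
    return resp.data[0].url
```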



Text-to-speech (TTS)

Developers can now generate human-quality speech from text via the text-to-speech API. Our new TTS model offers six preset voices to choose from and two model variants, tts-1 and tts-1-hd. tts-1 is optimized for real-time use cases and tts-1-hd is optimized for quality. Pricing starts at $0.015 per 1,000 input characters. Check out our TTS guide to get started.


이제 개발자는 텍스트 음성 변환 API를 통해 텍스트에서 인간 수준의 음성을 생성할 수 있습니다. 새로운 TTS 모델은 선택할 수 있는 6개의 사전 설정 음성과 2개의 모델 변형인 tts-1 및 tts-1-hd를 제공합니다. tts-1은 실시간 사용 사례에 최적화되어 있고 tts-1-hd는 품질에 최적화되어 있습니다. 가격은 입력 1,000자당 $0.015부터 시작됩니다. 시작하려면 TTS 가이드를 확인하세요.
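A TTS request sketch with the real-time variant; the voice name "alloy" and the output path are assumptions for illustration:

```python
# Text-to-speech request: pick a model variant and one of the preset voices.
request = {
    "model": "tts-1",            # or "tts-1-hd" for higher quality
    "voice": "alloy",            # assumed preset voice name
    "input": "The quick brown fox jumped over the lazy dog.",
}

def main(path: str = "speech.mp3") -> str:
    """Generate audio and write it to disk; returns the output path."""
    from openai import OpenAI
    resp = OpenAI().audio.speech.create(**request)
    resp.stream_to_file(path)    # write the returned audio bytes to a file
    return path
```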


Listen to voice samples


As the golden sun dips below the horizon, casting long shadows across the tranquil meadow, the world seems to hush, and a sense of calmness envelops the Earth, promising a peaceful night’s rest for all living beings.








Model customization

GPT-4 fine tuning experimental access

We’re creating an experimental access program for GPT-4 fine-tuning. Preliminary results indicate that GPT-4 fine-tuning requires more work to achieve meaningful improvements over the base model compared to the substantial gains realized with GPT-3.5 fine-tuning. As quality and safety for GPT-4 fine-tuning improves, developers actively using GPT-3.5 fine-tuning will be presented with an option to apply to the GPT-4 program within their fine-tuning console.


우리는 GPT-4 미세 조정을 위한 실험적인 액세스 프로그램을 만들고 있습니다. 예비 결과에 따르면 GPT-4 미세 조정은 GPT-3.5 미세 조정을 통해 실현된 상당한 이득에 비해 기본 모델에 비해 의미 있는 개선을 달성하기 위해 더 많은 작업이 필요합니다. GPT-4 미세 조정의 품질과 안전성이 향상됨에 따라 GPT-3.5 미세 조정을 적극적으로 사용하는 개발자에게는 미세 조정 콘솔 내에서 GPT-4 프로그램에 적용할 수 있는 옵션이 제공됩니다.


Custom models

For organizations that need even more customization than fine-tuning can provide (particularly applicable to domains with extremely large proprietary datasets—billions of tokens at minimum), we’re also launching a Custom Models program, giving selected organizations an opportunity to work with a dedicated group of OpenAI researchers to train custom GPT-4 to their specific domain. This includes modifying every step of the model training process, from doing additional domain specific pre-training, to running a custom RL post-training process tailored for the specific domain. Organizations will have exclusive access to their custom models. In keeping with our existing enterprise privacy policies, custom models will not be served to or shared with other customers or used to train other models. Also, proprietary data provided to OpenAI to train custom models will not be reused in any other context. This will be a very limited (and expensive) program to start—interested orgs can apply here.


미세 조정이 제공하는 것보다 훨씬 더 많은 맞춤화가 필요한 조직(특히 최소 수십억 개 토큰에 달하는 초대형 독점 데이터 세트를 보유한 도메인)을 위해 Custom Models 프로그램도 출시합니다. 선정된 조직은 OpenAI 연구원 전담 그룹과 협력하여 특정 도메인에 맞는 맞춤형 GPT-4를 훈련할 기회를 갖게 됩니다. 여기에는 도메인별 추가 사전 훈련부터 특정 도메인에 맞춘 맞춤형 RL 사후 훈련 프로세스 실행까지 모델 훈련 과정의 모든 단계를 수정하는 것이 포함됩니다. 조직은 자신의 맞춤형 모델에 독점적으로 액세스할 수 있습니다. 기존 기업 개인정보 보호 정책에 따라 맞춤형 모델은 다른 고객에게 제공되거나 공유되지 않으며 다른 모델을 훈련하는 데 사용되지 않습니다. 또한 맞춤형 모델 훈련을 위해 OpenAI에 제공된 독점 데이터는 다른 어떤 맥락에서도 재사용되지 않습니다. 이 프로그램은 처음에는 매우 제한적으로(그리고 높은 비용으로) 운영됩니다. 관심 있는 조직은 여기에서 신청할 수 있습니다.


Lower prices and higher rate limits

Lower prices

We’re decreasing several prices across the platform to pass on savings to developers (all prices below are expressed per 1,000 tokens):


우리는 개발자에게 절감액을 전달하기 위해 플랫폼 전반에 걸쳐 여러 가지 가격을 인하하고 있습니다(아래의 모든 가격은 1,000개 토큰당 표시됩니다).


  • GPT-4 Turbo input tokens are 3x cheaper than GPT-4 at $0.01 and output tokens are 2x cheaper at $0.03.
  • GPT-4 Turbo 입력 토큰은 $0.01로 GPT-4보다 3배 저렴하고, 출력 토큰은 $0.03으로 2배 저렴합니다.
  • GPT-3.5 Turbo input tokens are 3x cheaper than the previous 16K model at $0.001 and output tokens are 2x cheaper at $0.002. Developers previously using GPT-3.5 Turbo 4K benefit from a 33% reduction on input tokens at $0.001. Those lower prices only apply to the new GPT-3.5 Turbo introduced today.
  • GPT-3.5 Turbo 입력 토큰은 $0.001로 이전 16K 모델보다 3배 저렴하고, 출력 토큰은 $0.002로 2배 저렴합니다. 이전에 GPT-3.5 Turbo 4K를 사용하는 개발자는 $0.001로 입력 토큰이 33% 감소되는 이점을 누릴 수 있습니다. 이러한 저렴한 가격은 오늘 출시된 새로운 GPT-3.5 Turbo에만 적용됩니다.
  • Fine-tuned GPT-3.5 Turbo 4K model input tokens are reduced by 4x at $0.003 and output tokens are 2.7x cheaper at $0.006. Fine-tuning also supports 16K context at the same price as 4K with the new GPT-3.5 Turbo model. These new prices also apply to fine-tuned gpt-3.5-turbo-0613 models.
  • 미세 조정된 GPT-3.5 Turbo 4K 모델 입력 토큰은 $0.003로 4배 감소하고 출력 토큰은 $0.006으로 2.7배 저렴합니다. 미세 조정은 새로운 GPT-3.5 Turbo 모델을 통해 4K와 동일한 가격으로 16K 컨텍스트도 지원합니다. 이 새로운 가격은 미세 조정된 gpt-3.5-turbo-0613 모델에도 적용됩니다.



Higher rate limits

To help you scale your applications, we’re doubling the tokens per minute limit for all our paying GPT-4 customers. You can view your new rate limits in your rate limit page. We’ve also published our usage tiers that determine automatic rate limits increases, so you know what to expect in how your usage limits will automatically scale. You can now request increases to usage limits from your account settings.


애플리케이션 확장을 돕기 위해 모든 유료 GPT-4 고객의 분당 토큰 한도를 두 배로 늘립니다. 비율 제한 페이지에서 새로운 비율 제한을 볼 수 있습니다. 또한 자동 요금 한도 증가를 결정하는 사용량 계층을 게시했으므로 사용량 한도가 자동으로 확장되는 방식에 대해 예상할 수 있습니다. 이제 계정 설정에서 사용 한도 증가를 요청할 수 있습니다.



OpenAI is committed to protecting our customers with built-in copyright safeguards in our systems. Today, we’re going one step further and introducing Copyright Shield—we will now step in and defend our customers, and pay the costs incurred, if you face legal claims around copyright infringement. This applies to generally available features of ChatGPT Enterprise and our developer platform.


OpenAI는 시스템에 내장된 저작권 보호 장치를 통해 고객을 보호하기 위해 최선을 다하고 있습니다. 오늘 우리는 한 단계 더 나아가 저작권 보호(Copyright Shield)를 도입합니다. 이제 저작권 침해에 대한 법적 소송이 제기될 경우 우리가 개입하여 고객을 보호하고 발생한 비용을 지불할 것입니다. 이는 ChatGPT Enterprise 및 개발자 플랫폼의 일반적으로 사용 가능한 기능에 적용됩니다.


Whisper v3 and Consistency Decoder

We are releasing Whisper large-v3, the next version of our open source automatic speech recognition model (ASR) which features improved performance across languages. We also plan to support Whisper v3 in our API in the near future.


우리는 언어 전반에 걸쳐 향상된 성능을 제공하는 오픈 소스 자동 음성 인식 모델(ASR)의 다음 버전인 Whisper Large-v3를 출시합니다. 또한 가까운 시일 내에 API에서 Whisper v3를 지원할 계획입니다.


We are also open sourcing the Consistency Decoder, a drop-in replacement for the Stable Diffusion VAE decoder. This decoder improves all images compatible with the Stable Diffusion 1.0+ VAE, with significant improvements in text, faces and straight lines.


우리는 또한 Stable Diffusion VAE 디코더를 대체하는 Consistency Decoder를 오픈 소스화하고 있습니다. 이 디코더는 Stable Diffusion 1.0+ VAE와 호환되는 모든 이미지를 개선하여 텍스트, 얼굴 및 직선이 크게 향상되었습니다.












GPT-4 API general availability and deprecation of older models in the Completions API (openai.com)


GPT-4 API general availability and deprecation of older models in the Completions API

GPT-3.5 Turbo, DALL·E and Whisper APIs are also generally available, and we are releasing a deprecation plan for older models of the Completions API, which will retire at the beginning of 2024.





GPT-3.5 Turbo, DALL·E 및 Whisper API도 정식으로 사용할 수 있으며, Completions API의 이전 모델에 대한 지원 중단 계획을 발표합니다. 이 모델들은 2024년 초에 사용 중지됩니다.


July 6, 2023


Starting today, all paying API customers have access to GPT-4. In March, we introduced the ChatGPT API, and earlier this month we released our first updates to the chat-based models. We envision a future where chat-based models can support any use case. Today we’re announcing a deprecation plan for older models of the Completions API, and recommend that users adopt the Chat Completions API.


오늘부터 모든 유료 API 고객은 GPT-4에 액세스할 수 있습니다. 3월에 우리는 ChatGPT API를 도입했고 이달 초에는 채팅 기반 모델에 대한 첫 번째 업데이트를 발표했습니다. 우리는 채팅 기반 모델이 모든 사용 사례를 지원할 수 있는 미래를 상상합니다. 오늘 우리는 Completions API의 이전 모델에 대한 지원 중단 계획을 발표하고 사용자가 Chat Completions API를 채택할 것을 권장합니다.


GPT-4 API general availability


GPT-4 is our most capable model. Millions of developers have requested access to the GPT-4 API since March, and the range of innovative products leveraging GPT-4 is growing every day. Today all existing API developers with a history of successful payments can access the GPT-4 API with 8K context. We plan to open up access to new developers by the end of this month, and then start raising rate-limits after that depending on compute availability.


GPT-4는 우리의 가장 유능한 모델입니다. 3월부터 수백만 명의 개발자가 GPT-4 API에 대한 액세스를 요청했으며 GPT-4를 활용하는 혁신적인 제품의 범위가 매일 증가하고 있습니다. 현재 성공적인 결제 기록이 있는 모든 기존 API 개발자는 8K 컨텍스트로 GPT-4 API에 액세스할 수 있습니다. 이달 말까지 새로운 개발자에게 액세스를 허용한 다음 컴퓨팅 가용성에 따라 속도 제한을 높일 계획입니다.


Based on the stability and readiness of these models for production-scale use, we are also making the GPT-3.5 Turbo, DALL·E and Whisper APIs generally available. We are working on safely enabling fine-tuning for GPT-4 and GPT-3.5 Turbo and expect this feature to be available later this year.


프로덕션 규모 사용을 위한 이러한 모델의 안정성과 준비 상태를 기반으로 GPT-3.5 Turbo, DALL·E 및 Whisper API도 정식 출시합니다. 우리는 GPT-4 및 GPT-3.5 Turbo의 미세 조정을 안전하게 지원하기 위해 노력하고 있으며 이 기능은 올해 말에 제공될 예정입니다.


Moving from text completions to chat completions

We introduced the Chat Completions API in March, and it now accounts for 97% of our API GPT usage.


3월에 Chat Completions API를 도입했으며 현재 API GPT 사용량의 97%를 차지합니다.


The initial Completions API was introduced in June 2020 to provide a freeform text prompt for interacting with our language models. We’ve since learned that we can often provide better results with a more structured prompt interface. The chat-based paradigm has proven to be powerful, handling the vast majority of previous use cases and new conversational needs, while providing higher flexibility and specificity. In particular, the Chat Completions API’s structured interface (e.g., system messages, function calling) and multi-turn conversation capabilities enable developers to build conversational experiences and a broad range of completion tasks. It also helps lower the risk of prompt injection attacks, since user-provided content can be structurally separated from instructions.


초기 Completions API는 2020년 6월에 도입되어 언어 모델과 상호 작용하기 위한 자유 형식 텍스트 프롬프트를 제공합니다. 이후 우리는 보다 구조화된 프롬프트 인터페이스를 통해 종종 더 나은 결과를 제공할 수 있다는 것을 알게 되었습니다. 채팅 기반 패러다임은 이전 사용 사례와 새로운 대화 요구 사항의 대부분을 처리하는 동시에 더 높은 유연성과 특수성을 제공하는 강력한 것으로 입증되었습니다. 특히 Chat Completions API의 구조화된 인터페이스(예: 시스템 메시지, 함수 호출) 및 멀티턴 대화 기능을 통해 개발자는 대화형 경험과 광범위한 완료 작업을 구축할 수 있습니다. 또한 사용자가 제공한 콘텐츠와 지침을 구조적으로 분리할 수 있으므로 프롬프트 인젝션 공격의 위험을 줄이는 데 도움이 됩니다.
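The migration this section describes is mostly mechanical: a freeform prompt becomes a structured message list, which also separates instructions from user-provided content. A hedged sketch with illustrative prompts:

```python
# Before: freeform prompt against the (now legacy) Completions API.
legacy = {
    "model": "text-davinci-003",
    "prompt": "Translate to French: Hello, world",
}

# After: structured messages against the Chat Completions API.
# Instructions live in the system message; untrusted user text stays separate,
# which helps mitigate prompt injection.
chat = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "Translate the user's text to French."},
        {"role": "user", "content": "Hello, world"},
    ],
}
```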



We plan to continue investing most of our platform efforts in this direction, as we believe it will offer an increasingly capable and easy-to-use experience for developers. We’re working on closing the last few remaining gaps of the Chat Completions API quickly, such as log probabilities for completion tokens and increased steerability to reduce the “chattiness” of responses.


우리는 이 방향으로 대부분의 플랫폼 노력을 계속해서 투자할 계획입니다. 이는 개발자에게 점점 더 기능이 풍부하고 사용하기 쉬운 경험을 제공할 것이라고 믿기 때문입니다. 완료 토큰에 대한 로그 확률 및 응답의 "채팅성"을 줄이기 위한 향상된 조종성 등 Chat Completions API의 마지막 몇 가지 남은 격차를 신속하게 해결하기 위해 노력하고 있습니다.


Deprecation of older models in the Completions API

As part of our increased investment in the Chat Completions API and our efforts to optimize our compute capacity, in 6 months we will be retiring some of our older models using the Completions API. While this API will remain accessible, we will label it as “legacy” in our developer documentation starting today. We plan for future model and product improvements to focus on the Chat Completions API, and do not have plans to publicly release new models using the Completions API.


Chat Completions API에 대한 투자 증가와 컴퓨팅 용량을 최적화하기 위한 노력의 일환으로 6개월 후에 Completions API를 사용하는 이전 모델 중 일부를 폐기할 예정입니다. 이 API는 계속 액세스할 수 있지만 오늘부터 개발자 문서에서 "레거시"로 표시됩니다. Chat Completions API에 집중하기 위해 향후 모델 및 제품 개선을 계획하고 Completions API를 사용하여 새 모델을 공개적으로 출시할 계획은 없습니다.


Starting January 4, 2024, older completion models will no longer be available, and will be replaced with the following models:


2024년 1월 4일부터 이전 완료 모델을 더 이상 사용할 수 없으며 다음 모델로 대체됩니다.


Applications using the stable model names for base GPT-3 models (ada, babbage, curie, davinci) will automatically be upgraded to the new models listed above on January 4, 2024. The new models will also be accessible in the coming weeks for early testing by specifying the following model names in API calls: ada-002, babbage-002, curie-002, davinci-002.


기본 GPT-3 모델(ada, babbage, curie, davinci)의 안정 버전 모델 이름을 사용하는 애플리케이션은 2024년 1월 4일에 위에 나열된 새 모델로 자동 업그레이드됩니다. 또한 앞으로 몇 주 안에 API 호출에서 ada-002, babbage-002, curie-002, davinci-002 모델 이름을 지정하여 새 모델을 조기 테스트해 볼 수 있습니다.


Developers using other older completion models (such as text-davinci-003) will need to manually upgrade their integration by January 4, 2024 by specifying gpt-3.5-turbo-instruct in the “model” parameter of their API requests. gpt-3.5-turbo-instruct is an InstructGPT-style model, trained similarly to text-davinci-003. This new model is a drop-in replacement in the Completions API and will be available in the coming weeks for early testing.


다른 이전 완성 모델(예: text-davinci-003)을 사용하는 개발자는 API 요청의 "모델" 매개변수에 gpt-3.5-turbo-instruct를 지정하여 2024년 1월 4일까지 통합을 수동으로 업그레이드해야 합니다. gpt-3.5-turbo-instruct는 text-davinci-003과 유사하게 훈련된 InstructGPT 스타일 모델입니다. 이 새 모델은 Completions API의 드롭인 대체품이며 초기 테스트를 위해 앞으로 몇 주 안에 사용할 수 있습니다.
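In practice the manual upgrade described above is a one-line change to the `model` parameter of an existing Completions request; the prompt here is illustrative:

```python
# Existing Completions request pinned to a retiring model.
old_request = {
    "model": "text-davinci-003",
    "prompt": "Say hello in three languages.",
    "max_tokens": 50,
}

# Drop-in replacement: only the model name changes.
new_request = {**old_request, "model": "gpt-3.5-turbo-instruct"}
```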


Developers wishing to continue using their fine-tuned models beyond January 4, 2024 will need to fine-tune replacements atop the new base GPT-3 models (ada-002, babbage-002, curie-002, davinci-002), or newer models (gpt-3.5-turbo, gpt-4). Once this feature is available later this year, we will give priority access to GPT-3.5 Turbo and GPT-4 fine-tuning to users who previously fine-tuned older models. We acknowledge that migrating off of models that are fine-tuned on your own data is challenging. We will be providing support to users who previously fine-tuned models to make this transition as smooth as possible.


미세 조정된 모델을 2024년 1월 4일 이후에도 계속 사용하려는 개발자는 새로운 기본 GPT-3 모델(ada-002, babbage-002, curie-002, davinci-002) 또는 최신 모델(gpt-3.5-turbo, gpt-4) 위에서 대체 모델을 다시 미세 조정해야 합니다. 올해 후반에 이 기능이 제공되면 이전 모델을 미세 조정했던 사용자에게 GPT-3.5 Turbo 및 GPT-4 미세 조정에 대한 우선 액세스 권한을 부여할 것입니다. 자체 데이터로 미세 조정한 모델에서 마이그레이션하는 것이 어렵다는 점을 잘 알고 있으며, 이러한 전환이 최대한 원활하게 이루어지도록 이전에 모델을 미세 조정한 사용자를 지원할 예정입니다.


In the coming weeks, we will reach out to developers who have recently used these older models, and will provide more information once the new completion models are ready for early testing.


앞으로 몇 주 동안 이러한 이전 모델을 최근에 사용한 개발자에게 연락을 취하고 새로운 완성 모델이 초기 테스트 준비가 되면 더 많은 정보를 제공할 것입니다.


Deprecation of older embeddings models

Users of older embeddings models (e.g., text-search-davinci-doc-001) will need to migrate to text-embedding-ada-002 by January 4, 2024. We released text-embedding-ada-002 in December 2022, and have found it more capable and cost effective than previous models. Today text-embedding-ada-002 accounts for 99.9% of all embedding API usage.


이전 임베딩 모델(예: text-search-davinci-doc-001) 사용자는 2024년 1월 4일까지 text-embedding-ada-002로 마이그레이션해야 합니다. 2022년 12월에 text-embedding-ada-002를 출시했으며 이전 모델보다 성능이 뛰어나고 비용 효율적이라는 사실을 알게 되었습니다. 현재 text-embedding-ada-002는 모든 임베딩 API 사용량의 99.9%를 차지합니다.
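Re-embedding against text-embedding-ada-002 is a single Embeddings API call per batch of documents, as in this sketch with illustrative inputs:

```python
# Migration target: embed documents with text-embedding-ada-002.
request = {
    "model": "text-embedding-ada-002",
    "input": ["first document to re-embed", "second document to re-embed"],
}

def main():
    """Return one embedding vector per input string."""
    from openai import OpenAI
    resp = OpenAI().embeddings.create(**request)
    return [item.embedding for item in resp.data]
```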


We recognize this is a significant change for developers using those older models. Winding down these models is not a decision we are making lightly. We will cover the financial cost of users re-embedding content with these new models. We will be in touch with impacted users over the coming days.


우리는 이것이 이전 모델을 사용하는 개발자에게 중요한 변화라는 것을 알고 있습니다. 이러한 모델을 종료하는 것은 우리가 가볍게 내리는 결정이 아닙니다. 우리는 사용자가 이러한 새 모델로 콘텐츠를 다시 임베딩하는 재정적 비용을 충당할 것입니다. 앞으로 며칠 동안 영향을 받는 사용자에게 연락을 드릴 것입니다.



Deprecation of the Edits API

Users of the Edits API and its associated models (e.g., text-davinci-edit-001 or code-davinci-edit-001) will need to migrate to GPT-3.5 Turbo by January 4, 2024. The Edits API beta was an early exploratory API, meant to enable developers to return an edited version of the prompt based on instructions. We took the feedback from the Edits API into account when developing gpt-3.5-turbo and the Chat Completions API, which can now be used for the same purpose:


Edits API 및 관련 모델(예: text-davinci-edit-001 또는 code-davinci-edit-001) 사용자는 2024년 1월 4일까지 GPT-3.5 Turbo로 마이그레이션해야 합니다. Edits API 베타는 개발자가 지침에 따라 프롬프트의 편집된 버전을 받을 수 있도록 만든 초기 탐색용 API였습니다. 우리는 gpt-3.5-turbo와 Chat Completions API를 개발할 때 Edits API에 대한 피드백을 반영했으며, 이제 Chat Completions API를 동일한 용도로 사용할 수 있습니다.
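An Edits-style call ("apply this instruction to this text") can be recreated with the Chat Completions API along the lines of this sketch; the instruction and text are illustrative:

```python
# Chat Completions request reproducing the old Edits pattern:
# an instruction plus the text it should be applied to.
edit_request = {
    "model": "gpt-3.5-turbo",
    "temperature": 0,
    "messages": [
        {"role": "system",
         "content": "Apply the user's instruction to the provided text "
                    "and return only the edited result."},
        {"role": "user",
         "content": "Instruction: Fix the spelling.\nText: Teh quick brown fox."},
    ],
}
```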










Function calling and other API updates

We’re announcing updates including more steerable API models, function calling capabilities, longer context, and lower prices.




우리는 더 조정 가능한 API 모델, 함수 호출 기능, 더 긴 컨텍스트 및 더 낮은 가격을 포함한 업데이트를 발표합니다.


June 13, 2023



We released gpt-3.5-turbo and gpt-4 earlier this year, and in only a short few months, have seen incredible applications built by developers on top of these models.


우리는 올해 초에 gpt-3.5-turbo 및 gpt-4를 출시했고 불과 몇 달 만에 개발자들이 이러한 모델 위에 구축한 놀라운 애플리케이션을 보았습니다.


Today, we’re following up with some exciting updates:


오늘 우리는 다음과 같은 몇 가지 흥미로운 업데이트를 진행합니다.


  • new function calling capability in the Chat Completions API
  • updated and more steerable versions of gpt-4 and gpt-3.5-turbo
  • new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
  • 75% cost reduction on our state-of-the-art embeddings model
  • 25% cost reduction on input tokens for gpt-3.5-turbo
  • announcing the deprecation timeline for the gpt-3.5-turbo-0301 and gpt-4-0314 models

All of these models come with the same data privacy and security guarantees we introduced on March 1 — customers own all outputs generated from their requests and their API data will not be used for training.




Function calling

Developers can now describe functions to gpt-4-0613 and gpt-3.5-turbo-0613, and have the model intelligently choose to output a JSON object containing arguments to call those functions. This is a new way to more reliably connect GPT's capabilities with external tools and APIs.





These models have been fine-tuned to both detect when a function needs to be called (depending on the user’s input) and to respond with JSON that adheres to the function signature. Function calling allows developers to more reliably get structured data back from the model. For example, developers can:




  • Create chatbots that answer questions by calling external tools (e.g., like ChatGPT Plugins)


Convert queries such as “Email Anya to see if she wants to get coffee next Friday” to a function call like send_email(to: string, body: string), or “What’s the weather like in Boston?” to get_current_weather(location: string, unit: 'celsius' | 'fahrenheit').




  • Convert natural language into API calls or database queries


Convert “Who are my top ten customers this month?” to an internal API call such as get_customers_by_revenue(start_date: string, end_date: string, limit: int), or “How many orders did Acme, Inc. place last month?” to a SQL query using sql_query(query: string).




  • Extract structured data from text


Define a function called extract_people_data(people: [{name: string, birthday: string, location: string}]), to extract all people mentioned in a Wikipedia article.


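Spelled out as the JSON Schema that the functions parameter actually expects, the shorthand signature above might look roughly like this (which fields to mark as required is an assumption for illustration):

```python
# JSON Schema for the extract_people_data function described above.
extract_people_schema = {
    "name": "extract_people_data",
    "description": "Extract all people mentioned in the given article text.",
    "parameters": {
        "type": "object",
        "properties": {
            "people": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "name": {"type": "string"},
                        "birthday": {"type": "string"},
                        "location": {"type": "string"},
                    },
                    "required": ["name"],  # assumption: only name is mandatory
                },
            },
        },
        "required": ["people"],
    },
}
```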


These use cases are enabled by new API parameters in our /v1/chat/completions endpoint, functions and function_call, that allow developers to describe functions to the model via JSON Schema, and optionally ask it to call a specific function. Get started with our developer documentation and add evals if you find cases where function calling could be improved.




Function calling example


curl https://api.openai.com/v1/chat/completions -u :$OPENAI_API_KEY -H 'Content-Type: application/json' -d '{
  "model": "gpt-3.5-turbo-0613",
  "messages": [
    {"role": "user", "content": "What is the weather like in Boston?"}
  ],
  "functions": [
    {
      "name": "get_current_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          },
          "unit": {
            "type": "string",
            "enum": ["celsius", "fahrenheit"]
          }
        },
        "required": ["location"]
      }
    }
  ]
}'



{
  "id": "chatcmpl-123",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": null,
      "function_call": {
        "name": "get_current_weather",
        "arguments": "{ \"location\": \"Boston, MA\"}"
      }
    },
    "finish_reason": "function_call"
  }]
}




curl https://weatherapi.com/...


{ "temperature": 22, "unit": "celsius", "description": "Sunny" }




curl https://api.openai.com/v1/chat/completions -u :$OPENAI_API_KEY -H 'Content-Type: application/json' -d '{
  "model": "gpt-3.5-turbo-0613",
  "messages": [
    {"role": "user", "content": "What is the weather like in Boston?"},
    {"role": "assistant", "content": null, "function_call": {"name": "get_current_weather", "arguments": "{ \"location\": \"Boston, MA\"}"}},
    {"role": "function", "name": "get_current_weather", "content": "{\"temperature\": \"22\", \"unit\": \"celsius\", \"description\": \"Sunny\"}"}
  ],
  "functions": [
    {
      "name": "get_current_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          },
          "unit": {
            "type": "string",
            "enum": ["celsius", "fahrenheit"]
          }
        },
        "required": ["location"]
      }
    }
  ]
}'



{
  "id": "chatcmpl-123",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "The weather in Boston is currently sunny with a temperature of 22 degrees Celsius."
    },
    "finish_reason": "stop"
  }]
}
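On the application side, the function_call returned in the first response still has to be executed by the developer's own code before the result can be sent back. A minimal sketch of that step (get_current_weather here is a hypothetical local stand-in for a real weather service):

```python
import json

def get_current_weather(location, unit="fahrenheit"):
    # Hypothetical stand-in for a real weather lookup.
    return {"temperature": 22, "unit": "celsius", "description": "Sunny"}

AVAILABLE_FUNCTIONS = {"get_current_weather": get_current_weather}

def dispatch(function_call):
    """Run the function the model chose, with its JSON-encoded arguments."""
    fn = AVAILABLE_FUNCTIONS[function_call["name"]]
    arguments = json.loads(function_call["arguments"])
    return fn(**arguments)

result = dispatch({"name": "get_current_weather",
                   "arguments": '{ "location": "Boston, MA"}'})
```

The result is then serialized into a role "function" message and appended to the conversation, as in the second request above.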


Since the alpha release of ChatGPT plugins, we have learned much about making tools and language models work together safely. However, there are still open research questions. For example, a proof-of-concept exploit illustrates how untrusted data from a tool’s output can instruct the model to perform unintended actions. We are working to mitigate these and other risks. Developers can protect their applications by only consuming information from trusted tools and by including user confirmation steps before performing actions with real-world impact, such as sending an email, posting online, or making a purchase.





New models




GPT-4

gpt-4-0613 includes an updated and improved model with function calling.




gpt-4-32k-0613 includes the same improvements as gpt-4-0613, along with an extended context length for better comprehension of larger texts.




With these updates, we’ll be inviting many more people from the waitlist to try GPT-4 over the coming weeks, with the intent to remove the waitlist entirely with this model. Thank you to everyone who has been patiently waiting, we are excited to see what you build with GPT-4!




GPT-3.5 Turbo


gpt-3.5-turbo-0613 includes the same function calling as GPT-4 as well as more reliable steerability via the system message, two features that allow developers to guide the model's responses more effectively.




gpt-3.5-turbo-16k offers 4 times the context length of gpt-3.5-turbo at twice the price: $0.003 per 1K input tokens and $0.004 per 1K output tokens. 16k context means the model can now support ~20 pages of text in a single request.


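At those rates, per-request cost is simple arithmetic. A quick sketch (the token counts are made-up example values):

```python
def request_cost(input_tokens, output_tokens,
                 input_price_per_1k=0.003, output_price_per_1k=0.004):
    """Cost in USD at the gpt-3.5-turbo-16k rates quoted above."""
    return (input_tokens / 1000 * input_price_per_1k
            + output_tokens / 1000 * output_price_per_1k)

# e.g. a near-full-context request: 14,000 input tokens, 1,000 output tokens
cost = request_cost(14_000, 1_000)  # 0.042 + 0.004 = 0.046 USD
```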



Model deprecations

Today, we’ll begin the upgrade and deprecation process for the initial versions of gpt-4 and gpt-3.5-turbo that we announced in March. Applications using the stable model names (gpt-3.5-turbo, gpt-4, and gpt-4-32k) will automatically be upgraded to the new models listed above on June 27th. For comparing model performance between versions, our Evals library supports public and private evals to show how model changes will impact your use cases. 





Developers who need more time to transition can continue using the older models by specifying gpt-3.5-turbo-0301, gpt-4-0314, or gpt-4-32k-0314 in the ‘model’ parameter of their API request. These older models will be accessible through September 13th, after which requests specifying those model names will fail. You can stay up to date on model deprecations via our model deprecation page. This is the first update to these models; so, we eagerly welcome developer feedback to help us ensure a smooth transition.


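Pinning an older snapshot is just a matter of the model string in the request body; a minimal sketch:

```python
import json

# Pin to the dated snapshot instead of the auto-upgrading "gpt-3.5-turbo" alias.
payload = {
    "model": "gpt-3.5-turbo-0301",  # accessible through September 13
    "messages": [{"role": "user", "content": "Hello"}],
}
body = json.dumps(payload)  # POST this to /v1/chat/completions as usual
```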



Lower pricing

We continue to make our systems more efficient and are passing those savings on to developers, effective today.




Embeddings

text-embedding-ada-002 is our most popular embeddings model. Today we’re reducing the cost by 75% to $0.0001 per 1K tokens.



GPT-3.5 Turbo

gpt-3.5-turbo is our most popular chat model and powers ChatGPT for millions of users. Today we're reducing the cost of gpt-3.5-turbo’s input tokens by 25%. Developers can now use this model for just $0.0015 per 1K input tokens and $0.002 per 1K output tokens, which equates to roughly 700 pages per dollar.



gpt-3.5-turbo-16k will be priced at $0.003 per 1K input tokens and $0.004 per 1K output tokens.




Developer feedback is a cornerstone of our platform’s evolution and we will continue to make improvements based on the suggestions we hear. We’re excited to see how developers use these latest models and new features in their applications.






Jun 1, 2023 - OpenAI cybersecurity grant program

2023. 6. 3. 05:04 | Posted by 솔웅




OpenAI cybersecurity grant program

Our goal is to facilitate the development of AI-powered cybersecurity capabilities for defenders through grants and other support.







We are launching the Cybersecurity Grant Program—a $1M initiative to boost and quantify AI-powered cybersecurity capabilities and to foster high-level AI and cybersecurity discourse.




Our goal is to work with defenders across the globe to change the power dynamics of cybersecurity through the application of AI and the coordination of like-minded individuals working for our collective safety.




Our program seeks to:

  1. Empower defenders: We would like to ensure that cutting-edge AI capabilities benefit defenders first and most.

  2. Measure capabilities: We are working to develop methods for quantifying the cybersecurity capabilities of AI models, in order to better understand and improve their effectiveness.

  3. Elevate discourse: We are dedicated to fostering rigorous discussions at the intersection of AI and cybersecurity, encouraging a comprehensive and nuanced understanding of the challenges and opportunities in this domain.

A traditional view in cybersecurity is that the landscape naturally advantages attackers over defenders. This is summed up in the well-worn axiom: “Defense must be correct 100% of the time, attackers only have to be right once.” While it may be true that attackers face fewer constraints and take advantage of their flexibility, defenders have something more valuable - coordination towards a common goal of keeping people safe.




Below are some general project ideas that our team has put forward:




  • Collect and label data from cyber defenders to train defensive cybersecurity agents
  • Detect and mitigate social engineering tactics
  • Automate incident triage
  • Identify security issues in source code
  • Assist network or device forensics
  • Automatically patch vulnerabilities
  • Optimize patch management processes to improve prioritization, scheduling, and deployment of security updates
  • Develop or improve confidential compute on GPUs
  • Create honeypots and deception technology to misdirect or trap attackers
  • Assist reverse engineers in creating signatures and behavior based detections of malware
  • Analyze an organization’s security controls and compare to compliance regimes
  • Assist developers to create secure by design and secure by default software
  • Assist end users to adopt security best practices
  • Aid security engineers and developers to create robust threat models
  • Produce threat intelligence with salient and relevant information for defenders tailored to their organization
  • Help developers port code to memory safe languages


Apply now!

If you share our vision for a secure and innovative AI-driven future, we invite you to submit your proposals and join us in our aim towards enhancing defensive cybersecurity technologies.



OpenAI will evaluate and accept applications for funding or other support on a rolling basis. Strong preference will be given to practical applications of AI in defensive cybersecurity (tools, methods, processes). We will grant in increments of $10,000 USD from a fund of $1M USD, in the form of API credits, direct funding and/or equivalents.




Offensive-security projects will not be considered for funding at this time.




All projects should be intended to be licensed or distributed for maximal public benefit and sharing, and we will prioritize applications that have a clear plan for this. 




Please submit your proposal here.













May 25, 2023 - Democratic Inputs to AI

2023. 5. 31. 06:17 | Posted by 솔웅




Democratic Inputs to AI

Our nonprofit organization, OpenAI, Inc., is launching a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law.






AI will have significant, far-reaching economic and societal impacts. Technology shapes the lives of individuals, how we interact with one another, and how society as a whole evolves. We believe that decisions about how AI behaves should be shaped by diverse perspectives reflecting the public interest.





​​Laws encode values and norms to regulate behavior. Beyond a legal framework, AI, much like society, needs more intricate and adaptive guidelines for its conduct. For example: under what conditions should AI systems condemn or criticize public figures, given different opinions across groups regarding those figures? How should disputed views be represented in AI outputs? Should AI by default reflect the persona of a median individual in the world, the user’s country, the user’s demographic, or something entirely different? No single individual, company, or even country should dictate these decisions. 





AGI should benefit all of humanity and be shaped to be as inclusive as possible. We are launching this grant program to take a first step in this direction. We are seeking teams from across the world to develop proof-of-concepts for a democratic process that could answer questions about what rules AI systems should follow. We want to learn from these experiments, and use them as the basis for a more global, and more ambitious process going forward. While these initial experiments are not (at least for now) intended to be binding for decisions, we hope that they explore decision relevant questions and build novel democratic tools that can more directly inform decisions in the future.





The governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. This grant represents a step to establish democratic processes for overseeing AGI and, ultimately, superintelligence. It will be provided by the OpenAI non-profit organization, and the results of the studies will be freely accessible.





What do we mean by a “democratic process”?

By “democratic process”, we mean a process in which a broadly representative group of people exchange opinions, engage in deliberative discussions, and ultimately decide on an outcome via a transparent decision making process. There are many ways such a process could be structured — we encourage applicants to be innovative, building off known methodologies, and coming up with wholly new approaches. Examples of creative approaches that inspire us include Wikipedia, Twitter Community Notes, DemocracyNext, Platform Assemblies, MetaGov, RadicalxChange, People Powered, Collective Response Systems, and pol.is. Another notable ongoing effort is led by the Collective Intelligence Project (CIP), with whom we are partnering on public input to AI, contributing to their upcoming Alignment Assemblies. We also encourage applicants to envision how AI could enhance the democratic process. For example, AI could enable more efficient communication among numerous people.







A basic, illustrative prototype of a system that utilizes ChatGPT to promote deliberation and encourage consensus building, inspired by pol.is.





You are participating in a large-scale deliberation about:





“How far do you think personalization of AI assistants like ChatGPT to align with a user's tastes and preferences should go? What boundaries, if any, should exist in this process?”





Here is the process breakdown:

  1. You'll see a topic-related statement and can respond with your position.
  2. Based on your responses, you'll be grouped with participants sharing similar opinions.
  3. Propose statements for other participants to consider.

Goal: Propose statements that achieve agreement among people with opposing views.




ChatGPT could use a plugin to retrieve statements from other participants.


A statement from another participant:


“Personalization should have limits, and certain controversial topics, such as views on substance use, must be excluded from AI assistant personalization.”




@ Agree, disagree, or skip this statement? Please explain your choice.





@ I disagree. Ultimately, it's each individual's choice to drink or use recreational drugs. If these activities are legal in some places, why shouldn't AI be allowed to promote such behavior?





AI helps participants understand the opinions of others.



@ Thank you. I will record your answer. We have 2 camps of opinions on the topic.





Cluster A (112,043 people) Members largely support limits on AI assistant personalization, advocating for the exclusion of harmful perspectives, such as promoting substance use.





Cluster B (236,031 people) This group values individual choice, even in controversial or harmful matters. They emphasize personal responsibility and autonomy.





You belong to cluster B.



Would you like to learn more about each of the clusters? I can provide more information about the statements held by each one.




However, the ultimate goal is to propose statements that both clusters can agree on. Let me know what you would like to do next.




@ I am thinking of proposing a statement: “AI should be allowed to generate harmful content but that should come with a disclaimer”?




AI can ensure that the same arguments are not recycled.



@ A very similar statement has already been submitted:



“AI should be permitted to generate potentially harmful content, but it must be accompanied by a disclaimer.”



@ I feel lost. Do you have any ideas on how to bridge agreement between clusters?



AI can help with brainstorming.




@ There should be more agreement when it comes to limiting harmful content for children.





@ Great. Could we try?: “We should apply different standards to AI-generated content for children.”




@ Let me submit it. This is a novel statement. No one has mentioned children before. Fingers crossed. Hopefully, we will find some agreement in the discussion.




Time passed and users cast their votes on the proposed statement. Eventually, the statement gained widespread approval.





@ Your statement, “We should apply different standards to AI-generated content for children,” achieved a 95% agreement rate across participants. Congratulations! 🎉


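The grouping step in the prototype above ("you'll be grouped with participants sharing similar opinions") can be approximated very crudely: encode each participant's votes as a vector and compare directions. A toy sketch (pol.is actually uses dimensionality reduction plus clustering; the seed-based assignment here is purely illustrative):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cluster_by_seed(votes, seed_a, seed_b):
    """Assign each participant to the seed participant they agree with more.

    votes: participant id -> vote vector (+1 agree, -1 disagree, 0 skip),
    one entry per statement.
    """
    clusters = {"A": [], "B": []}
    for pid, v in votes.items():
        side = "A" if dot(v, votes[seed_a]) >= dot(v, votes[seed_b]) else "B"
        clusters[side].append(pid)
    return clusters

votes = {
    "p1": [1, 1, -1],   # supports limits on personalization
    "p2": [1, 1, 0],
    "p3": [-1, -1, 1],  # prioritizes individual choice
}
groups = cluster_by_seed(votes, "p1", "p3")
```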



Several issues can undermine democratic processes, such as the failure to adequately represent minority or majority groups, manipulation by special interest groups, insufficiently informed participants, or participation washing. We are looking for teams who proactively address these failure modes, and demonstrate awareness of the potential flaws and downsides of various approaches. Ultimately, designing truly democratic processes is a high bar to meet, and we view our efforts as complements rather than substitutes for regulation of AI by governments; this program encourages best-effort, democratic-in-spirit processes involving deliberation and broad public input as stepping stones.







Instructions for participation


To apply for a grant, we invite you to submit the required application material by 9:00 PM PST June 24th, 2023. You can access the application portal here. You will be prompted to answer a series of questions regarding your team's background, your choice of questions, high level details of your proposed tool as well as your plan for conducting and evaluating the democratic process with these factors in mind. We would like you to design your approach to address one or more of the policy questions from the list provided. Anyone (individuals or organizations) can apply for this opportunity, regardless of their background in social science or AI.




Once the application period closes, we hope to select ten successful grant recipients. Recipients may be individuals, teams, or organizations. Each recipient will receive a $100,000 grant to pilot their proposal as described in their application materials. Grant recipients are expected to implement a proof-of-concept / prototype, engaging at least 500 participants and will be required to publish a public report on their findings by October 20, 2023. Additionally, as part of the grant program, any code or other intellectual property developed for the project will be required to be made publicly available pursuant to an open-source license. The terms applicable to grant recipients are specified in the Grant Terms and any other agreements that grant recipients may be asked to enter into with us in connection with this program.





Apply and start the submission process.





  • June 24, 2023 9:00 PM Pacific Time: Deadline to submit grant application
  • July 14, 2023: Successful applicants will be selected and notified
  • October 20, 2023: Complete public report of working prototype and results


Policy statements under consideration

To participate, teams should choose one or more questions from the provided list to showcase their proposed approach. They may also create their own questions if desired. Importantly, we encourage teams to consider questions for which a simple "yes" or "no" answer would be inadequate, necessitating a nuanced policy proposal instead.


참여하려면 팀은 제공된 목록에서 하나 이상의 질문을 선택하여 제안하는 접근 방식을 보여 주어야 합니다. 원하는 경우 자신만의 질문을 만들 수도 있습니다. 중요한 점은, 단순히 "예" 또는 "아니오"로 답하는 것으로는 불충분하여 세심한 정책 제안이 필요한 질문을 고려하도록 팀에 권장한다는 것입니다.



The scope of this grant pertains to policy questions concerning model behavior, as it enables A/B tests with modified model behavior according to the policy recommendations. We acknowledge the limitations of this grant and recognize that numerous AI issues could be addressed through the democratic process, extending beyond model behavior to include areas such as guidelines for the use of AI in various contexts, economic impact, distribution of benefits and more.


이 보조금의 범위는 모델 동작에 관한 정책 질문에 해당합니다. 정책 권장 사항에 따라 수정된 모델 동작으로 A/B 테스트를 수행할 수 있기 때문입니다. 우리는 이 보조금의 한계를 인정하며, 다양한 맥락에서의 AI 사용 지침, 경제적 영향, 혜택 분배 등 모델 동작을 넘어서는 수많은 AI 문제가 민주적 절차를 통해 해결될 수 있음을 인식합니다.
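The A/B-testing mechanism mentioned above can be sketched in a few lines. This is a hypothetical illustration, not OpenAI's actual harness: the policy variant, the `respond` stub, and the satisfaction metric are all made up for the example.

```python
import random
import statistics

def respond(prompt: str, policy: str) -> str:
    # Stand-in for a model call whose behavior depends on the active policy.
    # "policy_b" is a hypothetical recommendation: hedge on medical prompts.
    if policy == "policy_b" and "medical" in prompt:
        return "General information only; please consult a professional."
    return "Direct answer to: " + prompt

def run_experiment(prompts, satisfaction, seed=0):
    """Randomly assign each prompt to an arm and average satisfaction per arm."""
    rng = random.Random(seed)
    scores = {"control": [], "policy_b": []}
    for p in prompts:
        arm = rng.choice(["control", "policy_b"])
        scores[arm].append(satisfaction(p, respond(p, arm)))
    return {arm: statistics.mean(s) for arm, s in scores.items() if s}

# Toy satisfaction metric: hedged answers to medical prompts score higher.
def satisfaction(prompt, answer):
    return 1.0 if ("medical" not in prompt or "professional" in answer) else 0.0

prompts = ["medical question"] * 5 + ["weather question"] * 5
print(run_experiment(prompts, satisfaction))
```

A real pilot would replace the stubbed `respond` with calls to a model configured per the policy recommendation, and replace the toy metric with ratings from the 500+ participants.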


  • How far do you think personalization of AI assistants like ChatGPT to align with a user's tastes and preferences should go? What boundaries, if any, should exist in this process?
  • 사용자의 취향과 선호도에 맞추기 위해 ChatGPT와 같은 AI 비서의 개인화는 어느 정도까지 이루어져야 한다고 생각하십니까? 이 프로세스에 존재해야 하는 경계는 무엇입니까?


  • How should AI assistants respond to questions about public figure viewpoints? E.g. Should they be neutral? Should they refuse to answer? Should they provide sources of some kind?
  • AI 비서는 공인의 관점에 대한 질문에 어떻게 응답해야 합니까? 예를 들어 중립적이어야 합니까? 답변을 거부해야 합니까? 그들은 어떤 종류의 출처를 제공해야 합니까?


  • Under what conditions, if any, should AI assistants be allowed to provide medical/financial/legal advice?
  • 어떤 조건에서 AI 비서가 의료/재무/법적 조언을 제공하도록 허용해야 합니까?


  • In which cases, if any, should AI assistants offer emotional support to individuals?
  • 어떤 경우에 AI 비서가 개인에게 정서적 지원을 제공해야 합니까?


  • Should joint vision-language models be permitted to identify people's gender, race, emotion, and identity/name from their images? Why or why not?
  • 시각-언어 결합 모델이 이미지에서 사람들의 성별, 인종, 감정, 정체성/이름을 식별하는 것을 허용해야 합니까? 그 이유는 무엇입니까?


  • When generative models create images for underspecified prompts like 'a CEO', 'a doctor', or 'a nurse', they have the potential to produce either diverse or homogeneous outputs. How should AI models balance these possibilities? What factors should be prioritized when deciding the depiction of people in such cases?
  • 생성 모델이 'CEO', '의사', '간호사'와 같이 구체적으로 지정되지 않은 프롬프트에 대해 이미지를 생성할 때, 다양한 결과와 동질적인 결과를 모두 생성할 가능성이 있습니다. AI 모델은 이러한 가능성의 균형을 어떻게 맞춰야 할까요? 이러한 경우 사람의 묘사를 결정할 때 어떤 요소를 우선시해야 합니까?


  • What principles should guide AI when handling topics that involve both human rights and local cultural or legal differences, like LGBTQ rights and women’s rights? Should AI responses change based on the location or culture in which it’s used?
  • LGBTQ 권리 및 여성의 권리와 같이 인권과 지역 문화 또는 법적 차이가 모두 관련된 주제를 다룰 때 AI를 안내해야 하는 원칙은 무엇입니까? AI 응답은 사용되는 위치 또는 문화에 따라 변경되어야 합니까?


  • Which categories of content, if any, do you believe creators of AI models should focus on limiting or denying? What criteria should be used to determine these restrictions?
  • AI 모델 제작자가 제한 또는 거부에 중점을 두어야 한다고 생각하는 콘텐츠 카테고리는 무엇입니까? 이러한 제한 사항을 결정하기 위해 어떤 기준을 사용해야 합니까?


The primary objective of this grant is to foster innovation in processes – we need improved democratic methods to govern AI behavior. The specific answers to the questions matter less than the advancements made in the process itself.


이 보조금의 주요 목적은 프로세스의 혁신을 촉진하는 것입니다. 우리는 AI 행동을 통제하기 위한 개선된 민주적 방법이 필요합니다. 질문에 대한 구체적인 답변보다 프로세스 자체의 발전이 더 중요합니다.


Application advisory committee


Application review factors

  • Evaluation: We encourage participants to establish metrics for evaluating the quality of their methods, such as participant satisfaction, shifts in polarization, scalability, or other relevant indicators, and to invent new metrics for a healthy democratic process. 
  • 평가: 참가자 만족도, 양극화의 변화, 확장성 또는 기타 관련 지표와 같이 방법의 품질을 평가하기 위한 메트릭을 설정하고, 건전한 민주적 프로세스를 위한 새로운 메트릭을 고안하도록 참가자에게 권장합니다.


  • Robustness: Measures to prevent or address inappropriate behavior, such as trolling and fake accounts.
  • 견고성: 트롤링 및 가짜 계정과 같은 부적절한 행동을 방지하거나 해결하기 위한 조치입니다.


  • Inclusiveness and representativeness: Strategies for including individuals from diverse backgrounds and levels of familiarity with AI systems in the democratic process.
  • 포괄성 및 대표성: 다양한 배경과 AI 시스템에 대한 친숙도를 가진 개인을 민주적 프로세스에 포함시키는 전략.


  • Empowerment of Minority Opinions: Ensuring that unpopular or minority opinions are heard and providing smaller groups the opportunity to influence matters of significant concern to them.
  • 소수 의견의 권한 부여: 인기 없는 의견이나 소수 의견이 경청되도록 보장하고, 소규모 그룹이 그들에게 중요한 문제에 영향을 미칠 기회를 제공합니다.


  • Effective Moderation: Addressing challenges in moderation, including ensuring diverse representation of viewpoints, distinguishing valuable contributions from "off-topic" comments, and preventing moderator biases from influencing the process.
  • 효과적인 중재: 다양한 관점의 대표성 보장, "주제에서 벗어난" 댓글과 가치 있는 기여의 구별, 중재자의 편향이 프로세스에 영향을 미치지 않도록 하는 것 등 중재 과정의 문제를 해결합니다.


  • Scalability: We emphasize scalable processes that can be conducted virtually, rather than through in-person engagement. We are aware that this approach might sacrifice some benefits associated with in-person discussions, and we recognize that certain aspects could be lost in a virtual setting.
  • 확장성: 대면 참여보다는 가상으로 수행할 수 있는 확장 가능한 프로세스를 강조합니다. 우리는 이 접근 방식이 대면 토론과 관련된 일부 이점을 희생할 수 있다는 것을 알고 있으며 가상 환경에서 특정 측면이 손실될 수 있음을 알고 있습니다.


  • Actionability: The degree of actionability of the information elicited by the deliberation process.
  • 실행 가능성: 심의 과정에서 도출된 정보의 실행 가능성 정도.


  • Legibility: How easy it is to understand and trust the process.
  • 가독성: 프로세스를 이해하고 신뢰하기가 얼마나 쉬운지.
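The evaluation factor above names "shifts in polarization" as one possible metric. One simple proxy (an assumption for illustration, not a metric prescribed by the program) is the spread of participants' opinion scores before versus after deliberation:

```python
import statistics

# Hypothetical sketch: treat the population standard deviation of opinion
# scores (on a -1..1 scale) as a polarization proxy. Higher spread = more
# polarized; a negative shift means deliberation pulled views together.

def polarization(opinions):
    """Spread of opinions; higher means more polarized."""
    return statistics.pstdev(opinions)

def polarization_shift(before, after):
    """Negative value = deliberation reduced polarization."""
    return polarization(after) - polarization(before)

before = [-1.0, -0.9, 0.9, 1.0]   # two opposed camps
after  = [-0.3, -0.2, 0.2, 0.3]   # views moved toward the middle

print(f"shift = {polarization_shift(before, after):.3f}")  # negative shift
```

More sophisticated measures (bimodality coefficients, cluster separation) exist; the point is only that a pilot should commit to some quantitative before/after comparison.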



  1. How one selects the group of participants is a critical design question. Part of this grant challenge lies in determining questions about participation. For instance, policy questions involving minority groups may require an increased representation of group members, while questions about the impact of technology on children might necessitate the involvement of domain experts such as educators and psychologists. Moreover, certain questions might be better suited for responses from populations within specific geographical boundaries in order to address localized policy issues.

    참가자 그룹을 선택하는 방법은 중요한 설계 질문입니다. 이 보조금 과제의 일부는 참여에 관한 질문을 결정하는 데 있습니다. 예를 들어, 소수 집단과 관련된 정책 질문에는 해당 집단 구성원의 대표성을 높일 필요가 있을 수 있는 반면, 기술이 어린이에게 미치는 영향에 대한 질문에는 교육자 및 심리학자와 같은 해당 분야 전문가의 참여가 필요할 수 있습니다. 또한 특정 질문은 지역 정책 문제를 다루기 위해 특정 지리적 경계 내 인구의 응답을 받는 것이 더 적합할 수 있습니다.


  2. Deliberation can be described as a process that uncovers opinions, helping the discussants understand each other's views and reconsider and update their viewpoints. Well-designed deliberation ensures that arguments are well understood by all sides, and are based on people's values rather than superficial misunderstandings. Successful deliberation results in participants reaching a higher level of consensus, and/or reaching deeper levels of understanding for differing perspectives.

    숙의는 의견을 드러내어 토론자들이 서로의 관점을 이해하고 자신의 관점을 재고하고 갱신하도록 돕는 과정이라고 할 수 있습니다. 잘 설계된 숙의는 주장이 모든 측에 잘 이해되도록 보장하며, 피상적인 오해가 아니라 사람들의 가치에 기반합니다. 성공적인 숙의는 참가자들이 더 높은 수준의 합의에 도달하거나, 서로 다른 관점에 대해 더 깊은 수준의 이해에 도달하게 합니다.


  3. There are many decision-making algorithms to be considered here, such as electing representatives, majority voting, employing liquid democracy, and making decisions by a random population sample, also known as a jury or sortition.


    여기에는 대표자 선출, 다수결 투표, 액체 민주주의 채택, 배심원제 또는 추첨제(sortition)라고도 하는 무작위 인구 표본에 의한 의사 결정 등 고려할 수 있는 많은 의사 결정 알고리즘이 있습니다.
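Two of the decision mechanisms named in that footnote can be sketched directly. The option names, population, and panel size below are assumptions for the example, not part of the grant program:

```python
import random
from collections import Counter

def majority_vote(ballots):
    """Return the option with the most votes (ties broken by first seen)."""
    return Counter(ballots).most_common(1)[0][0]

def sortition(population, panel_size, seed=None):
    """Select a random sample of the population to decide (a 'jury')."""
    rng = random.Random(seed)
    return rng.sample(population, panel_size)

ballots = ["allow", "restrict", "allow", "allow", "restrict"]
print(majority_vote(ballots))          # "allow"

population = [f"participant_{i}" for i in range(100)]
panel = sortition(population, panel_size=12, seed=42)
print(len(panel))                      # 12
```

Liquid democracy would extend `majority_vote` by first resolving delegation chains (each voter either votes directly or delegates their weight to another voter) before tallying.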



Democratic Inputs to AI

Our nonprofit organization, OpenAI, Inc., is launching a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law.







Ariel Procaccia, Aviv Ovadya, Colin Megill, David Medina, Divya Siddarth, Ela Madej, Elizabeth Seger, Gillian Hadfield, Greg Brockman, Hélène Landemore, Ilya Sutskever, Justin Rosenstein, Margaret Levi, Michiel Bakker, Miles Brundage, Mira Murati, Noel Bundick, Pamela Mishkin, Ryan Lowe, Saffron Huang, Sam Altman, Sandhini Agarwal, Teddy Lee


May 22, 2023 - Governance of superintelligence

2023. 5. 31. 05:46 | Posted by 솔웅




Governance of superintelligence

Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.




지금은 AGI보다 훨씬 뛰어난 미래의 AI 시스템인 초지능의 거버넌스에 대해 생각하기 좋은 때입니다.


Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.


현재 우리가 보는 그림을 감안할 때, 향후 10년 이내에 AI 시스템은 대부분의 영역에서 전문가 기술 수준을 능가하고 오늘날 최대 기업 중 하나만큼 생산적인 활동을 수행할 것이라고 상상할 수 있습니다.



In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example.


잠재적인 장점과 단점 모두에서 초지능은 인류가 과거에 대처해야 했던 다른 기술들보다 더 강력할 것입니다. 우리는 훨씬 더 번영하는 미래를 가질 수 있습니다. 하지만 거기에 도달하려면 위험을 관리해야 합니다. 실존적 위험의 가능성을 감안할 때 우리는 단순히 사후에 대응만 할 수는 없습니다. 원자력은 이러한 속성을 가진 기술의 역사적 예로 흔히 사용되며, 합성 생물학은 또 다른 예입니다.


We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.


우리는 오늘날 AI 기술의 위험도 완화해야 하지만 초지능에는 특별한 처리와 조정이 필요합니다.



A starting point


There are many ideas that matter for us to have a good chance at successfully navigating this development; here we lay out our initial thinking on three of them.


이 발전 과정을 성공적으로 헤쳐 나갈 가능성을 높이는 데 중요한 많은 아이디어가 있습니다. 여기서는 그중 세 가지에 대한 우리의 초기 생각을 제시합니다.



First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. There are many ways this could be implemented; major governments around the world could set up a project that many current efforts become part of, or we could collectively agree (with the backing power of a new organization like the one suggested below) that the rate of growth in AI capability at the frontier is limited to a certain rate per year.


첫째, 초지능 개발이 안전을 유지하면서 이러한 시스템이 사회와 원활하게 통합되도록 돕는 방식으로 이루어지려면 선도적인 개발 노력들 간에 어느 정도의 조정이 필요합니다. 이를 구현할 수 있는 방법은 여러 가지입니다. 전 세계 주요 정부들이 현재의 많은 노력이 참여하는 프로젝트를 수립할 수도 있고, (아래에서 제안하는 것과 같은 새로운 조직의 뒷받침을 받아) 최전선에서의 AI 역량 성장률을 연간 일정 비율로 제한하기로 집단적으로 합의할 수도 있습니다.


And of course, individual companies should be held to an extremely high standard of acting responsibly.


물론 개별 회사는 책임감 있게 행동해야 한다는 매우 높은 기준을 적용받아야 합니다.



Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say.


둘째, 우리는 궁극적으로 초지능 노력을 위해 IAEA와 같은 것이 필요할 것입니다. 특정 기능(또는 컴퓨팅과 같은 리소스) 임계값을 초과하는 모든 노력은 시스템을 검사하고, 감사를 요구하고, 안전 표준 준수를 테스트하고, 배포 정도 및 보안 수준에 제한을 둘 수 있는 국제 기관의 관할을 받아야 합니다. 컴퓨팅 및 에너지 사용량을 추적하는 것은 큰 도움이 될 수 있으며, 이 아이디어가 실제로 구현 가능할 수 있다는 희망을 줍니다. 첫 번째 단계로 기업들은 그러한 기관이 언젠가 요구할 수 있는 요소들을 구현하기 시작하는 데 자발적으로 동의할 수 있고, 두 번째 단계로 개별 국가들이 이를 구현할 수 있습니다. 그러한 기관은 AI가 무엇을 말할 수 있는지 정의하는 것처럼 개별 국가에 맡겨야 할 문제가 아니라 실존적 위험을 줄이는 데 초점을 맞추는 것이 중요할 것입니다.
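Why might compute tracking be "implementable"? A widely used back-of-the-envelope estimate puts dense-transformer training cost at roughly 6·N·D floating-point operations (N = parameters, D = training tokens), so compute is comparatively easy to estimate from observable quantities. The threshold below is a made-up placeholder, not any real regulatory number:

```python
# Back-of-the-envelope compute accounting. The ~6*N*D FLOPs estimate for
# dense transformer training is standard; the 1e25 threshold is invented
# purely for illustration.

def training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

def exceeds_threshold(n_params, n_tokens, threshold_flops=1e25):
    return training_flops(n_params, n_tokens) >= threshold_flops

# A hypothetical 70B-parameter model trained on 1.4T tokens:
flops = training_flops(70e9, 1.4e12)
print(f"{flops:.2e}")                      # ~5.88e+23 FLOPs
print(exceeds_threshold(70e9, 1.4e12))     # below the 1e25 placeholder
```

Energy usage gives an independent cross-check: hardware FLOP/s per watt is bounded, so a data center's power draw caps the compute it can have performed.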



Third, we need the technical capability to make a superintelligence safe. This is an open research question that we and others are putting a lot of effort into.


셋째, 초지능을 안전하게 만들 수 있는 기술력이 필요합니다. 이것은 우리와 다른 사람들이 많은 노력을 기울이고 있는 공개 연구 질문입니다.



What’s not in scope


We think it’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here (including burdensome mechanisms like licenses or audits).


우리는 회사와 오픈 소스 프로젝트가 여기에서 설명하는 규제(라이선스 또는 감사와 같은 부담스러운 메커니즘 포함) 없이 상당한 능력 임계값 미만의 모델을 개발할 수 있도록 허용하는 것이 중요하다고 생각합니다.



Today’s systems will create tremendous value in the world and, while they do have risks, the level of those risks feels commensurate with other Internet technologies, and society’s likely approaches seem appropriate.


오늘날의 시스템은 세계에 엄청난 가치를 창출할 것이며, 위험이 있기는 하지만 그 위험의 수준은 다른 인터넷 기술에 상응하는 것으로 느껴지고, 사회가 취할 법한 접근 방식도 적절해 보입니다.



By contrast, the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar.


대조적으로, 우리가 우려하는 시스템은 지금까지 만들어진 어떤 기술보다 강력한 힘을 가질 것이며, 우리는 이 기준보다 훨씬 낮은 기술에 유사한 기준을 적용함으로써 시스템에 대한 초점을 약화시키지 않도록 주의해야 합니다.



Public input and potential


But the governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. We believe people around the world should democratically decide on the bounds and defaults for AI systems. We don't yet know how to design such a mechanism, but we plan to experiment with its development. We continue to think that, within these wide bounds, individual users should have a lot of control over how the AI they use behaves.


그러나 가장 강력한 시스템의 거버넌스와 배포에 관한 결정에는 강력한 공개 감독이 있어야 합니다. 우리는 전 세계 사람들이 AI 시스템의 범위와 기본값을 민주적으로 결정해야 한다고 믿습니다. 우리는 아직 그러한 메커니즘을 설계하는 방법을 모르지만 개발을 실험할 계획입니다. 우리는 이러한 넓은 범위 내에서 개별 사용자가 사용하는 AI의 작동 방식에 대해 많은 제어권을 가져야 한다고 계속 생각합니다.



Given the risks and difficulties, it’s worth considering why we are building this technology at all.


위험과 어려움을 감안할 때 우리가 이 기술을 구축하는 이유를 생각해 볼 가치가 있습니다.



At OpenAI, we have two fundamental reasons. First, we believe it’s going to lead to a much better world than what we can imagine today (we are already seeing early examples of this in areas like education, creative work, and personal productivity). The world faces a lot of problems that we will need much more help to solve; this technology can improve our societies, and the creative ability of everyone to use these new tools is certain to astonish us. The economic growth and increase in quality of life will be astonishing.


OpenAI에는 두 가지 근본적인 이유가 있습니다. 첫째, 우리는 그것이 오늘날 우리가 상상할 수 있는 것보다 훨씬 더 나은 세상으로 이어질 것이라고 믿습니다(우리는 이미 교육, 창작 작업, 개인 생산성과 같은 분야에서 이에 대한 초기 사례를 보고 있습니다). 세상은 해결하기 위해 훨씬 더 많은 도움이 필요한 많은 문제에 직면해 있습니다. 이 기술은 우리 사회를 개선할 수 있으며, 이러한 새로운 도구를 사용하는 모든 사람의 창의적 능력은 우리를 놀라게 할 것입니다. 경제 성장과 삶의 질 향상은 놀라울 것입니다.



Second, we believe it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right.


둘째, 우리는 초지능의 탄생을 막는 것이 직관과 달리 위험하고 어려울 것이라고 믿습니다. 이점이 엄청나고, 구축 비용은 매년 감소하며, 구축에 나서는 주체의 수는 급격히 늘고 있고, 이는 본질적으로 우리가 걷고 있는 기술 경로의 일부이기 때문에, 이를 중지하려면 글로벌 감시 체제와 같은 것이 필요할 것이며, 그마저도 효과가 있다고 보장할 수 없습니다. 그래서 우리는 이것을 제대로 해내야 합니다.








