May 22, 2023 - Governance of superintelligence

2023. 5. 31. 05:46 | Posted by 솔웅

https://openai.com/blog/governance-of-superintelligence

Governance of superintelligence

Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.

Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.

In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example.

We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.

A starting point

There are many ideas that matter for us to have a good chance at successfully navigating this development; here we lay out our initial thinking on three of them.

First, we need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society. There are many ways this could be implemented; major governments around the world could set up a project that many current efforts become part of, or we could collectively agree (with the backing power of a new organization like the one suggested below) that the rate of growth in AI capability at the frontier is limited to a certain rate per year.
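
As a rough illustration of what such an annual growth cap would mean, here are a few lines of Python. The 2x-per-year figure and the use of training compute as a proxy for frontier capability are assumptions made for this sketch, not anything the post specifies:

# Illustrative only: the compound effect of a hypothetical cap on
# annual growth in frontier training compute (an imperfect but
# measurable proxy for capability).
ANNUAL_GROWTH_CAP = 2.0  # assumed cap: at most 2x growth per year
YEARS = 10

budget = 1.0  # frontier training-compute budget, normalized to year 0
for year in range(1, YEARS + 1):
    budget *= ANNUAL_GROWTH_CAP
    print(f"year {year:2d}: {budget:6.0f}x the year-0 budget")

# Even under the cap, year 10 ends at 2**10 = 1024x year 0: a cap slows
# the pace of change and makes it predictable; it does not freeze it.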

And of course, individual companies should be held to an extremely high standard of acting responsibly.

Second, we are likely to eventually need something like an IAEA for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc. Tracking compute and energy usage could go a long way, and give us some hope this idea could actually be implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say.
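
To make the compute-tracking idea concrete, here is a minimal Python sketch. Everything in it is an assumption for illustration: the 1e26 FLOP threshold, the TrainingRun fields, and the notion of a single reportable training run are all hypothetical, not rules any existing agency has set.

from dataclasses import dataclass

# Hypothetical oversight threshold; the specific value is an assumption.
OVERSIGHT_THRESHOLD_FLOP = 1e26

@dataclass
class TrainingRun:
    name: str
    chips: int                     # number of accelerators
    flop_per_chip_second: float    # sustained per-chip throughput
    seconds: float                 # wall-clock training time

    def total_flop(self) -> float:
        # Total compute = chips x per-chip throughput x time.
        return self.chips * self.flop_per_chip_second * self.seconds

def requires_oversight(run: TrainingRun) -> bool:
    """Flag runs whose total compute crosses the assumed threshold."""
    return run.total_flop() >= OVERSIGHT_THRESHOLD_FLOP

# Example: 10,000 chips at ~1e14 FLOP/s each, running for ~100 days.
run = TrainingRun("frontier-candidate", 10_000, 1e14, 100 * 24 * 3600)
print(f"{run.total_flop():.2e} FLOP -> oversight: {requires_oversight(run)}")

Compute and energy use leave a physical footprint (hardware purchases, data-center power draw), so a check like this could in principle be audited from the outside, which is why the post suggests tracking them "could go a long way."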

Third, we need the technical capability to make a superintelligence safe. This is an open research question that we and others are putting a lot of effort into.

What’s not in scope

We think it’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here (including burdensome mechanisms like licenses or audits).

Today’s systems will create tremendous value in the world and, while they do have risks, the level of those risks feels commensurate with other Internet technologies and society’s likely approaches seem appropriate.

By contrast, the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar.

Public input and potential

But the governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. We believe people around the world should democratically decide on the bounds and defaults for AI systems. We don't yet know how to design such a mechanism, but we plan to experiment with its development. We continue to think that, within these wide bounds, individual users should have a lot of control over how the AI they use behaves.
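
One way to read the "bounds and defaults" distinction is as two layers of configuration: hard outer limits chosen collectively, and per-user preferences that can move freely inside them. The Python sketch below is purely illustrative; every setting name and value in it is invented.

# Purely illustrative: societal hard bounds vs. user-tunable defaults.
SOCIETAL_BOUNDS = {
    "max_autonomy_level": 3,   # hard cap, not user-adjustable
}

USER_DEFAULTS = {
    "autonomy_level": 1,       # users may raise this, up to the cap
    "tone": "formal",
}

def effective_settings(user_prefs: dict) -> dict:
    """Apply user preferences, then clamp them to the societal bounds."""
    settings = {**USER_DEFAULTS, **user_prefs}
    settings["autonomy_level"] = min(
        settings["autonomy_level"], SOCIETAL_BOUNDS["max_autonomy_level"]
    )
    return settings

print(effective_settings({"autonomy_level": 5, "tone": "casual"}))
# -> {'autonomy_level': 3, 'tone': 'casual'}: the user's choice stands
#    where it fits, and the collectively set bound wins where it doesn't.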

Given the risks and difficulties, it’s worth considering why we are building this technology at all.

At OpenAI, we have two fundamental reasons. First, we believe it’s going to lead to a much better world than what we can imagine today (we are already seeing early examples of this in areas like education, creative work, and personal productivity). The world faces a lot of problems that we will need much more help to solve; this technology can improve our societies, and the creative ability of everyone to use these new tools is certain to astonish us. The economic growth and increase in quality of life will be astonishing.

Second, we believe it would be unintuitively risky and difficult to stop the creation of superintelligence. Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on, stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right.
