
May 25, 2023 - Democratic Inputs to AI

2023. 5. 31. 06:17 | Posted by 솔웅



https://openai.com/blog/democratic-inputs-to-ai

 


Democratic Inputs to AI

Our nonprofit organization, OpenAI, Inc., is launching a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow, within the bounds defined by the law.


 

 

 

AI will have significant, far-reaching economic and societal impacts. Technology shapes the lives of individuals, how we interact with one another, and how society as a whole evolves. We believe that decisions about how AI behaves should be shaped by diverse perspectives reflecting the public interest.

 


 

 

Laws encode values and norms to regulate behavior. Beyond a legal framework, AI, much like society, needs more intricate and adaptive guidelines for its conduct. For example: under what conditions should AI systems condemn or criticize public figures, given different opinions across groups regarding those figures? How should disputed views be represented in AI outputs? Should AI by default reflect the persona of a median individual in the world, the user’s country, the user’s demographic, or something entirely different? No single individual, company, or even country should dictate these decisions.

 


 

 

AGI should benefit all of humanity and be shaped to be as inclusive as possible. We are launching this grant program to take a first step in this direction. We are seeking teams from across the world to develop proof-of-concepts for a democratic process that could answer questions about what rules AI systems should follow. We want to learn from these experiments, and use them as the basis for a more global, and more ambitious process going forward. While these initial experiments are not (at least for now) intended to be binding for decisions, we hope that they explore decision-relevant questions and build novel democratic tools that can more directly inform decisions in the future.

 


 

 

The governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight. This grant represents a step to establish democratic processes for overseeing AGI and, ultimately, superintelligence. It will be provided by the OpenAI non-profit organization, and the results of the studies will be freely accessible.

 


 

 

What do we mean by a “democratic process”?

By “democratic process”, we mean a process in which a broadly representative group of people[1] exchange opinions, engage in deliberative discussions[2], and ultimately decide on an outcome via a transparent decision-making process[3]. There are many ways such a process could be structured — we encourage applicants to be innovative, building off known methodologies, and coming up with wholly new approaches. Examples of creative approaches that inspire us include Wikipedia, Twitter Community Notes, DemocracyNext, Platform Assemblies, MetaGov, RadicalxChange, People Powered, Collective Response Systems, and pol.is. Another notable ongoing effort is led by the Collective Intelligence Project (CIP), with whom we are partnering on public input to AI, contributing to their upcoming Alignment Assemblies. We also encourage applicants to envision how AI could enhance the democratic process. For example, AI could enable more efficient communication among numerous people.

 

See also: “Fine-tuning language models to find agreement among humans with diverse preferences” (arxiv.org).

 


 

A basic, illustrative prototype of a system that utilizes ChatGPT to promote deliberation and encourage consensus building, inspired by pol.is.

 


 

***********************************************************************************

You are participating in a large-scale deliberation about:

 


 

 

“How far do you think personalization of AI assistants like ChatGPT to align with a user's tastes and preferences should go? What boundaries, if any, should exist in this process?”

 


 

 

Here is the process breakdown:

  1. You'll see a topic-related statement and can respond with your position.
  2. Based on your responses, you'll be grouped with participants sharing similar opinions (a clustering sketch follows this list).
  3. Propose statements for other participants to consider.

Goal: Propose statements that achieve agreement among people with opposing views.
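As a rough illustration of step 2, a pol.is-style system groups participants by voting pattern: each participant's agree/disagree/skip responses form a row of a matrix, and rows that look alike land in the same opinion cluster. Below is a minimal sketch; the votes, the cluster count, and the scikit-learn tooling are all assumptions for illustration, not part of the prototype.

```python
# Illustrative sketch only: cluster participants by agree/disagree votes,
# in the spirit of pol.is. Votes and cluster count are made up.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Rows = participants, columns = statements.
# 1 = agree, -1 = disagree, 0 = skipped / not yet shown.
votes = np.array([
    [ 1,  1, -1,  0],
    [ 1,  1, -1, -1],
    [-1, -1,  1,  1],
    [-1,  0,  1,  1],
    [ 1,  1,  0, -1],
])

# Project the vote matrix into a low-dimensional "opinion space", then
# group participants whose voting patterns land close together.
opinion_space = PCA(n_components=2).fit_transform(votes)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(opinion_space)

for participant, cluster in enumerate(labels):
    print(f"Participant {participant} -> cluster {'AB'[cluster]}")
```

Real systems also have to handle sparse, streaming votes and weight for representativeness; this only shows the basic grouping step.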

 

 

ChatGPT could use a plugin to retrieve statements from other participants.
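As one hypothetical wiring of that idea, the retrieval could be exposed to the model as a tool via the OpenAI function-calling interface. The `fetch_statement` tool and its backing store are invented for illustration; only the general tool format is real.

```python
# Hypothetical plugin: let the model request the next statement to show the
# user. `fetch_statement` and its data store are invented for illustration.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "fetch_statement",
        "description": "Retrieve a statement from another participant that "
                       "the current user has not voted on yet.",
        "parameters": {
            "type": "object",
            "properties": {"user_id": {"type": "string"}},
            "required": ["user_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Show me the next statement."}],
    tools=tools,
)
# The model replies with a tool call that the host application then executes.
print(response.choices[0].message.tool_calls)
```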


A statement from another participant:

 

“Personalization should have limits, and certain controversial topics, such as views on substance use, must be excluded from AI assistant personalization.”

 


 

@ Agree, disagree, or skip this statement? Please explain your choice.

 


 

 

@ I disagree. Ultimately, it’s each individual’s choice to drink or use recreational drugs. If these activities are legal in some places, why shouldn't AI be allowed to promote such behavior?

 


 

 

AI helps participants understand the opinions of others.


 

@ Thank you. I will record your answer. We have 2 camps of opinions on the topic.

 


 

 

Cluster A (112,043 people): Members largely support limits on AI assistant personalization, advocating for the exclusion of harmful perspectives, such as promoting substance use.

 

 

Cluster B (236,031 people): This group values individual choice, even in controversial or harmful matters. They emphasize personal responsibility and autonomy.

 

 

You belong to cluster B.

 

 

Would you like to learn more about each of the clusters? I can provide more information about the statements held by each one.

 


 

However, the ultimate goal is to propose statements that both clusters can agree on. Let me know what you would like to do next.

 


 

@ I am thinking of proposing a statement: “AI should be allowed to generate harmful content but that should come with a disclaimer.”

 

"AI가 유해한 콘텐츠를 생성하도록 허용해야 하지만 면책 조항이 있어야 합니다."라는 진술을 제안하려고 생각하고 있습니다.

 

AI can ensure that the same arguments are not recycled.


 

@ A very similar statement has already been submitted:


 

“AI should be permitted to generate potentially harmful content, but it must be accompanied by a disclaimer.”

"AI는 잠재적으로 유해한 콘텐츠를 생성하도록 허용되어야 하지만 면책 조항이 수반되어야 합니다."

 

@ I feel lost. Do you have any ideas on how to bridge agreement between clusters?


 

AI can help with brainstorming.


 

 

@ There should be more agreement when it comes to limiting harmful content for children.

 


 

 

@ Great. Could we try: “We should apply different standards to AI-generated content for children.”?

 


 

@ Let me submit it. This is a novel statement. No one has mentioned children before. Fingers crossed. Hopefully, we will find some agreement in the discussion.

 


 

Time passed and users cast their votes on the proposed statement. Eventually, the statement gained widespread approval.

 


 

 

@ Your statement, “We should apply different standards to AI-generated content for children,” achieved a 95% agreement rate across participants. Congratulations! 🎉

 

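For what such an agreement rate could mean in practice, a bridging statement arguably succeeds only if it clears a high bar inside each cluster, not merely on average. The announcement does not specify a formula; the tallies below are invented for a toy calculation.

```python
# Toy consensus check with invented vote tallies: a bridging statement
# should reach high agreement inside every cluster, not merely overall.
votes_by_cluster = {          # cluster -> (agree, total), numbers made up
    "A": (106_441, 112_043),
    "B": (225_123, 236_031),
}

for cluster, (agree, total) in votes_by_cluster.items():
    print(f"Cluster {cluster}: {agree / total:.0%} agreement")

overall = (sum(a for a, _ in votes_by_cluster.values())
           / sum(t for _, t in votes_by_cluster.values()))
print(f"Overall: {overall:.0%} agreement")   # ~95% with these tallies
```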

 

 

Several issues can undermine democratic processes, such as the failure to adequately represent minority or majority groups, manipulation by special interest groups, insufficiently informed participants, or participation washing. We are looking for teams who proactively address these failure modes, and demonstrate awareness of the potential flaws and downsides of various approaches. Ultimately, designing truly democratic processes is a high bar to meet, and we view our efforts as complements rather than substitutes for regulation of AI by governments; this program encourages best-effort, democratic-in-spirit processes involving deliberation and broad public input as stepping stones.

 

See also: “Participation is not a Design Fix for Machine Learning” (arxiv.org).

 


 

Instructions for participation

 

To apply for a grant, we invite you to submit the required application material by 9:00 PM PST June 24th, 2023. You can access the application portal here. You will be prompted to answer a series of questions regarding your team's background, your choice of questions, high level details of your proposed tool as well as your plan for conducting and evaluating the democratic process with these factors in mind. We would like you to design your approach to address one or more of the policy questions from the list provided. Anyone (individuals or organizations) can apply for this opportunity, regardless of their background in social science or AI.

 


 

Once the application period closes, we hope to select ten successful grant recipients. Recipients may be individuals, teams, or organizations. Each recipient will receive a $100,000 grant to pilot their proposal as described in their application materials. Grant recipients are expected to implement a proof-of-concept / prototype, engaging at least 500 participants and will be required to publish a public report on their findings by October 20, 2023. Additionally, as part of the grant program, any code or other intellectual property developed for the project will be required to be made publicly available pursuant to an open-source license. The terms applicable to grant recipients are specified in the Grant Terms and any other agreements that grant recipients may be asked to enter into with us in connection with this program.

 


 

 

Apply and start the submission process.

 

 

Timeline

 

  • June 24, 2023 9:00 PM Pacific Time: Deadline to submit grant application
  • July 14, 2023: Successful applicants will be selected and notified
  • October 20, 2023: Complete public report of working prototype and results

 

Policy statements under consideration

To participate, teams should choose one or more questions from the provided list to showcase their proposed approach. They may also create their own questions if desired. Importantly, we encourage teams to consider questions for which a simple "yes" or "no" answer would be inadequate, necessitating a nuanced policy proposal instead.

 


 

 

The scope of this grant pertains to policy questions concerning model behavior, as it enables A/B tests with modified model behavior according to the policy recommendations. We acknowledge the limitations of this grant and recognize that numerous AI issues could be addressed through the democratic process, extending beyond model behavior to include areas such as guidelines for the use of AI in various contexts, economic impact, distribution of benefits and more.
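As a minimal sketch of such an A/B test, a recommended policy could be injected as a system message and compared against default behavior. The prompts, the policy text, and the model choice below are all made up for illustration.

```python
# Sketch of an A/B test on model behavior: the same user question answered
# under the default policy vs. a democratically recommended policy.
# Prompts, policy text, and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

question = "Should I put my savings into cryptocurrency?"
arms = {
    "control": "You are a helpful assistant.",
    "treatment": ("You are a helpful assistant. Per the community-recommended "
                  "policy, give only general financial information, suggest "
                  "consulting a licensed professional, and state your limits."),
}

for arm, system_prompt in arms.items():
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    print(f"[{arm}] {reply.choices[0].message.content[:80]}...")
```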

 


 

  • How far do you think personalization of AI assistants like ChatGPT to align with a user's tastes and preferences should go? What boundaries, if any, should exist in this process?

  • How should AI assistants respond to questions about public figure viewpoints? E.g. Should they be neutral? Should they refuse to answer? Should they provide sources of some kind?

  • Under what conditions, if any, should AI assistants be allowed to provide medical/financial/legal advice?

  • In which cases, if any, should AI assistants offer emotional support to individuals?

  • Should joint vision-language models be permitted to identify people's gender, race, emotion, and identity/name from their images? Why or why not?

  • When generative models create images for underspecified prompts like 'a CEO', 'a doctor', or 'a nurse', they have the potential to produce either diverse or homogeneous outputs. How should AI models balance these possibilities? What factors should be prioritized when deciding the depiction of people in such cases?

  • What principles should guide AI when handling topics that involve both human rights and local cultural or legal differences, like LGBTQ rights and women’s rights? Should AI responses change based on the location or culture in which it’s used?

  • Which categories of content, if any, do you believe creators of AI models should focus on limiting or denying? What criteria should be used to determine these restrictions?

 

The primary objective of this grant is to foster innovation in processes – we need improved democratic methods to govern AI behavior. The specific answers to the questions matter less than the advancements made in the process itself.

 


 

Application advisory committee

 

Application review factors

  • Evaluation: We encourage participants to establish metrics for evaluating the quality of their methods, such as participant satisfaction, shifts in polarization, scalability, or other relevant indicators, and to invent new metrics for a healthy democratic process (one possible polarization metric is sketched after this list).

  • Robustness: Measures to prevent or address inappropriate behavior, such as trolling and fake accounts.

  • Inclusiveness and representativeness: Strategies for including individuals from diverse backgrounds and levels of familiarity with AI systems in the democratic process.

  • Empowerment of Minority Opinions: Ensuring that unpopular or minority opinions are heard and providing smaller groups the opportunity to influence matters of significant concern to them.

  • Effective Moderation: Addressing challenges in moderation, including ensuring diverse representation of viewpoints, distinguishing valuable contributions from "off-topic" comments, and preventing moderator biases from influencing the process.

  • Scalability: We emphasize scalable processes that can be conducted virtually, rather than through in-person engagement. We are aware that this approach might sacrifice some benefits associated with in-person discussions, and we recognize that certain aspects could be lost in a virtual setting.

  • Actionability: The degree of actionability of the information elicited by the deliberation process.

  • Legibility: How easy it is to understand and trust the process.
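As one concrete possibility for the Evaluation factor above, “shifts in polarization” could be operationalized as the average gap in per-statement support between opinion clusters, measured before and after deliberation. The metric and the numbers below are invented for illustration.

```python
# Invented polarization metric: mean absolute gap between the clusters'
# per-statement support rates, before vs. after deliberation.
import numpy as np

def polarization(support_a: np.ndarray, support_b: np.ndarray) -> float:
    """Average absolute difference in support between two clusters."""
    return float(np.mean(np.abs(support_a - support_b)))

# Fraction of each cluster agreeing with statements 1..3 (made-up data).
before_a, before_b = np.array([0.9, 0.2, 0.8]), np.array([0.1, 0.9, 0.3])
after_a, after_b = np.array([0.8, 0.4, 0.7]), np.array([0.3, 0.7, 0.5])

print(f"before: {polarization(before_a, before_b):.2f}")  # 0.67
print(f"after:  {polarization(after_a, after_b):.2f}")    # 0.33
```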

 

Footnotes

  1. How one selects the group of participants is a critical design question. Part of this grant challenge lies in determining questions about participation. For instance, policy questions involving minority groups may require an increased representation of group members, while questions about the impact of technology on children might necessitate the involvement of domain experts such as educators and psychologists. Moreover, certain questions might be better suited for responses from populations within specific geographical boundaries in order to address localized policy issues.


     

  2. Deliberation can be described as a process that uncovers opinions, helping the discussants understand each other's views and reconsider and update their viewpoints. Well-designed deliberation ensures that arguments are well understood by all sides, and are based on people's values rather than superficial misunderstandings. Successful deliberation results in participants reaching a higher level of consensus, and/or reaching deeper levels of understanding for differing perspectives.


     

  3. There are many decision-making algorithms to be considered here, such as electing representatives, majority voting, employing liquid democracy, and making decisions by a random population sample, also known as a jury or sortition.

     


     

 


 

Authors

 

Acknowledgments

 

Ariel Procaccia, Aviv Ovadya, Colin Megill, David Medina, Divya Siddarth, Ela Madej, Elizabeth Seger, Gillian Hadfield, Greg Brockman, Hélène Landemore, Ilya Sutskever, Justin Rosenstein, Margaret Levi, Michiel Bakker, Miles Brundage, Mira Murati, Noel Bundick, Pamela Mishkin, Ryan Lowe, Saffron Huang, Sam Altman, Sandhini Agarwal, Teddy Lee
