This blog is where I, as a working developer, organize the new technologies and information I come across on the job. I have been fortunate to work as a consultant on projects for large companies in the United States, so I get many opportunities to encounter new technologies. I would like to share information about the tools used on US IT projects with as many readers as possible.
솔웅


https://openai.com/blog/how-openai-is-approaching-2024-worldwide-elections

 

How OpenAI is approaching 2024 worldwide elections

We’re working to prevent abuse, provide transparency on AI-generated content, and improve access to accurate voting information.

Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process. 

 

Our tools empower people to improve their daily lives and solve complex problems  – from using AI to enhance state services to simplifying medical forms for patients.

 

We want to make sure that our AI systems are built, deployed, and used safely. Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.

 

As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency. We have a cross-functional effort dedicated to election work, bringing together expertise from our safety systems, threat intelligence, legal, engineering, and policy teams to quickly investigate and address potential abuse. 

 

The following are key initiatives our teams are investing in to prepare for elections this year:

 

Preventing abuse

We expect and aim for people to use our tools safely and responsibly, and elections are no different. We work to anticipate and prevent relevant abuse—such as misleading “deepfakes”, scaled influence operations, or chatbots impersonating candidates. Prior to releasing new systems, we red team them, engage users and external partners for feedback, and build safety mitigations to reduce the potential for harm. For years, we’ve been iterating on tools to improve factual accuracy, reduce bias, and decline certain requests. These tools provide a strong foundation for our work around election integrity. For instance, DALL·E has guardrails to decline requests that ask for image generation of real people, including candidates.

 

We regularly refine our Usage Policies for ChatGPT and the API as we learn more about how people use or attempt to abuse our technology. A few to highlight for elections: 

 

  • We’re still working to understand how effective our tools might be for personalized persuasion. Until we know more, we don’t allow people to build applications for political campaigning and lobbying. 

  • People want to know and trust that they are interacting with a real person, business, or government. For that reason, we don’t allow builders to create chatbots that pretend to be real people (e.g., candidates) or institutions (e.g., local government). 

  • We don’t allow applications that deter people from participation in democratic processes—for example, misrepresenting voting processes and qualifications (e.g., when, where, or who is eligible to vote) or that discourage voting (e.g., claiming a vote is meaningless).

  • With our new GPTs, users can report potential violations to us.

Transparency around AI-generated content

 

Better transparency around image provenance—including the ability to detect which tools were used to produce an image—can empower voters to assess an image with trust and confidence in how it was made. We’re working on several provenance efforts. Early this year, we will implement the Coalition for Content Provenance and Authenticity’s digital credentials—an approach that encodes details about the content’s provenance using cryptography—for images generated by DALL·E 3. 
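To make the idea of cryptographically encoded provenance concrete, here is a toy sketch, not the actual C2PA format or OpenAI's implementation. It signs a small manifest describing how an image was produced, so anyone holding the key can later verify both the claim and that the image was not altered. The real C2PA standard uses X.509 certificates and public-key signatures rather than the shared-key HMAC used here, and all the names (`sign_manifest`, the demo key) are hypothetical.

```python
import hashlib
import hmac
import json

def sign_manifest(image_bytes: bytes, tool: str, key: bytes) -> dict:
    """Build and sign a toy provenance manifest (illustrative only;
    the real C2PA spec uses certificate-based public-key signatures)."""
    manifest = {
        "claim_generator": tool,  # which tool claims to have made the image
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(image_bytes: bytes, manifest: dict, key: bytes) -> bool:
    """Check that the image matches the manifest and the signature is valid."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    if claim["image_sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False  # image was altered after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

key = b"demo-secret"
img = b"\x89PNG...fake image bytes"
m = sign_manifest(img, "hypothetical-image-model", key)
print(verify_manifest(img, m, key))         # True
print(verify_manifest(img + b"x", m, key))  # False: tampered image fails
```

The point of the design is that the credential travels with the image: any later edit changes the hash, so the signature check fails and the provenance claim no longer applies.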

 

We are also experimenting with a provenance classifier, a new tool for detecting images generated by DALL·E. Our internal testing has shown promising early results, even where images have been subject to common types of modifications. We plan to soon make it available to our first group of testers—including journalists, platforms, and researchers—for feedback.

 

Finally, ChatGPT is increasingly integrating with existing sources of information—for example, users will start to get access to real-time news reporting globally, including attribution and links. Transparency around the origin of information and balance in news sources can help voters better assess information and decide for themselves what they can trust.

 

Improving access to authoritative voting information

 

In the United States, we are working with the National Association of Secretaries of State (NASS), the nation's oldest nonpartisan professional organization for public officials. ChatGPT will direct users to CanIVote.org, the authoritative website on US voting information, when asked certain procedural election related questions—for example, where to vote. Lessons from this work will inform our approach in other countries and regions. 
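As a rough illustration of this routing behavior, the following is a toy keyword router, not OpenAI's actual implementation: it sends procedural US voting questions to the authoritative source instead of answering them directly. The function name, cue list, and placeholder return value are all hypothetical.

```python
def route_election_question(question: str) -> str:
    """Toy sketch (illustrative only): redirect procedural US voting
    questions to CanIVote.org rather than answering directly."""
    procedural_cues = (
        "where do i vote",
        "polling place",
        "register to vote",
        "am i eligible to vote",
        "voter registration",
    )
    if any(cue in question.lower() for cue in procedural_cues):
        return "For authoritative US voting information, visit CanIVote.org."
    return "ANSWER_NORMALLY"  # placeholder for the model's usual response path

print(route_election_question("Where do I vote in Ohio?"))
# → "For authoritative US voting information, visit CanIVote.org."
```

A production system would use a trained classifier rather than string matching, but the design principle is the same: for a narrow class of high-stakes procedural questions, defer to an authoritative external source.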

 

We’ll have more to share in the coming months. We look forward to continuing to work with and learn from partners to anticipate and prevent potential abuse of our tools in the lead up to this year’s global elections.

 
