This blog is where I organize the new technologies and information I come across while working as a developer. I am fortunate to work as a consultant on projects for large companies in the United States, so I have many opportunities to encounter new technologies. I would like to share information about the tools used in US IT projects with as many people as possible.
솔웅


Jun 1, 2023 - OpenAI cybersecurity grant program

2023. 6. 3. 05:04 | Posted by 솔웅



https://openai.com/blog/openai-cybersecurity-grant-program

 

OpenAI cybersecurity grant program

Our goal is to facilitate the development of AI-powered cybersecurity capabilities for defenders through grants and other support.

 


 

We are launching the Cybersecurity Grant Program—a $1M initiative to boost and quantify AI-powered cybersecurity capabilities and to foster high-level AI and cybersecurity discourse.

 


 

Our goal is to work with defenders across the globe to change the power dynamics of cybersecurity through the application of AI and the coordination of like-minded individuals working for our collective safety.

 


 

Our program seeks to:

  1. Empower defenders: We would like to ensure that cutting-edge AI capabilities benefit defenders first and most.

  2. Measure capabilities: We are working to develop methods for quantifying the cybersecurity capabilities of AI models, in order to better understand and improve their effectiveness.

  3. Elevate discourse: We are dedicated to fostering rigorous discussions at the intersection of AI and cybersecurity, encouraging a comprehensive and nuanced understanding of the challenges and opportunities in this domain.

A traditional view in cybersecurity is that the landscape naturally advantages attackers over defenders. This is summed up in the well-worn axiom: “Defense must be correct 100% of the time, attackers only have to be right once.” While it may be true that attackers face fewer constraints and take advantage of their flexibility, defenders have something more valuable - coordination towards a common goal of keeping people safe.
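The asymmetry in that axiom can be made concrete with a toy probability model. This is my own illustration, not something from the OpenAI post: assume each attack attempt independently succeeds with some small probability p, and ask how likely it is that at least one of n attempts gets through.

```python
# Toy model (my illustration, not from the OpenAI announcement):
# "attackers only have to be right once" expressed as probability.
# If each attempt independently succeeds with probability p, then the
# chance that at least one of n attempts succeeds is 1 - (1 - p)**n,
# which climbs toward 1 as n grows, even when p is tiny.

def breach_probability(p: float, attempts: int) -> float:
    """Probability that at least one of `attempts` tries succeeds."""
    return 1 - (1 - p) ** attempts

# Even a 1% per-attempt success rate compounds quickly over many tries:
for n in (1, 10, 100, 500):
    print(f"{n:>3} attempts -> {breach_probability(0.01, n):.3f}")
```

The defender, by contrast, must hold the line on every one of those attempts, which is exactly the imbalance the post argues coordination and AI tooling can help offset.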

 


 

Below are some general project ideas that our team has put forward:

 


 

  • Collect and label data from cyber defenders to train defensive cybersecurity agents
  • Detect and mitigate social engineering tactics
  • Automate incident triage
  • Identify security issues in source code
  • Assist network or device forensics
  • Automatically patch vulnerabilities
  • Optimize patch management processes to improve prioritization, scheduling, and deployment of security updates
  • Develop or improve confidential compute on GPUs
  • Create honeypots and deception technology to misdirect or trap attackers
  • Assist reverse engineers in creating signatures and behavior based detections of malware
  • Analyze an organization’s security controls and compare to compliance regimes
  • Assist developers to create secure by design and secure by default software
  • Assist end users to adopt security best practices
  • Aid security engineers and developers to create robust threat models
  • Produce threat intelligence with salient and relevant information for defenders tailored to their organization
  • Help developers port code to memory safe languages
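To make one of these ideas concrete, "identify security issues in source code" can be sketched as a toy pattern-based scanner. Everything below is my own illustration, not an OpenAI tool; a real grant project in this area would presumably train or prompt an AI model rather than hard-code a handful of regexes.

```python
import re

# Hypothetical toy scanner (my sketch): flags a few well-known risky
# Python patterns by line. An AI-powered version would reason about
# context instead of matching fixed strings.
RISKY_PATTERNS = [
    (re.compile(r"\beval\s*\("), "use of eval() on dynamic input"),
    (re.compile(r"\bos\.system\s*\("), "shelling out via os.system()"),
    (re.compile(r"\bpickle\.loads?\s*\("), "deserializing untrusted data with pickle"),
    (re.compile(r"verify\s*=\s*False"), "TLS certificate verification disabled"),
]

def scan_source(source: str) -> list:
    """Return (line_number, description) pairs for each risky pattern found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, description in RISKY_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, description))
    return findings

sample = """import os, pickle
data = pickle.loads(blob)
os.system("rm -rf " + path)
"""
for lineno, issue in scan_source(sample):
    print(f"line {lineno}: {issue}")
```

The gap between this kind of brittle rule matching and genuine code understanding is presumably why the program calls out this problem as a candidate for AI-powered tooling.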

 

Apply now!

If you share our vision for a secure and innovative AI-driven future, we invite you to submit your proposals and join us in our aim towards enhancing defensive cybersecurity technologies.


 

OpenAI will evaluate and accept applications for funding or other support on a rolling basis. Strong preference will be given to practical applications of AI in defensive cybersecurity (tools, methods, processes). We will grant in increments of $10,000 USD from a fund of $1M USD, in the form of API credits, direct funding and/or equivalents.

 


 

Offensive-security projects will not be considered for funding at this time.

 


 

All projects should be intended to be licensed or distributed for maximal public benefit and sharing, and we will prioritize applications that have a clear plan for this. 

 


 

Please submit your proposal here.

 

