
SWE-bench: Can Language Models Resolve Real-World GitHub Issues? #758

irthomasthomas opened this issue Mar 16, 2024 · 1 comment


SWE-bench README



Code and data for our ICLR 2024 paper [SWE-bench: Can Language Models Resolve Real-World GitHub Issues?](http://swe-bench.github.io/paper.pdf)


Please refer to our website for the public leaderboard, and see the change log for information on the latest updates to the SWE-bench benchmark.

Overview

SWE-bench is a benchmark for evaluating large language models on real-world software issues collected from GitHub. Given a codebase and an issue, a language model is tasked with generating a patch that resolves the described problem.


Set Up

To build SWE-bench from source, follow these steps:

  1. Clone this repository locally
  2. cd into the repository.
  3. Run conda env create -f environment.yml to create a conda environment named swe-bench
  4. Activate the environment with conda activate swe-bench
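The steps above can be run as the following commands (the clone URL assumes the upstream princeton-nlp/SWE-bench repository; adjust it if you are working from a fork):

```shell
# Clone the repository and enter it
git clone https://github.com/princeton-nlp/SWE-bench.git
cd SWE-bench

# Create the conda environment defined in environment.yml,
# then activate it (the environment is named swe-bench)
conda env create -f environment.yml
conda activate swe-bench
```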

Usage

You can download the SWE-bench dataset directly (dev, test sets) or from HuggingFace.

To use SWE-bench, you can:

  • Train your own models on our pre-processed datasets
  • Run inference on existing models (either models you have on disk, like LLaMA, or models you access through an API, like GPT-4). In the inference step, the model is given a repository and an issue and tries to generate a fix for it.
  • Evaluate models against SWE-bench. This takes a SWE-bench task and a model-proposed solution and evaluates the solution's correctness.
  • Run SWE-bench's data collection procedure on your own repositories to create new SWE-bench tasks.
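To illustrate the inference-then-evaluation flow described above, here is a minimal sketch in Python. The field names (e.g. `instance_id`, `model_patch`), the example instance id, and the `run_inference` helper are illustrative assumptions, not the authoritative schema of the released dataset or harness:

```python
# Sketch of the SWE-bench inference/evaluation data flow.
# All field names below are illustrative assumptions.

# A task instance pairs a repository snapshot with a GitHub issue.
task = {
    "instance_id": "astropy__astropy-12907",  # hypothetical example id
    "repo": "astropy/astropy",
    "base_commit": "abc123",  # commit the model's patch must apply to
    "problem_statement": "Issue text describing the bug...",
}

def run_inference(instance, model):
    """Inference: the model reads the issue (plus any retrieved code
    context) and emits a unified diff intended to resolve it."""
    prompt = (
        f"Repository: {instance['repo']}\n"
        f"Issue:\n{instance['problem_statement']}"
    )
    return model(prompt)  # expected to return a patch string

# Evaluation then consumes predictions in a shape like this: the patch
# is applied at base_commit and the repo's own tests decide correctness.
prediction = {
    "instance_id": task["instance_id"],
    "model_name_or_path": "my-model",
    "model_patch": "diff --git a/... b/...",
}
```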

Downloads

Datasets:
  • 🤗 SWE-bench
  • 🤗 "Oracle" Retrieval
  • 🤗 BM25 Retrieval 13K
  • 🤗 BM25 Retrieval 27K
  • 🤗 BM25 Retrieval 40K
  • 🤗 BM25 Retrieval 50K (Llama tokens)

Models:
  • 🦙 SWE-Llama 13b
  • 🦙 SWE-Llama 13b (PEFT)
  • 🦙 SWE-Llama 7b
  • 🦙 SWE-Llama 7b (PEFT)

Tutorials

We've also written the following blog posts on how to use different parts of SWE-bench. If you'd like to see a post about a particular topic, please let us know via an issue.

  • [Nov 1, 2023] Collecting Evaluation Tasks for SWE-bench (🔗)
  • [Nov 6, 2023] Evaluating on SWE-bench (🔗)

Contributions

We would love to hear from the broader NLP, Machine Learning, and Software Engineering research communities, and we welcome any contributions, pull requests, or issues! To do so, please either file a new pull request or issue and fill in the corresponding templates accordingly. We'll be sure to follow up shortly!

Contact person: Carlos E. Jimenez and John Yang (Email: {carlosej, jy1682}@princeton.edu).

Citation

If you find our work helpful, please use the following citation.

@inproceedings{
    jimenez2024swebench,
    title={{SWE}-bench: Can Language Models Resolve Real-world Github Issues?},
    author={Carlos E Jimenez and John Yang and Alexander Wettig and Shunyu Yao and Kexin Pei and Ofir Press and Karthik R Narasimhan},
    booktitle={The Twelfth International Conference on Learning Representations},
    year={2024},
    url={https://openreview.net/forum?id=VTF8yNQM66}
}

License

MIT. See LICENSE.md.


Related content

  • #494 (similarity score: 0.88)
  • #750 (similarity score: 0.88)
  • #627 (similarity score: 0.87)
  • #649 (similarity score: 0.87)
  • #749 (similarity score: 0.87)
  • #684 (similarity score: 0.87)
