Realistic Evaluation

We used our recorded fine-grained source-code history to analyze how representative existing artificial evaluations of recommendation systems for software engineering (RSSE) tools are (ASE16). We are currently working on a reusable evaluation benchmark, based on our public datasets, that others can use to create comparable RSSE experimental setups.
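To make the idea concrete, the following minimal sketch shows one way a recorded history could be replayed to evaluate a recommender realistically: at each recorded event, the tool is queried with the context the developer actually saw, and its proposals are checked against what the developer actually did next. All names here (Recommender, CompletionEvent, propose, topKRecall) are illustrative assumptions, not the benchmark's actual API.

```java
import java.util.List;

// Hypothetical recommender interface; the real benchmark's API may differ.
interface Recommender {
    /** Returns proposals for the given context, ranked by confidence. */
    List<String> propose(String context);
}

/** One recorded event: the IDE context plus the developer's actual choice. */
record CompletionEvent(String context, String actualSelection) {} // requires Java 16+

public class HistoryReplayEval {

    /** Replays the recorded events in order and measures top-k recall. */
    static double topKRecall(Recommender recommender, List<CompletionEvent> history, int k) {
        int hits = 0;
        for (CompletionEvent event : history) {
            List<String> proposals = recommender.propose(event.context());
            int cutoff = Math.min(k, proposals.size());
            // A hit means the developer's real selection was among the top k proposals.
            if (proposals.subList(0, cutoff).contains(event.actualSelection())) {
                hits++;
            }
        }
        return history.isEmpty() ? 0.0 : (double) hits / history.size();
    }
}
```

Replaying the genuine event stream, rather than sampling queries artificially from released code, is what makes such a setup realistic: the recommender is judged on the exact situations developers faced, in the order they occurred.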
