Testing in Visual Studio

In a collaboration with TestRoots, we extracted test information from our enriched event stream and provide it in a format compatible with their WatchDog tooling, to allow further analysis of testing behavior in Visual Studio.

Previous work by Beller et al. analyzed how Java developers test. They applied WatchDog in the Java IDEs Eclipse and IntelliJ. The experiments in their paper were based on intervals of several activities (i.e., IDE open, active periods of the developer, reading and typing in a file, test execution). The original release of our interaction tracker, FeedBaG, captured all commands initiated by the developer, so all of these activities were already included in our event stream. However, for test executions, we only captured that an execution was initiated, but no further details about the individual tests. The enriched event streams captured by the extended release, FeedBaG++, provide an opportunity to extend the study to Visual Studio. In the extended release, we instrumented the ReSharper test runner. It captures the name of each executed test, as well as the duration and result of the run. We then designed a test event data structure to store the relevant information.
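The actual data structure is part of the C#-based FeedBaG++ code base and is not reproduced here; the following is only a minimal sketch of how such a test event could be modeled, with all field and type names being assumptions for illustration:

```python
from dataclasses import dataclass, field
from datetime import timedelta
from enum import Enum
from typing import List


class TestResult(Enum):
    """Possible outcomes of a single test (the set of values is an assumption)."""
    SUCCESS = "success"
    FAILED = "failed"
    IGNORED = "ignored"
    ERROR = "error"


@dataclass
class TestCaseResult:
    """One executed test: its fully qualified name, duration, and outcome."""
    test_name: str
    duration: timedelta
    result: TestResult


@dataclass
class TestRunEvent:
    """A test-run event as it could appear in the enriched event stream."""
    triggered_at: str                 # timestamp of the run, e.g. ISO-8601
    was_aborted: bool = False
    tests: List[TestCaseResult] = field(default_factory=list)
```

Such an event would be emitted once per test-runner invocation, carrying one entry per executed test.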

A technical difference is that FeedBaG++ captures (and uploads) a fine-grained event stream, whereas WatchDog lifts this stream to intervals on the client side and only uploads the resulting intervals. Intervals capture when and for how long an activity took place. To make our enriched event streams compatible with WatchDog, we implemented an offline conversion in CARET that lifts them to the intervals described in their paper. The original authors confirmed that the created intervals are fully compatible and usable to run the experiments in their pipeline.
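The core idea of lifting events to intervals can be illustrated as follows: consecutive events of the same activity are merged into one interval, which is closed when the activity changes or when the gap between events exceeds a timeout. This is a simplified sketch, not the actual CARET implementation; the function name, event representation, and timeout value are assumptions:

```python
from dataclasses import dataclass
from typing import Iterable, List, Optional, Tuple

# A fine-grained event: (timestamp in seconds, activity name).
Event = Tuple[float, str]


@dataclass
class Interval:
    """A contiguous period of one activity (start/end in seconds)."""
    activity: str
    start: float
    end: float


def lift_to_intervals(events: Iterable[Event], timeout: float = 16.0) -> List[Interval]:
    """Merge consecutive events of the same activity into intervals.

    An interval is closed when the activity changes or when the gap
    between two events exceeds `timeout` (the threshold is an assumption).
    """
    intervals: List[Interval] = []
    current: Optional[Interval] = None
    for ts, activity in events:
        if current and current.activity == activity and ts - current.end <= timeout:
            current.end = ts  # extend the currently open interval
        else:
            if current:
                intervals.append(current)  # close the previous interval
            current = Interval(activity, ts, ts)
    if current:
        intervals.append(current)
    return intervals
```

For example, typing events at seconds 0 and 5 collapse into one typing interval, while a typing event at second 30 starts a new interval because the gap exceeds the timeout.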

The logic behind maintaining the intervals is complex and depends on various events that occur in the IDE. We created an extensive test suite as a means to clearly communicate our expectations to the WatchDog team. However, the events captured in real deployments do not always arrive in a deterministic order, and it is hard to write unit tests for all cases. To allow debugging the interval export and to spot errors early, we implemented a visualization tool for the interval creation, which produced the following visualization.

Note that testing-related events have not yet been collected in the industrial FeedBaG deployments. However, the dataset collected in the field study contains them, which allowed us to extract testing information for Visual Studio that we could provide to the WatchDog team. While this project is still an ongoing collaboration, it provides an indication of the research possibilities that enriched event streams open up. It shows that having FeedBaG++ made it easy to extend WatchDog to a new IDE. It also shows that enriched event streams already contain a wide range of context information and that new generators can be added to capture more.