Event

Location: NDSSL Conference Room 2018, RB XV, Corporate Research Center

Graduate students and NDSSL faculty conduct weekly seminars on a variety of topics related to the lab's research.

Speaker 1: Maureen Lawrence-Kuether

Title: Elements of Effective Research Communication

Abstract: This talk will provide an overview of strategies researchers can use to communicate their work more effectively. Scientific training typically leaves researchers ill-prepared for effective communication outside of academia, yet funding agencies and research institutions are increasingly encouraging researchers to communicate their results directly to the public. Whether you pursue an academic path of grant writing and peer review, work to shape public policy, or head to industry, strong communication skills will make you a more successful professional. We will discuss writing tips, presentation and public speaking skills, and other methods you can use to get your message out effectively. A good scientist can perform great work; a great scientist can explain why that work matters.

Speaker 2: Zalia Shams

Title: Automated Assessment of Student-written Tests Based on Defect-detection Capability

Abstract: Software testing is important, but judging whether a set of software tests is effective is hard. The same problem appears in the classroom, as educators frequently include software testing activities in assignments. While tests can be hand-graded, some educators use objective performance metrics to assess software tests, just as professionals do. The most common measures at present are code coverage measures, which track how much of the student's code (in terms of statements, branches, or some combination) is exercised by the corresponding tests. Code coverage has limitations: it does not assess whether computational results from the executed code are checked against expectations, and it sometimes overestimates the true quality of the tests. We instead evaluate students' tests by how many defects they can detect, using both injected errors (mutation testing) and actual errors present in other students' code (all-pairs testing). We overcome a number of technical challenges to apply these two approaches in classroom assessment systems. We then compare all three methods (all-pairs testing, mutation testing, and code coverage) in terms of how well they predict the defect-detection capability of student-written tests when run against a large collection of known, authentic, human-written errors. Experimental results encompassing over 700,000 test runs show that all-pairs testing is the most effective predictor of the underlying defect-revealing capability of a test suite. Further, no strong correlation was found between defect-revealing capability and code coverage. Investigating the effectiveness of student-written tests, we find that students are mainly "happy path" testers: they write tests to show that their code "works" rather than to find real errors.
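
A minimal sketch of the contrast the abstract draws (the code and names below are hypothetical illustrations, not material from the talk): mutation testing seeds a small artificial defect, a "mutant," into working code and asks whether the test suite notices, while a "happy path" test that only confirms the obvious case lets such mutants survive.

    def absolute(x):
        """Original implementation under test."""
        return -x if x < 0 else x

    def absolute_mutant(x):
        """Mutant: the negation was dropped, a typical seeded defect."""
        return x if x < 0 else x

    def happy_path_suite(f):
        """Exercises only the obvious case, as many student tests do."""
        assert f(5) == 5

    def thorough_suite(f):
        """Also exercises the negative branch, where the mutant misbehaves."""
        assert f(5) == 5
        assert f(-3) == 3

    if __name__ == "__main__":
        happy_path_suite(absolute)
        happy_path_suite(absolute_mutant)    # passes too: the mutant survives
        thorough_suite(absolute)
        try:
            thorough_suite(absolute_mutant)  # fails: the mutant is "killed"
        except AssertionError:
            print("mutant killed by the thorough suite")

A happy-path suite can achieve full statement coverage of the original function while still missing the seeded defect, which is one way coverage can overestimate test quality.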