Abstract

With the growing scale of education comes a need for automated tools to evaluate assignments. As part of this thesis, we have developed a technique to evaluate assignments on regular expressions (regexes). Every student's solution is different, which makes it hard for a single approach to grade them all. Hence, in addition to the existing techniques, we offer a new way of evaluating regexes, which we call the regex edit distance. The idea is to find the minimal changes to a wrong answer that make its language equivalent to that of a correct answer. This approach is along the lines of the one used by Automata Tutor to grade DFAs. We also spoke with several graders and observed that, in some sense, they were computing the regex edit distance when assigning partial credit.
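
To make the idea concrete, here is a minimal sketch in Python of how one might approximate a regex edit distance. It is illustrative only: the token-level edit moves, the tiny alphabet, and the bounded-length language comparison are all simplifying assumptions of this sketch, not the (PSPACE-hard) computation studied in the thesis.

    import itertools
    import re

    ALPHABET = "ab"          # assumed input alphabet (hypothetical)
    TOKENS = list("ab()|*")  # assumed token set for edits (hypothetical)
    MAX_LEN = 6              # languages are compared only up to this length

    def language_sample(regex, max_len=MAX_LEN):
        """Approximate a regex's language by the set of strings it matches
        up to max_len; exact equivalence checking is PSPACE-complete."""
        matched = set()
        for n in range(max_len + 1):
            for chars in itertools.product(ALPHABET, repeat=n):
                s = "".join(chars)
                if re.fullmatch(regex, s):
                    matched.add(s)
        return matched

    def single_edits(regex):
        """Yield every regex one token insertion, deletion, or
        substitution away from `regex`."""
        for i in range(len(regex) + 1):          # insertions
            for t in TOKENS:
                yield regex[:i] + t + regex[i:]
        for i in range(len(regex)):
            yield regex[:i] + regex[i + 1:]      # deletions
            for t in TOKENS:                     # substitutions
                yield regex[:i] + t + regex[i + 1:]

    def approx_edit_distance(wrong, correct, max_dist=2):
        """Breadth-first search over token edits until some candidate's
        (bounded) language matches the correct answer's."""
        target = language_sample(correct)
        frontier, seen = {wrong}, {wrong}
        for dist in range(max_dist + 1):
            for r in frontier:
                try:
                    if language_sample(r) == target:
                        return dist
                except re.error:
                    pass                         # skip ill-formed candidates
            frontier = {e for r in frontier for e in single_edits(r)} - seen
            seen |= frontier
        return None                              # no fix within max_dist

    # A wrong answer one insertion away from the correct one:
    print(approx_edit_distance("a*b", "a*b*"))   # -> 1 (insert the final "*")

Even this toy version makes the grading intuition visible: the fewer edits needed to reach a language-equivalent answer, the more partial credit the submission arguably deserves.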

Computing the regex edit distance is PSPACE-hard and appears computationally intractable even for college-level submissions. To deal with this intractability, we consider a simpler version of the regex edit distance that can be computed for many college-level submissions. We hypothesize that our version of the regex edit distance is a good metric for evaluating regexes and awarding partial credit. In an initial study, we observed a strong relationship between the partial credit awarded and our version of the regex edit distance.

Library of Congress Subject Headings

Computer programming--Study and teaching; Grading and marking (Students)--Data processing

Publication Date

1-2017

Document Type

Thesis

Student Type

Graduate

Degree Name

Computer Science (MS)

Department, Program, or Center

Computer Science (GCCIS)

Advisor

Edith Hemaspaandra

Advisor/Committee Member

Ivona Bezakova

Advisor/Committee Member

Zack Butler

Comments

Physical copy available from RIT's Wallace Library at QA76.6 .K35 2017

Campus

RIT – Main Campus

Plan Codes

COMPSCI-MS
