The Preload Script¶
Overview¶
The preload script loads the R packages and functions needed before running the student's code. It also contains code that overrides certain functions to help student submissions conform to the assignment requirements.
The preload script gives you a great deal of flexibility. Below, we break the default preload script provided with the Example Assignment into parts and explain its key components. We also provide additional examples of implementing custom evaluation and scoring functions.
Detaching Disallowed Packages¶
The following part of the preload script prevents students from using disallowed packages and functions. If students are only allowed to use the base package, you can use the following code to detach all non-base packages.
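The exact code shipped with the Example Assignment is not reproduced here, so the following is a minimal sketch of how the detaching step could look; the helper name detach_all_except is an assumption for illustration.

```r
# Detach every attached package except those we explicitly keep.
detach_all_except <- function(keep) {
  attached <- grep("^package:", search(), value = TRUE)
  for (pkg in setdiff(attached, keep)) {
    detach(pkg, character.only = TRUE, unload = TRUE, force = TRUE)
  }
}

# Keep only the base package attached.
detach_all_except(keep = "package:base")
```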
If you want to allow students to use the stats package as well, you may want to modify the last line to the following:
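In terms of the hypothetical sketch above, that last line would become:

```r
# Also keep the stats package attached.
detach_all_except(keep = c("package:base", "package:stats"))
```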
Preventing Users from Loading Additional Packages¶
The following code snippet prevents students from loading additional packages with the library() or require() functions.
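A possible sketch, assuming the preload script and the student's code are both evaluated in the global environment so that these definitions mask the base functions:

```r
# Mask library() and require() so that student code cannot attach packages.
library <- function(...) {
  stop("Loading additional packages is not allowed in this assignment.")
}
require <- function(...) {
  stop("Loading additional packages is not allowed in this assignment.")
}
```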
Preventing Disallowed Patterns of Use¶
Even if disallowed packages are detached, students may still try to access their functions using the :: operator. For example, stats::lm() will still work even if the stats package is detached.
To close this loophole, we can scan for uses of the :: or ::: operators in the submission file before loading it. To do this, we can redefine the source() function in R as follows.
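A simplified sketch of such a wrapper; the actual preload script may perform additional checks:

```r
# Reject the submission if it uses the :: or ::: operator, then delegate
# to the real source() for loading.
source <- function(file, ...) {
  code <- readLines(file, warn = FALSE)
  if (any(grepl(":::?", code))) {
    stop("Use of the :: or ::: operator is not allowed.")
  }
  base::source(file, ...)
}
```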
The above code can be extended to detect more specific patterns. For example, if certain expressions are required or disallowed in the student's submission, the corresponding checks can be added to the source() function above.
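For illustration only, the checks below are hypothetical: they require the submission to call a function named my_solver() and disallow for loops.

```r
source <- function(file, ...) {
  code <- paste(readLines(file, warn = FALSE), collapse = "\n")
  if (grepl(":::?", code)) {
    stop("Use of the :: or ::: operator is not allowed.")
  }
  if (!grepl("my_solver\\s*\\(", code)) {   # required pattern (hypothetical)
    stop("Your submission must call my_solver().")
  }
  if (grepl("\\bfor\\s*\\(", code)) {       # disallowed pattern (hypothetical)
    stop("for loops are not allowed in this assignment.")
  }
  base::source(file, ...)
}
```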
Fixing Random Seed¶
For problems that require random number generation, you may want to fix the random seed to make the results reproducible and to prevent students from setting arbitrary random seeds.
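A sketch of this idea; the seed value is arbitrary, and masking set.seed() assumes the student's code is evaluated in the global environment:

```r
# Fix the seed once, then mask set.seed() so student code cannot change it.
set.seed(42)
set.seed <- function(...) {
  invisible(NULL)  # silently ignore any attempt to reset the seed
}
```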
Custom Evaluation Function¶
Typically, the function students are asked to implement returns output values that can be evaluated directly. However, sometimes you may want to evaluate the output in a more specific way. In such cases, you can define a custom evaluation function in the preload script.
For example, suppose students are asked to implement a function predict(data, coef) that returns predicted values. You may want the evaluation to return the mean squared error (MSE) between the true values and the predicted values rather than the raw predictions. In this case, you can define a custom evaluation function as follows:
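A sketch of such an evaluation function; the argument names and the way the true values are supplied are assumptions, since they depend on how the test cases are set up:

```r
# Call the student's predict() and return the MSE against the true values
# instead of the raw predictions.
evaluate_predict <- function(data, coef, y_true) {
  y_hat <- predict(data, coef)
  mean((y_true - y_hat)^2)
}
```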
In this case, the configuration file should specify evaluate_predict as the function name and predict as the file name, asking students to submit a predict.R file that contains the predict() function as follows:
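A hypothetical predict.R a student might submit, assuming a linear model where data is a design matrix and coef is a coefficient vector:

```r
# predict.R: return predicted values for a linear model.
predict <- function(data, coef) {
  as.vector(as.matrix(data) %*% coef)
}
```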
Custom Scoring Function¶
By default, the autograder runs both the solution and the student's code and compares their output values to determine the score. However, sometimes you may want to evaluate the output values in a more specific way and assign a score based on that evaluation. In such cases, you can define a custom scoring function in the preload script.
Note that the --skip-solution option must be turned on in the run_autograder script so that the solution code is not run.
In order to use this feature, your custom evaluation function should return a list that contains the following elements:
- score: the score for the test case
- details: the output message to be shown to the students
Continuing the example above, you may want to score the test case quantitatively based on the MSE between the true values and the predicted values. In this case, you can define a custom scoring function as follows:
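A sketch of such a scoring function; the mapping from MSE to a score is an illustrative choice, not the required one:

```r
# Return both a score and a message; the score shrinks as the MSE grows.
evaluate_predict <- function(data, coef, y_true) {
  y_hat <- predict(data, coef)
  mse <- mean((y_true - y_hat)^2)
  list(
    score   = max(0, 1 - mse),               # clamp to the 0-1 range
    details = sprintf("Mean squared error: %.4f", mse)
  )
}
```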
If the score is not on a 0-to-1 scale, you will need to modify the config/config.prob.yaml file to include a maxscore field for each test case as follows:
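The exact schema of config.prob.yaml is not shown here, so the excerpt below is only a hypothetical illustration of where a maxscore field might be added:

```yaml
# Hypothetical excerpt: each test case declares its own maximum score.
testcases:
  - name: predict_small_dataset
    maxscore: 10
  - name: predict_large_dataset
    maxscore: 20
```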