Reference¶
Note

check_ functions typically ‘dive’ deeper into a part of the state they were passed, and are typically chained for further checking.

has_ functions always return the state that they were initially passed and are used at the ‘end’ of a chain.
Objects¶
- check_object(state, index, missing_msg=None, expand_msg=None, typestr='variable')¶ Check object existence (and equality)

Check whether an object is defined in the student’s process, and zoom in on its value in both the student and solution process to inspect quality (with has_equal_value()).

In pythonbackend, both the student’s submission and the solution code are executed, in separate processes. check_object() looks at these processes and checks if the referenced object is available in the student process. Next, you can use has_equal_value() to check whether the objects in the student and solution process correspond.

Parameters: - index (str) – the name of the object whose value has to be checked.
- missing_msg (str) – feedback message when the object is not defined in the student process.
- expand_msg (str) – If specified, this overrides any messages that are prepended by previous SCT chains.
Example: Suppose you want the student to create a variable x, equal to 15:

x = 15

The following SCT will verify this:

Ex().check_object("x").has_equal_value()

check_object() will check if the variable x is defined in the student process. has_equal_value() will check whether the value of x in the solution process is the same as in the student process.

Note that has_equal_value() only looks at the end result of a variable in the student process. In the example, how the object x came about in the student’s submission does not matter. This means that all of the following submissions will also pass the above SCT:

x = 15
x = 12 + 3
x = 3; x += 12
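The end-result behavior can be illustrated in plain Python (this is a sketch of the idea, not pythonwhat’s actual implementation): each submission is executed in a fresh namespace, and only the final value of x is compared.

```python
# Sketch: only the final value of x in the executed namespace matters.
submissions = [
    "x = 15",
    "x = 12 + 3",
    "x = 3; x += 12",
]

final_values = []
for code in submissions:
    ns = {}          # fresh 'student process'
    exec(code, ns)   # run the submission
    final_values.append(ns["x"])

# Every submission ends with the same value, so each would pass.
print(final_values)  # -> [15, 15, 15]
```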
Example: As the previous example mentioned, has_equal_value() only looks at the end result. If your exercise first initializes an object and then updates it further down the script, you can only look at the final value!

Suppose you want the student to initialize and populate a list my_list as follows:

my_list = []
for i in range(20):
    if i % 3 == 0:
        my_list.append(i)

There is no robust way to verify the initialization and population steps through my_list alone. The best SCT would look something like this:
msg = "Have you correctly initialized `my_list`?"
Ex().check_correct(
    check_object('my_list').has_equal_value(),
    multi(
        # check initialization: [] or list()
        check_or(
            has_equal_ast(code = "[]", incorrect_msg = msg),
            check_function('list')
        ),
        check_for_loop().multi(
            check_iter().has_equal_value(),
            check_body().check_if_else().multi(
                check_test().multi(
                    set_context(2).has_equal_value(),
                    set_context(3).has_equal_value()
                ),
                check_body().set_context(3).\
                    set_env(my_list = [0]).\
                    has_equal_value(name = 'my_list')
            )
        )
    )
)
check_correct() is used to robustly check whether my_list was built correctly:

- If my_list is correct, the diagnosing sub-SCTs are not run.
- If my_list is not correct, both the initialization and the population code are checked.
Example: Because checking object correctness incorrectly is such a common misconception, we’re adding another example:

import pandas as pd
df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
df['c'] = [7, 8, 9]

The following SCT would be wrong, as it does not factor in the possibility that the ‘add column c’ step could’ve been wrong:

Ex().check_correct(
    check_object('df').has_equal_value(),
    check_function('pandas.DataFrame').check_args(0).has_equal_value()
)

The following SCT would be better, as it is specific to the steps:

# verify the df = pd.DataFrame(...) step
Ex().check_correct(
    check_df('df').multi(
        check_keys('a').has_equal_value(),
        check_keys('b').has_equal_value()
    ),
    check_function('pandas.DataFrame').check_args(0).has_equal_value()
)

# verify the df['c'] = [...] step
Ex().check_df('df').check_keys('c').has_equal_value()
Example: pythonwhat compares the objects in the student and solution process with the == operator. For basic objects, this == operator is properly implemented, so that the objects can be effectively compared. For more complex objects that are produced by third-party packages, however, it’s possible that this equality operator is not implemented in a way you’d expect. Often, for these object types the == will compare the actual object instances:

# pre exercise code
class Number():
    def __init__(self, n):
        self.n = n

# solution
x = Number(1)

# sct that won't work
Ex().check_object('x').has_equal_value()

# sct
Ex().check_object('x').has_equal_value(expr_code = 'x.n')

# submissions that will pass this sct
x = Number(1)
x = Number(2 - 1)

The basic SCT like in the previous example will not work here. Notice how we used the expr_code argument to override which value has_equal_value() is checking. Instead of checking whether x corresponds between student and solution process, it’s now executing the expression x.n and seeing if the results of running this expression in both the student and solution process match.
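A plain-Python sketch of why the first SCT fails: a class that does not define __eq__ falls back to identity comparison, so two distinct instances never compare equal, even with identical contents.

```python
# Class without __eq__: == falls back to object identity.
class Number:
    def __init__(self, n):
        self.n = n

a = Number(1)
b = Number(2 - 1)

print(a == b)      # False: distinct instances, default identity comparison
print(a.n == b.n)  # True: comparing the attribute works, which is what
                   # expr_code = 'x.n' achieves in the SCT above
```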
- is_instance(state, inst, not_instance_msg=None)¶ Check whether an object is an instance of a certain class.

is_instance() can currently only be used when chained from check_object(), the function that is used to ‘zoom in’ on the object of interest.

Parameters: - inst (class) – The class that the object should have.
- not_instance_msg (str) – When specified, this overrides the automatically generated message in case the object does not have the expected class.
- state (State) – The state that is passed in through the SCT chain (don’t specify this).
Example: Student code and solution code:
import numpy as np
arr = np.array([1, 2, 3, 4, 5])
SCT:
# Verify the class of arr
import numpy
Ex().check_object('arr').is_instance(numpy.ndarray)
- check_df(state, index, missing_msg=None, not_instance_msg=None, expand_msg=None)¶ Check whether a DataFrame was defined and has the right type

check_df() is a combo of check_object() and is_instance() that checks whether the specified object exists and whether it is a pandas DataFrame.

You can continue checking the data frame with the check_keys() function to ‘zoom in’ on a particular column in the pandas DataFrame.

Parameters: - index (str) – Name of the data frame to zoom in on.
- missing_msg (str) – See check_object().
- not_instance_msg (str) – See is_instance().
- expand_msg (str) – If specified, this overrides any messages that are prepended by previous SCT chains.
Example: Suppose you want the student to create a DataFrame my_df with two columns. The column a should contain the numbers 1 to 3, while the contents of column b can be anything:

import pandas as pd
my_df = pd.DataFrame({"a": [1, 2, 3], "b": ["a", "n", "y"]})

The following SCT would robustly check that:

Ex().check_df("my_df").multi(
    check_keys("a").has_equal_value(),
    check_keys("b")
)
check_df() checks if my_df exists (check_object() behind the scenes) and is a DataFrame (is_instance()).

check_keys("a") zooms in on the column a of the data frame, and has_equal_value() checks if the columns correspond between student and solution process.

check_keys("b") zooms in on the column b of the data frame, but there’s no ‘equality checking’ happening.

The following submissions would pass the SCT above:

my_df = pd.DataFrame({"a": [1, 1 + 1, 3], "b": ["a", "l", "l"]})
my_df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]})
- check_keys(state, key, missing_msg=None, expand_msg=None)¶ Check whether an object (dict, DataFrame, etc) has a key.

check_keys() can currently only be used when chained from check_object(), the function that is used to ‘zoom in’ on the object of interest.

Parameters: - key (str) – Name of the key that the object should have.
- missing_msg (str) – When specified, this overrides the automatically generated message in case the key does not exist.
- expand_msg (str) – If specified, this overrides any messages that are prepended by previous SCT chains.
- state (State) – The state that is passed in through the SCT chain (don’t specify this).

Example: Student code and solution code:

x = {'a': 2}

SCT:

# Verify that x contains a key a
Ex().check_object('x').check_keys('a')

# Verify that x contains a key a and a is correct.
Ex().check_object('x').check_keys('a').has_equal_value()
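In plain Python, the two chains above boil down to a key-existence check followed by a value comparison. A minimal sketch (not pythonwhat internals; the feedback strings are made up here):

```python
# 'Student' and 'solution' namespaces hold the same dict.
student = {'a': 2}
solution = {'a': 2}

key = 'a'

# check_keys('a') analogue: the key must exist in the student's object.
assert key in student, f"Did you add the key `{key}`?"

# has_equal_value() analogue: the values under the key must correspond.
assert student[key] == solution[key], f"The value of `{key}` is incorrect."

print("key check passed")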
Function calls¶
- check_function(state, name, index=0, missing_msg=None, params_not_matched_msg=None, expand_msg=None, signature=True)¶ Check whether a particular function is called.

check_function() is typically followed by:

- check_args() to check whether the arguments were specified. In turn, check_args() can be followed by has_equal_value() or has_equal_ast() to assert that the arguments were correctly specified.
- has_equal_value() to check whether rerunning the function call coded by the student gives the same result as running the function call in the solution.

Checking function calls is a tricky topic. Please visit the dedicated article for more explanation, edge cases and best practices.

Parameters: - name (str) – the name of the function to be tested. When checking functions in packages, always use the ‘full path’ of the function.
- index (int) – index of the function call to be checked. Defaults to 0.
- missing_msg (str) – If specified, this overrides an automatically generated feedback message in case the student did not call the function correctly.
- params_not_matched_msg (str) – If specified, this overrides an automatically generated feedback message in case the function parameters were not successfully matched.
- expand_msg (str) – If specified, this overrides any messages that are prepended by previous SCT chains.
- signature (Signature) – Normally, check_function() can figure out what the function signature is, but it might be necessary to use sig_from_params() to manually build a signature and pass this along.
- state (State) – State object that is passed from the SCT Chain (don’t specify this).
Examples: Student code and solution code:
import numpy as np
arr = np.array([1, 2, 3, 4, 5])
np.mean(arr)
SCT:
# Verify whether arr was correctly set in np.mean
Ex().check_function('numpy.mean').check_args('a').has_equal_value()

# Verify whether np.mean(arr) produced the same result
Ex().check_function('numpy.mean').has_equal_value()
- check_args(state, name, missing_msg=None)¶ Check whether a function argument is specified.

This function can follow check_function() in an SCT chain and verifies whether an argument is specified. If you want to go on and check whether the argument was correctly specified, you can continue chaining with has_equal_value() (value-based check) or has_equal_ast() (AST-based check).

This function can also follow check_function_def() or check_lambda_function() to see if arguments have been specified.

Parameters: - name (str) – the name of the argument for which you want to check if it is specified. This can also be a number, in which case it refers to the positional arguments. Named arguments take precedence.
- missing_msg (str) – If specified, this overrides the automatically generated feedback message in case the student did not specify the argument.
- state (State) – State object that is passed from the SCT Chain (don’t specify this).
Examples: Student and solution code:
import numpy as np
arr = np.array([1, 2, 3, 4, 5])
np.mean(arr)
SCT:
# Verify whether arr was correctly set in np.mean
# has_equal_value() checks the value of arr, used to set argument a
Ex().check_function('numpy.mean').check_args('a').has_equal_value()

# Verify whether arr was correctly set in np.mean
# has_equal_ast() checks the expression used to set argument a
Ex().check_function('numpy.mean').check_args('a').has_equal_ast()
Student and solution code:
def my_power(x):
    print("calculating sqrt...")
    return(x * x)
SCT:
Ex().check_function_def('my_power').multi(
    check_args('x'),  # will fail if student used y as arg
    check_args(0)     # will still pass if student used y as arg
)
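The distinction between named and positional matching can be illustrated with the standard-library inspect module (an analogy for how argument matching can work, not pythonwhat’s actual code): a function’s signature binds both call spellings to the same parameter name.

```python
import inspect

# Hypothetical function standing in for numpy.mean's signature.
def my_mean(a, axis=None):
    ...

sig = inspect.signature(my_mean)

# Both call spellings bind to the parameter named 'a',
# which is why a check on 'a' can match either of them.
bound_pos = sig.bind(42)   # my_mean(42)
bound_kw = sig.bind(a=42)  # my_mean(a=42)

print(bound_pos.arguments["a"])  # -> 42
print(bound_kw.arguments["a"])   # -> 42
```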
Output¶
- has_output(state, text, pattern=True, no_output_msg=None)¶ Search student output for a pattern.

Besides the student and solution process and the student submission and solution code as strings, the Ex() state also contains the output that a student generated with his or her submission.

With has_output(), you can access this output and match it against a regular expression or fixed text.

Parameters: - text (str) – the text that is searched for.
- pattern (bool) – if True (default), the text is treated as a pattern. If False, it is treated as plain text.
- no_output_msg (str) – feedback message to be displayed if the output is not found.
Example: As an example, suppose we want a student to print out a sentence:

# Print the "This is some ... stuff"
print("This is some weird stuff")

The following SCT tests whether the student prints out This is some weird stuff:

# Using exact string matching
Ex().has_output("This is some weird stuff", pattern = False)

# Using a regular expression (more robust)
# pattern = True is the default
msg = "Print out ``This is some ... stuff`` to the output, " + \
      "fill in ``...`` with a word you like."
Ex().has_output(r"This is some \w* stuff", no_output_msg = msg)
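The two matching modes correspond to a plain substring test versus re.search. A minimal sketch of the difference (plain Python, not pythonwhat internals):

```python
import re

output = "This is some weird stuff"

# pattern=False analogue: exact substring search
print("This is some weird stuff" in output)  # True

# pattern=True analogue: any word in the middle passes
print(bool(re.search(r"This is some \w* stuff", "This is some cool stuff")))  # True
print(bool(re.search(r"This is some \w* stuff", "Something else")))           # False
```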
- has_printout(state, index, not_printed_msg=None, pre_code=None, name=None, copy=False)¶ Check if the right printouts happened.

has_printout() will look for the printout in the solution code that you specified with index (0 in this case), rerun the print() call in the solution process, capture its output, and verify whether the output is present in the output of the student.

This is more robust than Ex().check_function('print') initiated chains, as students can use as many printouts as they want, as long as they do the correct one somewhere.

Parameters: - index (int) – index of the print() call in the solution whose output you want to search for in the student output.
- not_printed_msg (str) – if specified, this overrides the default message that is generated when the output is not found in the student output.
- pre_code (str) – Python code as a string that is executed before running the targeted student call. This is the ideal place to set a random seed, for example.
- copy (bool) – whether to try to deep copy objects in the environment, such as lists, that could accidentally be mutated. Disabled by default, which speeds up SCTs.
- state (State) – state as passed by the SCT chain. Don’t specify this explicitly.
Example: Suppose you want somebody to print out 4:

print(1, 2, 3, 4)

The following SCT would check that:

Ex().has_printout(0)

All of the following submissions would pass:

print(1, 2, 3, 4)
print('1 2 3 4')
print(1, 2, '3 4')
print("random"); print(1, 2, 3, 4)
Example: Watch out: has_printout() will effectively rerun the print() call in the solution process after the entire solution script was executed. If your solution script updates the value of x after printing it, has_printout() will not work.

Suppose you have the following solution:

x = 4
print(x)
x = 6

The following SCT will not work:

Ex().has_printout(0)

Why? When the print(x) call is rerun, the value of x will be 6, and pythonwhat will look for the output ‘6’ in the output the student generated. In cases like these, has_printout() cannot be used.

Example: has_printout() inside a for loop

Suppose you have the following solution:

for i in range(5):
    print(i)

The following SCT will not work:

Ex().check_for_loop().check_body().has_printout(0)

The reason is that has_printout() can only be called from the root state, Ex(). If you want to check printouts done in e.g. a for loop, you have to use a check_function('print') chain instead:

Ex().check_for_loop().check_body().\
    set_context(0).check_function('print').\
    check_args(0).has_equal_value()
- has_no_error(state, incorrect_msg='Have a look at the console: your code contains an error. Fix it and try again!')¶ Check whether the submission did not generate a runtime error.

If all SCTs for an exercise pass, pythonwhat will automatically check whether the student submission generated an error before marking the submission as correct. This means it is not needed to use has_no_error() explicitly.

However, in some cases, using has_no_error() explicitly somewhere throughout your SCT execution can be helpful:

- If you want to make sure people didn’t write typos when writing a long function name.
- If you want to first verify whether a function actually runs, before checking whether the arguments were specified correctly.
- More generally, if, because of the content, it’s instrumental that the script runs without errors before doing any other verifications.

Parameters: incorrect_msg – if specified, this overrides the default message if the student code generated an error.
Example: Suppose you’re verifying an exercise about model training and validation:

# pre exercise code
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn import datasets
from sklearn import svm

iris = datasets.load_iris()
iris.data.shape, iris.target.shape

# solution
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.4, random_state=0)
If you want to make sure that train_test_split() ran without errors, which would check whether the student typed the function without typos and used sensible arguments, you could use the following SCT:

Ex().has_no_error()
Ex().check_function('sklearn.model_selection.train_test_split').multi(
    check_args(['arrays', 0]).has_equal_value(),
    check_args(['arrays', 1]).has_equal_value(),
    check_args(['options', 'test_size']).has_equal_value(),
    check_args(['options', 'random_state']).has_equal_value()
)
If, on the other hand, you want to fall back onto pythonwhat’s built-in behavior, which checks for an error before marking the exercise as correct, you can simply leave out the has_no_error() step.
Code¶

- has_code(state, text, pattern=True, not_typed_msg=None)¶ Test the student code.

Tests if the student typed a (pattern of) text. It is advised to use has_equal_ast() instead of has_code(), as it is more robust to small syntactical differences that don’t change the code’s behavior.

Parameters: - text (str) – the text that is searched for.
- pattern (bool) – if True (the default), the text is treated as a pattern. If False, it is treated as plain text.
- not_typed_msg (str) – feedback message to be displayed if the student did not type the text.
Example: Student code and solution code:

y = 1 + 2 + 3

SCT:

# Verify that student code contains pattern (not robust!!):
Ex().has_code(r"1\s*\+\s*2\s*\+\s*3")
- has_import(state, name, same_as=False, not_imported_msg='Did you import `{{pkg}}`?', incorrect_as_msg='Did you import `{{pkg}}` as `{{alias}}`?')¶ Checks whether student imported a package or function correctly.

Python features many ways to import packages. All of these different methods revolve around the import, from and as keywords. has_import() provides a robust way to check whether a student correctly imported a certain package.

By default, has_import() allows for different ways of aliasing the imported package or function. If you want to make sure the correct alias was used to refer to the package or function that was imported, set same_as=True.

Parameters: - name (str) – the name of the package that has to be checked.
- same_as (bool) – if True, the alias of the package or function has to be the same. Defaults to False.
- not_imported_msg (str) – feedback message when the package is not imported.
- incorrect_as_msg (str) – feedback message if the alias is wrong.
Example: Example 1, where aliases don’t matter (default):

# solution
import matplotlib.pyplot as plt

# sct
Ex().has_import("matplotlib.pyplot")

# passing submissions
import matplotlib.pyplot as plt
from matplotlib import pyplot as plt
import matplotlib.pyplot as pltttt

# failing submissions
import matplotlib as mpl

Example 2, where the SCT is coded so aliases do matter:

# solution
import matplotlib.pyplot as plt

# sct
Ex().has_import("matplotlib.pyplot", same_as=True)

# passing submissions
import matplotlib.pyplot as plt
from matplotlib import pyplot as plt

# failing submissions
import matplotlib.pyplot as pltttt
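One way to see why all three passing spellings above count as importing matplotlib.pyplot is to inspect the import statements with the standard-library ast module. This is an illustrative sketch of the idea, not pythonwhat’s actual implementation:

```python
import ast

def imported_names(code):
    """Map imported module path -> alias used in the submission (sketch)."""
    mapping = {}
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                mapping[alias.name] = alias.asname or alias.name
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                mapping[node.module + "." + alias.name] = alias.asname or alias.name
    return mapping

# All three spellings import matplotlib.pyplot; only the alias differs.
print(imported_names("import matplotlib.pyplot as plt"))
# -> {'matplotlib.pyplot': 'plt'}
print(imported_names("from matplotlib import pyplot as plt"))
# -> {'matplotlib.pyplot': 'plt'}
print(imported_names("import matplotlib.pyplot as pltttt"))
# -> {'matplotlib.pyplot': 'pltttt'}
```

With same_as=False only the key matters; with same_as=True the alias value must match the solution’s alias as well.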
has_equal_x¶
- has_equal_value(state, incorrect_msg=None, error_msg=None, undefined_msg=None, append=None, extra_env=None, context_vals=None, pre_code=None, expr_code=None, name=None, copy=True, func=None, override=None, *, test='value')¶ Run targeted student and solution code, and compare returned value.

When called on an SCT chain, has_equal_value() will execute the student and solution code that is ‘zoomed in on’ and compare the returned values.

Parameters: - incorrect_msg (str) – feedback message if the returned value of the expression in the solution doesn’t match the one of the student. This feedback message will be expanded if it is used in the context of another check function, like check_if_else.
- error_msg (str) – feedback message if there was an error when running the targeted student code. Note that when testing for an error, this message is displayed when none is raised.
- undefined_msg (str) – feedback message if the name argument is defined, but a variable with that name doesn’t exist after running the targeted student code.
- extra_env (dict) – set variables in the extra environment. They will update the student and solution environment in the active state before the student/solution code in the active state is run. This argument should be a dictionary whose keys are the names of the variables you want to set and whose values are the values of these variables. You can also use set_env() for this.
- context_vals (list) – set variables which are bound in a for loop to certain values. This argument is only useful when checking a for loop (or list comprehensions). It contains a list with the values of the bound variables. You can also use set_context() for this.
- pre_code (str) – the code in string form that should be executed before the expression is executed. This is the ideal place to set a random seed, for example.
- expr_code (str) – If this argument is set, the expression in the student/solution code will not be run. Instead, the given piece of code will be run in both the student and the solution environment, and the results will be compared. However, if the string contains one or more __focus__ placeholders, they will be substituted by the currently focused code.
- name (str) – If this is specified, the returned value of running this expression after running the focused expression is returned, instead of the returned value of the focused expression itself. This is typically used to inspect the value of an object after executing the body of e.g. a for loop.
- copy (bool) – whether to try to deep copy objects in the environment, such as lists, that could accidentally be mutated. Disable to speed up SCTs. Disabling may lead to cryptic mutation issues.
- func (function) – custom binary function of form f(stu_result, sol_result), for equality testing.
- override – If specified, this avoids the execution of the targeted code in the solution process. Instead, it will compare the returned value of the expression in the student process with the value specified in override. Typically used in a SingleProcessExercise or if you want to allow for different solutions other than the one coded up in the solution.
Example: Student code and solution code:

import numpy as np
arr = np.array([1, 2, 3, 4, 5])
np.mean(arr)

SCT:

# Verify equality of arr:
Ex().check_object('arr').has_equal_value()

# Verify whether arr was correctly set in np.mean
Ex().check_function('numpy.mean').check_args('a').has_equal_value()

# Verify whether np.mean(arr) produced the same result
Ex().check_function('numpy.mean').has_equal_value()
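At its core, this kind of check runs the student and solution code in separate namespaces and compares a value in both. A plain-Python sketch of the mechanics (using statistics.mean as a stand-in, since numpy may not be available):

```python
# Hedged sketch of the has_equal_value() mechanics, not pythonwhat internals.
student_code = "import statistics; result = statistics.mean([1, 2, 3, 4, 5])"
solution_code = "result = (1 + 2 + 3 + 4 + 5) / 5"

stu, sol = {}, {}          # two separate 'processes'
exec(student_code, stu)
exec(solution_code, sol)

# Analogue of Ex().check_object('result').has_equal_value():
# different code paths, but the values correspond.
print(stu["result"] == sol["result"])  # -> True
```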
- has_equal_output(state, incorrect_msg=None, error_msg=None, undefined_msg=None, append=None, extra_env=None, context_vals=None, pre_code=None, expr_code=None, name=None, copy=True, func=None, override=None, *, test='output')¶ Run targeted student and solution code, and compare output.

When called on an SCT chain, has_equal_output() will execute the student and solution code that is ‘zoomed in on’ and compare the output.

Parameters: - incorrect_msg (str) – feedback message if the output of the expression in the solution doesn’t match the one of the student. This feedback message will be expanded if it is used in the context of another check function, like check_if_else.
- error_msg (str) – feedback message if there was an error when running the targeted student code. Note that when testing for an error, this message is displayed when none is raised.
- undefined_msg (str) – feedback message if the name argument is defined, but a variable with that name doesn’t exist after running the targeted student code.
- extra_env (dict) – set variables in the extra environment. They will update the student and solution environment in the active state before the student/solution code in the active state is run. This argument should be a dictionary whose keys are the names of the variables you want to set and whose values are the values of these variables. You can also use set_env() for this.
- context_vals (list) – set variables which are bound in a for loop to certain values. This argument is only useful when checking a for loop (or list comprehensions). It contains a list with the values of the bound variables. You can also use set_context() for this.
- pre_code (str) – the code in string form that should be executed before the expression is executed. This is the ideal place to set a random seed, for example.
- expr_code (str) – If this argument is set, the expression in the student/solution code will not be run. Instead, the given piece of code will be run in both the student and the solution environment, and the results will be compared. However, if the string contains one or more __focus__ placeholders, they will be substituted by the currently focused code.
- name (str) – If this is specified, the output of running this expression after running the focused expression is returned, instead of the output of the focused expression itself. This is typically used to inspect the output of an object after executing the body of e.g. a for loop.
- copy (bool) – whether to try to deep copy objects in the environment, such as lists, that could accidentally be mutated. Disable to speed up SCTs. Disabling may lead to cryptic mutation issues.
- func (function) – custom binary function of form f(stu_result, sol_result), for equality testing.
- override – If specified, this avoids the execution of the targeted code in the solution process. Instead, it will compare the output of the expression in the student process with the value specified in override. Typically used in a SingleProcessExercise or if you want to allow for different solutions other than the one coded up in the solution.
- has_equal_error(state, incorrect_msg=None, error_msg=None, undefined_msg=None, append=None, extra_env=None, context_vals=None, pre_code=None, expr_code=None, name=None, copy=True, func=None, override=None, *, test='error')¶ Run targeted student and solution code, and compare generated errors.

When called on an SCT chain, has_equal_error() will execute the student and solution code that is ‘zoomed in on’ and compare the errors that they generate.

Parameters: - incorrect_msg (str) – feedback message if the error of the expression in the solution doesn’t match the one of the student. This feedback message will be expanded if it is used in the context of another check function, like check_if_else.
- error_msg (str) – feedback message if there was an error when running the targeted student code. Note that when testing for an error, this message is displayed when none is raised.
- undefined_msg (str) – feedback message if the name argument is defined, but a variable with that name doesn’t exist after running the targeted student code.
- extra_env (dict) – set variables in the extra environment. They will update the student and solution environment in the active state before the student/solution code in the active state is run. This argument should be a dictionary whose keys are the names of the variables you want to set and whose values are the values of these variables. You can also use set_env() for this.
- context_vals (list) – set variables which are bound in a for loop to certain values. This argument is only useful when checking a for loop (or list comprehensions). It contains a list with the values of the bound variables. You can also use set_context() for this.
- pre_code (str) – the code in string form that should be executed before the expression is executed. This is the ideal place to set a random seed, for example.
- expr_code (str) – If this argument is set, the expression in the student/solution code will not be run. Instead, the given piece of code will be run in both the student and the solution environment, and the results will be compared. However, if the string contains one or more __focus__ placeholders, they will be substituted by the currently focused code.
- name (str) – If this is specified, the error of running this expression after running the focused expression is returned, instead of the error of the focused expression itself. This is typically used to inspect the error of an object after executing the body of e.g. a for loop.
- copy (bool) – whether to try to deep copy objects in the environment, such as lists, that could accidentally be mutated. Disable to speed up SCTs. Disabling may lead to cryptic mutation issues.
- func (function) – custom binary function of form f(stu_result, sol_result), for equality testing.
- override – If specified, this avoids the execution of the targeted code in the solution process. Instead, it will compare the error of the expression in the student process with the value specified in override. Typically used in a SingleProcessExercise or if you want to allow for different solutions other than the one coded up in the solution.
- has_equal_ast(state, incorrect_msg=None, code=None, exact=True, append=None)¶ Test whether abstract syntax trees match between the student and solution code.

has_equal_ast() can be used in two ways:

- As a robust version of has_code(). By setting code, you can look for the AST representation of code in the student’s submission. But be aware that a and a = 1 won’t match, as reading and assigning are not the same in an AST. Use ast.dump(ast.parse(code)) to see an AST representation of code.
- As an expression-based check when using more advanced SCT chains, e.g. to compare the equality of expressions used to set function arguments.

Parameters: - incorrect_msg – message displayed when ASTs mismatch. When you specify code yourself, you have to specify this.
- code – optional code to use instead of the solution AST.
- exact – whether the representations must match exactly. If False, the solution AST only needs to be contained within the student AST (similar to pattern matching with has_code()). Defaults to True, unless the code argument has been specified.
Example: Student and Solution Code:

dict(a = 'value').keys()

SCT:

# all pass
Ex().has_equal_ast()
Ex().has_equal_ast(code = "dict(a = 'value').keys()")
Ex().has_equal_ast(code = "dict(a = 'value')", exact = False)

Student and Solution Code:

import numpy as np
arr = np.array([1, 2, 3, 4, 5])
np.mean(arr)

SCT:

# Check underlying value of argument a of np.mean:
Ex().check_function('numpy.mean').check_args('a').has_equal_value()

# Only check AST equality of expression used to specify argument a:
Ex().check_function('numpy.mean').check_args('a').has_equal_ast()
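The robustness of AST comparison over raw text comparison can be seen directly with the standard-library ast module (a sketch of the principle, not pythonwhat’s implementation): formatting differences disappear in the AST dump, while semantic differences do not.

```python
import ast

def same_ast(code_a, code_b):
    """Compare two snippets by their AST dumps, ignoring formatting."""
    return ast.dump(ast.parse(code_a)) == ast.dump(ast.parse(code_b))

# Whitespace around '=' doesn't change the AST:
print(same_ast("dict(a = 'value').keys()", "dict(a='value').keys()"))  # True

# Reading a name vs assigning to it are different AST nodes:
print(same_ast("a", "a = 1"))  # False
```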
Combining SCTs¶
-
multi
(state, *tests)¶ Run multiple subtests. Return original state (for chaining).
This function can be thought of as an AND statement, since all tests it runs must pass.
Parameters: - state – State instance describing student and solution code, can be omitted if used with Ex()
- tests – one or more sub-SCTs to run.
Example: The SCT below checks two has_code cases:
Ex().multi(has_code('SELECT'), has_code('WHERE'))
The SCT below uses
multi
to ‘branch out’ to check that the SELECT statement has both a WHERE and a LIMIT clause:Ex().check_node('SelectStmt', 0).multi( check_edge('where_clause'), check_edge('limit_clause') )
Example: Suppose we want to verify the following function call:
round(1.2345, ndigits=2)
The following SCT would verify this, using
multi
to ‘branch out’ the state to two sub-SCTs:Ex().check_function('round').multi( check_args(0).has_equal_value(), check_args('ndigits').has_equal_value() )
-
check_correct
(state, check, diagnose)¶ Allows feedback from a diagnostic SCT, only if a check SCT fails.
Parameters: - state – State instance describing student and solution code. Can be omitted if used with Ex().
- check – An SCT chain that must succeed.
- diagnose – An SCT chain to run if the check fails.
Example: The SCT below tests whether the student’s query result is correct before running diagnostic SCTs:
Ex().check_correct( check_result(), check_node('SelectStmt') )
Example: The SCT below tests whether an object is correct. Only if the object is not correct, will the function calling checks be executed
Ex().check_correct( check_object('x').has_equal_value(), check_function('round').check_args(0).has_equal_value() )
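The decision logic of check_correct() can be sketched in plain Python. This is a simplified model under assumed semantics (SCTs modeled as callables that raise AssertionError on failure), not pythonwhat's actual implementation:

```python
def check_correct_sketch(check, diagnose):
    """Run `check`; only if it fails, run `diagnose` for more specific feedback."""
    try:
        check()
        return "pass"                   # check succeeded: no need to diagnose
    except AssertionError as check_err:
        try:
            diagnose()
        except AssertionError as diag_err:
            return f"fail: {diag_err}"  # diagnose pinpointed the issue
        return f"fail: {check_err}"     # diagnose passed: fall back to check's message


# Hypothetical sub-SCTs for illustration:
def check():
    raise AssertionError("result is incorrect")

def diagnose():
    raise AssertionError("did you round the first argument?")

print(check_correct_sketch(check, diagnose))
# fail: did you round the first argument?
```

Note how a student who got the end result right never sees the diagnostic message, which is what makes check_correct() useful for accepting alternative solution paths.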
-
check_or
(state, *tests)¶ Test whether at least one SCT passes.
Parameters: - state – State instance describing student and solution code, can be omitted if used with Ex()
- tests – one or more sub-SCTs to run
Example: The SCT below tests that the student typed either ‘SELECT’ or ‘WHERE’ (or both):
Ex().check_or( has_code('SELECT'), has_code('WHERE') )
The SCT below checks that a SELECT statement has at least a WHERE or LIMIT clause:
Ex().check_node('SelectStmt', 0).check_or( check_edge('where_clause'), check_edge('limit_clause') )
Example: The SCT below tests that the student typed either ‘mean’ or ‘median’:
Ex().check_or( has_code('mean'), has_code('median') )
If the student didn’t type either, the feedback message generated by
has_code('mean')
, the first SCT, will be presented to the student.
-
check_not
(state, *tests, msg)¶ Run multiple subtests that should fail. If all subtests fail, returns original state (for chaining)
- This function is currently only tested in working with
has_code()
in the subtests. - This function can be thought of as a
NOT(x OR y OR ...)
statement, since all tests it runs must fail - This function can be considered a direct counterpart of multi.
Parameters: - state – State instance describing student and solution code, can be omitted if used with Ex()
- *tests – one or more sub-SCTs to run
- msg – feedback message that is shown in case not all tests specified in
*tests
fail.
Example: The SCT below runs two has_code cases:
Ex().check_not( has_code('INNER'), has_code('OUTER'), msg="Don't use `INNER` or `OUTER`!" )
If students use
INNER (JOIN)
orOUTER (JOIN)
in their code, this test will fail.Example: The SCT fails with feedback for a specific incorrect value, defined using an override:
Ex().check_object('result').multi( check_not( has_equal_value(override=100), msg='100 is incorrect for reason xyz.' ), has_equal_value() )
Notice that
check_not
comes before thehas_equal_value
test that checks if the student value is equal to the solution value.Example: The SCT below runs two
has_code
cases:Ex().check_not( has_code('mean'), has_code('median'), msg='Check your code' )
If students use
mean
ormedian
anywhere in their code, this SCT will fail.Note
- This function is not yet tested with all checks, please report unexpected behaviour.
- This function can be thought of as a NOT(x OR y OR …) statement, since all tests it runs must fail.
- This function can be considered a direct counterpart of multi.
- This function is currently only tested in working with
Function/Class/Lambda definitions¶
-
check_function_def
(state, index=0, typestr='{{ordinal}} node', missing_msg=None, expand_msg=None)¶ Check whether a function was defined and zoom in on it.
Can be chained with
check_call()
,check_args()
andcheck_body()
.Parameters: - index – the name of the function definition.
- typestr – If specified, this overrides the standard way of referring to the construct you’re zooming in on.
- missing_msg – If specified, this overrides the automatically generated feedback message in case the construct could not be found.
- expand_msg – If specified, this overrides the automatically generated feedback message that is prepended to feedback messages that are thrown further in the SCT chain.
Example: Suppose you want a student to create a function
shout_echo()
:def shout_echo(word1, echo=1): echo_word = word1 * echo shout_words = echo_word + '!!!' return shout_words
The following SCT robustly checks this:
Ex().check_function_def('shout_echo').check_correct( multi( check_call("f('hey', 3)").has_equal_value(), check_call("f('hi', 2)").has_equal_value(), check_call("f('hi')").has_equal_value() ), check_body().set_context('test', 1).multi( has_equal_value(name = 'echo_word'), has_equal_value(name = 'shout_words') ) )
Here:
check_function_def()
zooms in on the function definition ofshout_echo
in both student and solution code (and process).check_correct()
is used to- First check whether the function gives the correct result when called in different ways (through
check_call()
). - Only if these ‘function unit tests’ don’t pass,
check_correct()
will run the check_body() chain that dives deeper into the function definition body. This chain sets the context variables (word1
and echo
, the arguments of the function) to the values 'test'
and 1
respectively, again while being agnostic to the actual names of these context variables.
- First check whether the function gives the correct result when called in different ways (through
Notice how
check_correct()
is used to great effect here: why check the function definition internals if the I/O of the function works fine? Because of this construct, all the following submissions will pass the SCT:# passing submission 1 def shout_echo(w, e=1): ew = w * e return ew + '!!!' # passing submission 2 def shout_echo(a, b=1): return a * b + '!!!'
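To see why the ‘function unit tests’ accept both alternative definitions, you can compare them to the solution directly (plain Python, run outside pythonwhat):

```python
def shout_echo(word1, echo=1):   # solution
    echo_word = word1 * echo
    shout_words = echo_word + '!!!'
    return shout_words

def submission1(w, e=1):         # passing submission 1
    ew = w * e
    return ew + '!!!'

def submission2(a, b=1):         # passing submission 2
    return a * b + '!!!'

# The calls probed by check_call() give identical results for each version.
for args in [('hey', 3), ('hi', 2), ('hi',)]:
    assert submission1(*args) == shout_echo(*args)
    assert submission2(*args) == shout_echo(*args)

print(shout_echo('hey', 3))  # heyheyhey!!!
```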
Example: check_args()
is most commonly used in combination withcheck_function()
to verify the arguments of function calls, but it can also be used to verify the arguments specified in the signature of a function definition.We can extend the SCT for the previous example to explicitly verify the signature:
msg1 = "Make sure to specify 2 arguments!" msg2 = "don't specify default arg!" msg3 = "specify a default arg!" Ex().check_function_def('shout_echo').check_correct( multi( check_call("f('hey', 3)").has_equal_value(), check_call("f('hi', 2)").has_equal_value(), check_call("f('hi')").has_equal_value() ), multi( has_equal_part_len("args", unequal_msg=1), check_args(0).has_equal_part('is_default', msg=msg2), check_args('word1').has_equal_part('is_default', msg=msg2), check_args(1).\ has_equal_part('is_default', msg=msg3).has_equal_value(), check_args('echo').\ has_equal_part('is_default', msg=msg3).has_equal_value(), check_body().set_context('test', 1).multi( has_equal_value(name = 'echo_word'), has_equal_value(name = 'shout_words') ) ) )
has_equal_part_len("args")
verifies whether student and solution function definition have the same number of arguments.check_args(0)
refers to the first argument in the signature by position, and the chain checks whether the student did not specify a default as in the solution.- An alternative for the
check_args(0)
chain is to usecheck_args('word1')
to refer to the first argument. This is more restrictive, as it requires the student to use the exact same name. check_args(1)
refers to the second argument in the signature by position, and the chain checks whether the student specified a default, as in the solution, and whether the value of this default corresponds to the one in the solution.- The
check_args('echo')
chain is a more restrictive alternative for thecheck_args(1)
chain.
Notice that support for verifying arguments is not great yet:
- A lot of work is needed to verify the number of arguments and whether or not defaults are set.
- You have to specify custom messages because pythonwhat doesn’t automatically generate messages.
We are working on it!
-
has_equal_part_len
(state, name, unequal_msg)¶ Verify that a part that is zoomed in on has equal length.
Typically used in the context of
check_function_def()
Parameters: - name (str) – name of the part for which to check the length to the corresponding part in the solution.
- unequal_msg (str) – Message in case the lengths do not match.
- state (State) – state as passed by the SCT chain. Don’t specify this explicitly.
Examples: Student and solution code:
def shout(word): return word + '!!!'
SCT that checks number of arguments:
Ex().check_function_def('shout').has_equal_part_len('args', 'not enough args!')
-
check_call
(state, callstr, argstr=None, expand_msg=None)¶ When checking a function definition or lambda function, prepare has_equal_x for checking a call of the user-defined function.
Parameters: - callstr (str) – call string that specifies how the function should be called, e.g. f(1, a = 2).
check_call()
will replace f
with the function/lambda you’re targeting. - argstr (str) – If specified, this overrides the way the function call is referred to in the expand message.
- expand_msg (str) – If specified, this overrides any messages that are prepended by previous SCT chains.
- state (State) – state object that is chained from.
Example: Student and solution code:
def my_power(x): print("calculating sqrt...") return(x * x)
SCT:
Ex().check_function_def('my_power').multi( check_call("f(3)").has_equal_value(), check_call("f(3)").has_equal_output() )
- callstr (str) – call string that specifies how the function should be called, e.g. f(1, a = 2).
-
check_class_def
(state, index=0, typestr='{{ordinal}} node', missing_msg=None, expand_msg=None)¶ Check whether a class was defined and zoom in on its definition
Can be chained with
check_bases()
andcheck_body()
Parameters: - index – the name of the class definition.
- typestr – If specified, this overrides the standard way of referring to the construct you’re zooming in on.
- missing_msg – If specified, this overrides the automatically generated feedback message in case the construct could not be found.
- expand_msg – If specified, this overrides the automatically generated feedback message that is prepended to feedback messages that are thrown further in the SCT chain.
Example: Suppose you want to check whether a class was defined correctly:
class MyInt(int): def __init__(self, i): super().__init__(i + 1)
The following SCT would verify this:
Ex().check_class_def('MyInt').multi( check_bases(0).has_equal_ast(), check_body().check_function_def('__init__').multi( check_args('self'), check_args('i'), check_body().set_context(i = 2).multi( check_function('super', signature=False), check_function('super.__init__').check_args(0).has_equal_value() ) ) )
check_class_def()
looks for the class definition itself.- With
check_bases()
, you can zoom in on the different base classes that the class definition inherits from. - With
check_body()
, you zoom in on the class body, after which you can use other functions such ascheck_function_def()
to look for class methods. - Of course, just like for other examples, you can use
check_correct()
where necessary, e.g. to verify whether class methods give the right behavior withcheck_call()
before diving into the body of the method itself.
-
check_lambda_function
(state, index=0, typestr='{{ordinal}} node', missing_msg=None, expand_msg=None)¶ Check whether a lambda function was coded and zoom in on it.
Can be chained with
check_call()
,check_args()
andcheck_body()
.Parameters: - index – the index of the lambda function (0-based).
- typestr – If specified, this overrides the standard way of referring to the construct you’re zooming in on.
- missing_msg – If specified, this overrides the automatically generated feedback message in case the construct could not be found.
- expand_msg – If specified, this overrides the automatically generated feedback message that is prepended to feedback messages that are thrown further in the SCT chain.
Example: Suppose you want a student to create a lambda function that returns the length of an array times two:
lambda x: len(x)*2
The following SCT robustly checks this:
Ex().check_lambda_function().check_correct( multi( check_call("f([1])").has_equal_value(), check_call("f([1, 2])").has_equal_value() ), check_body().set_context([1, 2, 3]).has_equal_value() )
Here:
check_lambda_function()
zooms in on the first lambda function in both student and solution code.check_correct()
is used to- First check whether the lambda function gives the correct result when called in different ways (through
check_call()
). - Only if these ‘function unit tests’ don’t pass,
check_correct()
will run the check_body() chain that dives deeper into the lambda function’s body. This chain sets the context variable x, the argument of the function, to the values[1, 2, 3]
, while being agnostic to the actual name the student used for this context variable.
- First check whether the lambda function gives the correct result when called in different ways (through
Notice how
check_correct()
is used to great effect here: why check the function definition internals if the I/O of the function works fine? Because of this construct, all the following submissions will pass the SCT:# passing submission 1 lambda x: len(x) + len(x) # passing submission 2 lambda y, times=2: len(y) * times
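As with the function-definition example, you can verify by hand why both alternative lambdas pass the ‘function unit tests’ (plain Python, outside pythonwhat):

```python
solution = lambda x: len(x) * 2

submission1 = lambda x: len(x) + len(x)       # passing submission 1
submission2 = lambda y, times=2: len(y) * times  # passing submission 2

# The calls probed by check_call() return the same value for every version.
for arg in [[1], [1, 2], [1, 2, 3]]:
    assert submission1(arg) == solution(arg)
    assert submission2(arg) == solution(arg)

print(solution([1, 2, 3]))  # 6
```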
Control flow¶
-
check_if_else
(state, index=0, typestr='{{ordinal}} node', missing_msg=None, expand_msg=None)¶ Check whether an if statement was coded and zoom in on it.
Parameters: - index – the index of the if statement to look for (0 based)
- typestr – If specified, this overrides the standard way of referring to the construct you’re zooming in on.
- missing_msg – If specified, this overrides the automatically generated feedback message in case the construct could not be found.
- expand_msg – If specified, this overrides the automatically generated feedback message that is prepended to feedback messages that are thrown further in the SCT chain.
Example: Suppose you want students to print out a message if
x
is larger than 0:x = 4 if x > 0: print("x is strictly positive")
The following SCT would verify that:
Ex().check_if_else().multi( check_test().multi( set_env(x = -1).has_equal_value(), set_env(x = 1).has_equal_value(), set_env(x = 0).has_equal_value() ), check_body().check_function('print', 0).\ check_args('value').has_equal_value() )
check_if_else()
zooms in on the first if statement in the student and solution submission.check_test()
zooms in on the ‘test’ portion of the if statement,x > 0
in case of the solution.has_equal_value()
reruns this expression and the corresponding expression in the student code for different values ofx
(set withset_env()
) and compares their results. This way, you can robustly verify whether the if test was coded up correctly. If the student codes up the condition as 0 < x
, this would also be accepted.check_body()
zooms in on the ‘body’ portion of the if statement,print("...")
in case of the solution. With a classicalcheck_function()
chain, it is verified whether the if statement contains a functionprint()
and whether its argument is set correctly.
Example: In Python, when an if-else statement has an
elif
clause, it is held in the orelse part. In this sense, an if-elif-else statement is represented by Python as nested if-elses. More specifically, this if-else statement:if x > 0: print(x) elif y > 0: print(y) else: print('none')
Is syntactically equivalent to:
if x > 0: print(x) else: if y > 0: print(y) else: print('none')
The second representation has to be followed when writing the corresponding SCT:
Ex().check_if_else().multi( check_test(), # zoom in on x > 0 check_body(), # zoom in on print(x) check_orelse().check_if_else().multi( check_test(), # zoom in on y > 0 check_body(), # zoom in on print(y) check_orelse() # zoom in on print('none') ) )
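You can confirm this nesting with Python's built-in ast module (a standalone illustration, not an SCT):

```python
import ast

code = """
if x > 0:
    print(x)
elif y > 0:
    print(y)
else:
    print('none')
"""

outer_if = ast.parse(code).body[0]

# The elif clause is stored as a single nested ast.If in the orelse list,
# which is why the SCT reaches it via check_orelse().check_if_else().
assert isinstance(outer_if, ast.If)
assert len(outer_if.orelse) == 1
assert isinstance(outer_if.orelse[0], ast.If)
```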
-
check_try_except
(state, index=0, typestr='{{ordinal}} node', missing_msg=None, expand_msg=None)¶ Check whether a try except statement was coded and zoom in on it.
Can be chained with
check_body()
,check_handlers()
,check_orelse()
andcheck_finalbody()
.Parameters: - index – the index of the try except statement (0-based).
- typestr – If specified, this overrides the standard way of referring to the construct you’re zooming in on.
- missing_msg – If specified, this overrides the automatically generated feedback message in case the construct could not be found.
- expand_msg – If specified, this overrides the automatically generated feedback message that is prepended to feedback messages that are thrown further in the SCT chain.
Example: Suppose you want to verify whether the student did a try-except statement properly:
do_dangerous_thing = lambda n: n try: x = do_dangerous_thing(n = 4) except ValueError as e: x = 'something wrong with inputs' except: x = 'something went wrong' finally: print('ciao!')
The following SCT can be used to verify this:
Ex().check_try_except().multi( check_body().\ check_function('do_dangerous_thing').\ check_args('n').has_equal_value(), check_handlers('ValueError').\ has_equal_value(name = 'x'), check_handlers('all').\ has_equal_value(name = 'x'), check_finalbody().\ check_function('print').check_args(0).has_equal_value() )
-
check_if_exp
(state, index=0, typestr='{{ordinal}} node', missing_msg=None, expand_msg=None)¶ Check whether an if expression was coded and zoom in on it.
This function works the exact same way as
check_if_else()
.
-
check_with
(state, index=0, typestr='{{ordinal}} node', missing_msg=None, expand_msg=None)¶ Check whether a with statement was coded and zoom in on it.
Parameters: - index – the index of the with statement to verify (0-based)
- typestr – If specified, this overrides the standard way of referring to the construct you’re zooming in on.
- missing_msg – If specified, this overrides the automatically generated feedback message in case the construct could not be found.
- expand_msg – If specified, this overrides the automatically generated feedback message that is prepended to feedback messages that are thrown further in the SCT chain.
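No example is given for this function. For orientation, check_with(0) targets the first with statement in the submission, which corresponds to the first ast.With node in the parsed code. A standalone sketch with the stdlib ast module, where the open() snippet is a hypothetical student submission:

```python
import ast

student_code = """
with open('moby_dick.txt') as file:
    print(file.read())
"""

tree = ast.parse(student_code)

# check_with(0) would zoom in on this node; check_body() on its body,
# after which check_function('print') could inspect the print() call.
with_node = tree.body[0]
assert isinstance(with_node, ast.With)
assert isinstance(with_node.body[0].value, ast.Call)  # the print(...) call
```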
Loops¶
-
check_for_loop
(state, index=0, typestr='{{ordinal}} node', missing_msg=None, expand_msg=None)¶ Check whether a for loop was coded and zoom in on it.
Can be chained with
check_iter()
andcheck_body()
.Parameters: - index – Index of the for loop (0-based).
- typestr – If specified, this overrides the standard way of referring to the construct you’re zooming in on.
- missing_msg – If specified, this overrides the automatically generated feedback message in case the construct could not be found.
- expand_msg – If specified, this overrides the automatically generated feedback message that is prepended to feedback messages that are thrown further in the SCT chain.
Example: Suppose you want a student to iterate over a predefined dictionary
my_dict
and do the appropriate printouts:for key, value in my_dict.items(): print(key + " - " + str(value))
The following SCT would verify this:
Ex().check_for_loop().multi( check_iter().has_equal_value(), check_body().multi( set_context('a', 1).has_equal_output(), set_context('b', 2).has_equal_output() ) )
check_for_loop()
zooms in on thefor
loop, and makes its parts available for further checking.check_iter()
zooms in on the iterator part of the for loop,my_dict.items()
in the solution.has_equal_value()
re-executes the expressions specified by student and solution and compares their results.check_body()
zooms in on the body part of the for loop,print(key + " - " + str(value))
. For different values ofkey
andvalue
, the student’s body and solution’s body are executed again and the printouts are captured and compared to see if they are equal.
Notice how you do not need to specify the variables by name in
set_context()
. pythonwhat can figure out the variable names used in both student and solution code, and can do the verification independent of that. That way, we can make the SCT robust against submissions that code the correct logic, but use different names for the context values. In other words, the following student submissions would also pass the SCT:# passing submission 1 my_dict = {'a': 1, 'b': 2} for k, v in my_dict.items(): print(k + " - " + str(v)) # passing submission 2 my_dict = {'a': 1, 'b': 2} for first, second in my_dict.items(): mess = first + " - " + str(second) print(mess)
Example: As another example, suppose you want the student to build a list of doubles as follows:
even = [] for i in range(10): even.append(2*i)
The following SCT would robustly verify this:
Ex().check_correct( check_object('even').has_equal_value(), check_for_loop().multi( check_iter().has_equal_value(), check_body().set_context(2).set_env(even = []).\ has_equal_value(name = 'even') ) )
check_correct()
makes sure that we do not dive into thefor
loop if the list even
is correctly populated in the end.- If
even
was not correctly populated,check_for_loop()
will zoom in on the for loop. - The
check_iter()
chain verifies whether range(10) (or something equivalent) was used to iterate over. check_body()
zooms in on the body, and reruns the body (even.append(2*i)
in the solution) fori
equal to 2, and even temporarily set to an empty list. Notice how we use set_context()
to robustly set the context value (the student can use a different variable name), while we have to explicitly seteven
withset_env()
. Also notice how we usehas_equal_value(name = 'even')
instead of the usualcheck_object()
;check_object()
can only be called from the root stateEx()
.
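Running the solution code in plain Python confirms both the end state that the ‘check’ chain of check_correct() compares, and what rerunning the body with set_context(2) and set_env(even = []) amounts to:

```python
even = []
for i in range(10):
    even.append(2 * i)

# check_object('even').has_equal_value() compares this final value.
assert even == [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

# Rerunning just the body with the context value 2 and even reset to [],
# as set_context(2).set_env(even = []) does:
even_rerun = []
even_rerun.append(2 * 2)
assert even_rerun == [4]
```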
Example: As a follow-up example, suppose you want the student to build a list of doubles of the even numbers only:
even = [] for i in range(10): if i % 2 == 0: even.append(2*i)
The following SCT would robustly verify this:
Ex().check_correct( check_object('even').has_equal_value(), check_for_loop().multi( check_iter().has_equal_value(), check_body().check_if_else().multi( check_test().multi( set_context(1).has_equal_value(), set_context(2).has_equal_value() ), check_body().set_context(2).\ set_env(even = []).has_equal_value(name = 'even') ) ) )
-
check_while
(state, index=0, typestr='{{ordinal}} node', missing_msg=None, expand_msg=None)¶ Check whether a while loop was coded and zoom in on it.
Can be chained with
check_test()
,check_body()
andcheck_orelse()
.Parameters: - index – the index of the while loop to verify (0-based).
- typestr – If specified, this overrides the standard way of referring to the construct you’re zooming in on.
- missing_msg – If specified, this overrides the automatically generated feedback message in case the construct could not be found.
- expand_msg – If specified, this overrides the automatically generated feedback message that is prepended to feedback messages that are thrown further in the SCT chain.
Example: Suppose you want a student to code a while loop that counts down a counter from 50 until a multiple of 11 is found. If it is found, the value should be printed out.
i = 50 while i % 11 != 0: i -= 1
The following SCT robustly verifies this:
Ex().check_correct( check_object('i').has_equal_value(), check_while().multi( check_test().multi( set_env(i = 45).has_equal_value(), set_env(i = 44).has_equal_value() ), check_body().set_env(i = 3).has_equal_value(name = 'i') ) )
check_correct()
first checks whether the end result ofi
is correct. If it is, the entire chain that checks thewhile
loop is skipped.- If
i
is not correctly calculated, check_while()
zooms in on the while loop. check_test()
zooms in on the condition of thewhile
loop,i % 11 != 0
in the solution, and verifies whether the expression gives the same results for different values ofi
, set throughset_env()
, when comparing student and solution.check_body()
zooms in on the body of thewhile
loop, andhas_equal_value()
checks whether rerunning this body updatesi
as expected wheni
is temporarily set to 3 withset_env()
.
-
check_list_comp
(state, index=0, typestr='{{ordinal}} node', missing_msg=None, expand_msg=None)¶ Check whether a list comprehension was coded and zoom in on it.
Can be chained with
check_iter()
,check_body()
, andcheck_ifs()
.Parameters: - index – Index of the list comprehension (0-based)
- typestr – If specified, this overrides the standard way of referring to the construct you’re zooming in on.
- missing_msg – If specified, this overrides the automatically generated feedback message in case the construct could not be found.
- expand_msg – If specified, this overrides the automatically generated feedback message that is prepended to feedback messages that are thrown further in the SCT chain.
Example: Suppose you expect students to create a list
my_list
as follows:my_list = [ i*2 for i in range(0,10) if i>2 ]
The following SCT would robustly verify this:
Ex().check_correct( check_object('my_list').has_equal_value(), check_list_comp().multi( check_iter().has_equal_value(), check_body().set_context(4).has_equal_value(), check_ifs(0).multi( set_context(0).has_equal_value(), set_context(3).has_equal_value(), set_context(5).has_equal_value() ) ) )
- With
check_correct()
, we’re making sure that the list comprehension checking is not executed ifmy_list
was calculated properly. - If
my_list
is not correct, the ‘diagnose’ chain will run:check_list_comp()
looks for the first list comprehension in the student’s submission. - Next,
check_iter()
zooms in on the iterator,range(0, 10)
in the case of the solution.has_equal_value()
verifies whether the expression that the student used evaluates to the same value as the expression that the solution used. check_body()
zooms in on the body,i*2
in the case of the solution.set_context()
sets the iterator to 4, allowing for the fact that the student used another name instead ofi
for this iterator.has_equal_value()
reruns the body in the student and solution code with the iterator set to 4, and checks if the results are the same.check_ifs(0)
zooms in on the firstif
of the list comprehension,i>2
in case of the solution. With a series ofset_context()
andhas_equal_value()
, it is verified whether this condition evaluates to the same value in student and solution code for different values of the iterator (i in the case of the solution, whatever name the student used).
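Evaluating the comprehension and its parts in plain Python shows the values the SCT compares:

```python
my_list = [i * 2 for i in range(0, 10) if i > 2]

# check_object('my_list').has_equal_value() compares this end result.
assert my_list == [6, 8, 10, 12, 14, 16, 18]

# The body (i*2) evaluated with the iterator set to 4, as the SCT probes:
assert 4 * 2 == 8

# The if condition (i > 2) evaluated at the probed context values 0, 3, 5:
assert [i > 2 for i in (0, 3, 5)] == [False, True, True]
```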
-
check_dict_comp
(state, index=0, typestr='{{ordinal}} node', missing_msg=None, expand_msg=None)¶ Check whether a dictionary comprehension was coded and zoom in on it.
Can be chained with
check_key()
,check_value()
, andcheck_ifs()
.Parameters: - index – Index of the dictionary comprehension (0-based)
- typestr – If specified, this overrides the standard way of referring to the construct you’re zooming in on.
- missing_msg – If specified, this overrides the automatically generated feedback message in case the construct could not be found.
- expand_msg – If specified, this overrides the automatically generated feedback message that is prepended to feedback messages that are thrown further in the SCT chain.
Example: Suppose you expect students to create a dictionary
my_dict
as follows:my_dict = { m:len(m) for m in ['a', 'ab', 'abc'] }
The following SCT would robustly verify this:
Ex().check_correct( check_object('my_dict').has_equal_value(), check_dict_comp().multi( check_iter().has_equal_value(), check_key().set_context('ab').has_equal_value(), check_value().set_context('ab').has_equal_value() ) )
- With
check_correct()
, we’re making sure that the dictionary comprehension checking is not executed ifmy_dict
was created properly. - If
my_dict
is not correct, the ‘diagnose’ chain will run:check_dict_comp()
looks for the first dictionary comprehension in the student’s submission. - Next,
check_iter()
zooms in on the iterator,['a', 'ab', 'abc']
in the case of the solution.has_equal_value()
verifies whether the expression that the student used evaluates to the same value as the expression that the solution used. check_key()
zooms in on the key of the comprehension,m
in the case of the solution.set_context()
temporarily sets the iterator to 'ab'
, allowing for the fact that the student used another name instead ofm
for this iterator.has_equal_value()
reruns the key expression in the student and solution code with the iterator set to'ab'
, and checks if the results are the same.check_value()
zooms in on the value of the comprehension,len(m)
in the case of the solution.has_equal_value()
reruns the value expression in the student and solution code with the iterator set to'ab'
, and checks if the results are the same.
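Again, evaluating the comprehension and its key and value expressions in plain Python shows what the SCT compares:

```python
my_dict = {m: len(m) for m in ['a', 'ab', 'abc']}

# check_object('my_dict').has_equal_value() compares this end result.
assert my_dict == {'a': 1, 'ab': 2, 'abc': 3}

# Key and value expressions with the iterator set to 'ab', as the SCT probes:
m = 'ab'
assert m == 'ab'      # key expression
assert len(m) == 2    # value expression
```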
-
check_generator_exp
(state, index=0, typestr='{{ordinal}} node', missing_msg=None, expand_msg=None)¶ Check whether a generator expression was coded and zoom in on it.
Can be chained with
check_iter()
,check_body()
, andcheck_ifs()
.Parameters: - index – Index of the generator expression (0-based)
- typestr – If specified, this overrides the standard way of referring to the construct you’re zooming in on.
- missing_msg – If specified, this overrides the automatically generated feedback message in case the construct could not be found.
- expand_msg – If specified, this overrides the automatically generated feedback message that is prepended to feedback messages that are thrown further in the SCT chain.
Example: Suppose you expect students to create a generator
my_gen
as follows:my_gen = ( i*2 for i in range(0,10) )
The following SCT would robustly verify this:
Ex().check_correct( check_object('my_gen').has_equal_value(), check_generator_exp().multi( check_iter().has_equal_value(), check_body().set_context(4).has_equal_value() ) )
Have a look at
check_list_comp
to understand what’s going on; it is very similar.
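One difference worth noting in plain Python: a generator is lazy, so comparing my_gen itself is not enough; materializing it yields the values under comparison:

```python
my_gen = (i * 2 for i in range(0, 10))

# Materializing the generator shows the sequence being compared.
values = list(my_gen)
assert values == [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]

# The body with the context value 4, as checked by set_context(4):
assert 4 * 2 == 8
```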
State management¶
-
override
(state, solution)¶ Override the solution code with something arbitrary.
There might be cases in which you want to temporarily override the solution code so you can allow for alternative ways of solving an exercise. When you use
override()
in an SCT chain, the remainder of that SCT chain will run as if the solution code you specified is the only code that was in the solution.Check the glossary for an example (pandas plotting)
Parameters: - solution – solution code as a string that overrides the original solution code.
- state – State instance describing student and solution code. Can be omitted if used with Ex().
-
disable_highlighting
(state)¶ Disable highlighting in the remainder of the SCT chain.
Include this function if you want to prevent pythonwhat from marking which part of the student submission is incorrect.
Examples: SCT that will mark the ‘number’ portion if it is incorrect:
Ex().check_function('round').check_args(0).has_equal_ast()
SCT chains that will not mark certain mistakes. The earlier you put the function, the more types of mistakes will no longer be highlighted:
Ex().disable_highlighting().check_function('round').check_args(0).has_equal_ast() Ex().check_function('round').disable_highlighting().check_args(0).has_equal_ast() Ex().check_function('round').check_args(0).disable_highlighting().has_equal_ast()
-
set_context
(state, *args, **kwargs)¶ Update context values for student and solution environments.
When
has_equal_x()
is used after this, the context values (infor
loops and function definitions, for example) will have the values specified through this function. It is the function equivalent of the context_vals
argument of thehas_equal_x()
functions.- Note 1: excess args and unmatched kwargs will be unused in the student environment.
- Note 2: When you try to set context values that don’t match any target variables in the solution code,
set_context()
raises an exception that lists the ones available. - Note 3: positional arguments are more robust to the student using different names for context values.
- Note 4: You have to specify arguments either by position or by name; a combination is not possible.
Example: Solution code:
total = 0
for i in range(10):
    print(i ** 2)
Student submission that will pass (different iterator, different calculation):
total = 0
for j in range(10):
    print(j * j)
SCT:
# set_context is robust against different names of context values.
Ex().check_for_loop().check_body().multi(
    set_context(1).has_equal_output(),
    set_context(2).has_equal_output(),
    set_context(3).has_equal_output()
)

# equivalent SCT, by setting context_vals in has_equal_output()
Ex().check_for_loop().check_body().\
    multi([s.has_equal_output(context_vals=[i]) for i in range(1, 4)])
-
set_env
(state, **kwargs)¶ Update/set environment variables for student and solution environments.
When
has_equal_x()
is used after this, the variables specified through this function will be available in the student and solution processes. Note that you will not see these variables in the student process of the state produced by this function: the values are saved on the state and are only added to the student and solution processes when has_equal_ast()
is called.
Example: Student and Solution Code:
a = 1
if a > 4:
    print('pretty large')
SCT:
# check if condition works with different values of a
Ex().check_if_else().check_test().multi(
    set_env(a = 3).has_equal_value(),
    set_env(a = 4).has_equal_value(),
    set_env(a = 5).has_equal_value()
)

# equivalent SCT, by setting extra_env in has_equal_value()
Ex().check_if_else().check_test().\
    multi([has_equal_value(extra_env={'a': i}) for i in range(3, 6)])
Checking files¶
-
check_file
(state: protowhat.State.State, path, missing_msg='Did you create the file `{}`?', is_dir_msg='Want to check the file `{}`, but found a directory.', parse=True, solution_code=None)¶ Test whether a file exists, and make its contents the student code.
Parameters: - state – State instance describing student and solution code. Can be omitted if used with Ex().
- path – expected location of the file
- missing_msg – feedback message if no file is found in the expected location
- is_dir_msg – feedback message if the path is a directory instead of a file
- parse – If
True
(the default) the content of the file is interpreted as code in the main exercise technology. This enables more checks on the content of the file. - solution_code – this argument can be used to pass the expected code for the file so it can be used by subsequent checks.
Note
This SCT fails if the file is a directory.
Example: To check if a user created the file
my_output.txt
in the subdirectory resources
of the directory where the exercise is run, use this SCT:
Ex().check_file("resources/my_output.txt", parse=False)
-
has_dir
(state: protowhat.State.State, path, msg='Did you create a directory `{}`?')¶ Test whether a directory exists.
Parameters: - state – State instance describing student and solution code. Can be omitted if used with Ex().
- path – expected location of the directory
- msg – feedback message if no directory is found in the expected location
Example: To check if a user created the subdirectory
resources
in the directory where the exercise is run, use this SCT:
Ex().has_dir("resources")
-
run
(state, relative_working_dir=None, solution_dir='../solution', run_solution=True)¶ Run the focused student and solution code in the specified location
This function can be used after
check_file
to execute student and solution code. The arguments allow configuring the correct context for execution.
SCT functions chained after this one that execute pieces of code (custom expressions or the focused part of a file) execute in the same student and solution locations as the file.
Note
This function does not execute the file itself, but code in memory. This can have an impact when:
- the solution code imports from a different file in the expected solution (code that is not installed)
- using functionality depending on e.g.
__file__
andinspect
When the expected code has imports from a different file that is part of the exercise, it can only work if the solution code provided earlier does not have these imports but instead has all that functionality inlined.
Parameters: - relative_working_dir (str) – if specified, this relative path is the subdirectory inside the student and solution context in which the code is executed
- solution_dir (str) – a relative path,
solution
by default, that sets the root of the solution context, relative to that of the student execution context - state (State) – state as passed by the SCT chain. Don’t specify this explicitly.
If
relative_working_dir
is not set, it defaults to the directory the file was loaded from by check_file
and falls back to the root of the student execution context (the working directory pythonwhat runs in). The
solution_dir
helps to prevent solution side effects from conflicting with those of the student. If the set or derived value of relative_working_dir
is an absolute path, relative_working_dir
will not be used to form the solution execution working directory: the solution code will be executed in the root of the solution execution context.
Example: Suppose the student and solution have a file
script.py
in /home/repl/:
if True:
    a = 1
print("Hi!")
We can check it with this SCT (with
file_content
containing the expected file content):
Ex().check_file(
    "script.py",
    solution_code=file_content
).run().multi(
    check_object("a").has_equal_value(),
    has_printout(0)
)
Bash history checks¶
-
get_bash_history
(full_history=False, bash_history_path=None)¶ Get the commands in the bash history
Parameters: - full_history (bool) – if true, returns all commands in the bash history, else only return the commands executed after the last bash history info update
- bash_history_path (str | Path) – path to the bash history file
Returns: a list of commands (empty if the file is not found)
Import using
from protowhat.checks import get_bash_history
.
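To make the "only newer commands" behaviour concrete, here is a minimal sketch of what retrieving the history amounts to, assuming an earlier info update stored a command count (sketch_get_bash_history is a hypothetical illustration, not protowhat's implementation):

```python
from pathlib import Path

def sketch_get_bash_history(bash_history_path, last_count=0, full_history=False):
    # Hypothetical simplification: read every command in the history file
    # and, unless full_history is requested, drop the commands already
    # counted at the last bash history info update (last_count of them).
    try:
        commands = Path(bash_history_path).read_text().splitlines()
    except FileNotFoundError:
        return []  # mirrors get_bash_history: empty list if the file is missing
    return commands if full_history else commands[last_count:]
```

With last_count=1 and a history of ls, cd build, make, the sketch returns only cd build and make, matching the documented default of returning commands executed after the last info update.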
-
has_command
(state, pattern, msg, fixed=False, commands=None)¶ Test whether the bash history has a command matching the pattern
Parameters: - state – State instance describing student and solution code. Can be omitted if used with Ex().
- pattern – text that command must contain (can be a regex pattern or a simple string)
- msg – feedback message if no matching command is found
- fixed – whether to match text exactly, rather than using regular expressions
- commands – the bash history commands to check against.
By default this will be all commands since the last bash history info update.
Otherwise pass a list of commands to search through, created by calling the helper function
get_bash_history()
.
Note
The helper function
update_bash_history_info(bash_history_path=None)
needs to be called in the pre-exercise code in exercise types that don't have built-in support for bash history features.
Note
If the bash history info is updated every time code is submitted (by using
update_bash_history_info()
in the pre-exercise code), it’s advised to only use this function as the second part of acheck_correct()
to help students debug the command they haven’t correctly run yet. Look at the examples to see what could go wrong.If bash history info is only updated at the start of an exercise, this can be used everywhere as the (cumulative) commands from all submissions are known.
Example: The goal of an exercise is to use
man
.If the exercise doesn’t have built-in support for bash history SCTs, update the bash history info in the pre-exercise code:
update_bash_history_info()
In the SCT, check whether a command with
man
was used:
Ex().has_command("^man\s", "Your command should start with ``man ...``.")
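Since has_command() uses regular-expression matching by default (fixed=False), the pattern semantics can be explored with Python's standard re module. This standalone snippet assumes the pattern is applied with re.search against each history command:

```python
import re

# Anchor at the start of the command and require whitespace after 'man',
# so 'man ls' matches but 'woman ls' and a bare 'man' do not.
pattern = r"^man\s"

assert re.search(pattern, "man ls")
assert not re.search(pattern, "woman ls")
assert not re.search(pattern, "man")
```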
Example: The goal of an exercise is to use
touch
to create two files. In the pre-exercise code, put:
update_bash_history_info()
This SCT can cause problems:
Ex().has_command("touch.*file1", "Use `touch` to create `file1`")
Ex().has_command("touch.*file2", "Use `touch` to create `file2`")
If a student submits after running
touch file0 && touch file1
in the console, they will get feedback to create file2
. If they submit again after running touch file2
in the console, they will get feedback to create file1
, since the SCT only has access to commands after the last bash history info update (only the second command in this case). The SCT will only pass if they execute all required commands in a single submission.
A better SCT in this situation checks the outcome first and checks the command to help the student achieve it:
Ex().check_correct(
    check_file('file1', parse=False),
    has_command("touch.*file1", "Use `touch` to create `file1`")
)
Ex().check_correct(
    check_file('file2', parse=False),
    has_command("touch.*file2", "Use `touch` to create `file2`")
)
-
prepare_validation
(state: protowhat.State.State, commands: List[str], bash_history_path: Optional[str] = None) → protowhat.State.State¶ Let the exercise validation know what shell commands are required to complete the exercise
Import using
from protowhat.checks import prepare_validation
.
Parameters: - state – State instance describing student and solution code. Can be omitted if used with Ex().
- commands – List of strings that a student is expected to execute
- bash_history_path (str | Path) – path to the bash history file
Example: The goal of an exercise is to run a build and check the output.
At the start of the SCT, put:
Ex().prepare_validation(["make", "cd build", "ls"])
Further down you can now use
has_command
.
-
update_bash_history_info
(bash_history_path=None)¶ Store the current number of commands in the bash history
get_bash_history
can use this info later to get only newer commands. Depending on the desired behaviour, this function should be called at the start of the exercise or every time the exercise is submitted.
Import using
from protowhat.checks import update_bash_history_info
.
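Conceptually, the stored info is just a command count that a later read can skip past. A minimal sketch under that assumption (the helper names and the plain-text storage format are hypothetical, not protowhat's actual mechanism):

```python
from pathlib import Path

def sketch_update_bash_history_info(bash_history_path, info_path):
    # Hypothetical simplification: record how many commands exist right now,
    # so that a later read can return only commands added afterwards.
    try:
        count = len(Path(bash_history_path).read_text().splitlines())
    except FileNotFoundError:
        count = 0
    Path(info_path).write_text(str(count))

def sketch_newer_commands(bash_history_path, info_path):
    # Return only the commands executed since the last info update.
    commands = Path(bash_history_path).read_text().splitlines()
    seen = int(Path(info_path).read_text())
    return commands[seen:]
```

Calling the update at exercise start yields cumulative behaviour across submissions; calling it on every submission restricts checks to the latest submission's commands, matching the trade-off described in the has_command notes above.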
Electives¶
-
has_chosen
(state, correct, msgs)¶ Test a multiple choice exercise.
Test for a MultipleChoiceExercise. The correct answer (as an integer) and feedback messages are passed to this function.
Parameters: - correct (int) – the index of the correct answer (should be an instruction). Starts at 1.
- msgs (list(str)) – a list containing all feedback messages belonging to each choice of the student. The list should have the same length as the number of options.
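For instance, an SCT for a three-option exercise where the second option is correct could look as follows (the feedback strings are illustrative):

```python
Ex().has_chosen(
    correct=2,
    msgs=[
        "Incorrect, have another look at the first option.",
        "Correct! Well done.",
        "Not quite; reconsider why the third option fails."
    ]
)
```

A student picking option 1 or 3 sees the corresponding message; picking option 2 passes the exercise.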
-
success_msg
(message)¶ Set the success message of the SCT. This message will be the feedback if all tests pass.
Parameters: - message (str) – A string containing the feedback message.
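For example, to replace the default congratulation shown on a passing submission:

```python
Ex().success_msg("Well done! You correctly initialized the variable.")
```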
-
allow_errors
(state)¶ Allow running the student code to generate errors.
This has to be used only once per code execution or per xwhat library used. In most exercises, that means using it just once.
Example: The following SCT allows the student code to generate errors:
Ex().allow_errors()
-
fail
(state, msg='fail')¶ Always fails the SCT, with an optional msg.
This function takes a single argument,
msg
, that is the feedback given to the student. Note that this would be a terrible idea for grading submissions, but may be useful while writing SCTs. For example, failing a test will highlight the code as if the previous test/check had failed.
Example: As a trivial SCT example,
Ex().check_for_loop().check_body().fail()
This can also be helpful for debugging SCTs, as it can be used to stop testing at a given point.