[TA] JOJ3 Configuration
张泊明518370910136 edited this page 2025-05-14 05:37:52 +08:00

JOJ3 configuration documentation for TAs

Goals: simple configuration files that can easily be edited manually by TAs. These files are then parsed by the joj-config-generation script to generate the final internal JSON configuration files used by JOJ3.

Levels:

  • repository: global repository configuration
  • assignment: eg. homework or project
  • task: eg. exercise or milestone

A task is composed of stages, which are composed of one or more steps, eg. in the stage "online-judge" each test-case can be viewed as a step.

Background

Brief introduction to the structure of JOJ configuration and how it impacts students' submissions.

Configuration repository

All configuration is done through a course-joj repository. TAs connect to the repository, read the Readme.md, and follow the instructions to set up JOJ3 for a course. If previous TAs have already prepared JOJ3, then only testing is required. Otherwise more setup is needed.

The repository follows the server structure, so that deployment can be done easily using a simple script. JOJ configuration files are located under home/tt/.config/joj. Each folder should correspond to a different assignment. This is where .toml files have to be pushed in order to generate the corresponding JOJ3 conf.json files.

Commit messages

When using JOJ3, commit messages must follow the conventional commits format: type(scope): message, where the scope is used to track the exact task being worked on.

Basic rules:

  • health check is always run first
  • all non-JOJ stages are run on each push
  • JOJ is only triggered if joj appears in the message

Scope usage:

  • scope must match JOJ configuration tree (home/tt/.config/joj subtree in course-joj repo), eg. if tree is h1/ex2 the scope must be h1/ex2, if tree is hw4/1 then scope is also hw4/1.
  • if a scope is invalid then return a warning containing the list of valid scopes and exit nicely (either this is a typo, or this is an exercise that does not require coding)
  • as the health check must always be run, the scope should be checked after it
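For instance, assuming a configuration tree containing h1/ex2, the following hypothetical commit messages illustrate the format (types and message texts are made up):

```text
feat(h1/ex2): implement exercise 2       # valid scope, runs the non-JOJ stages
fix(h1/ex2): fix edge cases, joj         # "joj" in the message also triggers JOJ
```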

Refer to Introduction to JOJ3 for more details on JOJ3 usage.

TOML configuration format

All configuration files at human level must be written in TOML format. They will then be parsed to generate the long and complete .json files that can be understood by JOJ3. Refer to the TOML reference guide. After writing a configuration file it is recommended to check its validity, eg. with TOML Lint.

Converting the file into JSON format can help better visualize the structure, which is especially helpful when working with arrays of tables.
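For instance, a TOML array of tables maps to a JSON array of objects; a minimal sketch (the stage names are hypothetical):

```toml
# TOML array of tables...
[[stages]]
name = "compile"

[[stages]]
name = "judge-base"

# ...corresponds to the JSON:
# { "stages": [ { "name": "compile" }, { "name": "judge-base" } ] }
```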

All JOJ3 configuration files can be found in course-joj repository under home/tt/.config/joj. For each repository that will be used in the course (eg. hw, p1, and p2), create a corresponding folder. This directory will be the joj-root for that repository, eg. home/tt/.config/joj/hw is the joj-root for the hw repositories, ie. where JOJ3 configurations for homework tasks will be setup.

Repository level configuration

The first and simplest file to write is repo.toml. The template below can be used as a starter. It contains the part of the configuration that is used globally for all assignments and tasks. It should be saved in the joj-root configuration directory for the repository.

  • teaching_team [array of string]: TT members' jaccounts
  • max_size [float]: maximum size allowed for a repo in MB
  • release_tags [array of string]: list of allowed release tags

[files]

  • whitelist.patterns [array of string]: patterns of files allowed in the repository
  • whitelist.file [string]: file containing student-defined patterns of files. This option should not be enabled unless strictly necessary
  • required [array of string]: files that are written by students and must be found in the repository
  • immutable [array of string]: list all files managed by TT that students are forbidden to modify

Important:

  • it's impossible to disable health check
  • make whitelist.patterns very strict or students will push random files
  • unless you have no way to set or predict students' filenames, do not use whitelist.file
  • put this repo.toml file in the "root directory" containing all the configuration for this repository, eg. /home/tt/.config/joj/hw for homework repository
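A minimal sketch of a strict [files] section (the patterns are hypothetical, for a MATLAB homework repository):

```toml
[files]
whitelist.patterns = ["*.md", "h*/ex*.m"] # only markdown files and exercise scripts
required = ["Readme.md"]                  # must be present in the repository
immutable = [".gitignore"]                # TT managed, students cannot modify
```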

Task level configuration

This configuration file will be used to generate the task level configuration of JOJ3. This file should therefore clearly describe what stages to run, how to run them, and what information to share back with students.

General options

Global options:

  • task [string]: name of the task (eg. an exercise or project milestone)
  • release.stages [array of string]: list all stages to run on a release (default: [ ])
  • release.deadline [offset date-time]: RFC 3339 formatted date-time with offset
  • limit.cpu [int]: default maximum running time used for all stages in sec (default: 4)
  • limit.mem [int]: default maximum amount of RAM allowed for all stages in MB (default: 4)
  • limit.stdout [int]: default maximum stdout size applicable to all stages in kB (default: 4)
  • limit.stderr [int]: default maximum stderr size applicable to all stages in kB (default: 4)

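Putting the global options together, a hypothetical task header could look like this (all values are illustrative):

```toml
task = "hw2 ex1"                             # task name
release.deadline = 2024-11-01 23:59:00+08:00 # RFC 3339 with offset
release.stages = ["judge-base"]              # stages run on release
limit.cpu = 10                               # raise the default 4 sec limit for all stages
```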
Each stage is configured in a table. All parameters following a table definition belong to it until the next table is defined.

[stagename] table: configuration for stage stagename

  • command [string]: command to run
  • files.import [array of string]: list of files to import to ensure the command runs as expected (eg. driver and header files needed for compilation), path starts from JOJ configuration directory (eg. to import /home/tt/.config/joj/tools/matlab-joj use path "tools/matlab-joj")
  • files.export [array of string]: list of generated files to export to ensure future commands run as expected (eg. binaries needed for online-judge stages)
  • name [string]: stage name to display (default: stagename)
  • parsers [array of string]: list of parsers to run on the output of command (default: [ "result-status" ])
  • limit.cpu [int]: maximum running time for the stage in sec (default: 4)
  • limit.mem [int]: maximum amount of RAM allowed for the stage in MB (default: 4)
  • limit.stdout [int]: maximum stdout size for the stage in kB (default: 4)
  • limit.stderr [int]: maximum stderr size for the stage in kB (default: 4)
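As a sketch, a hypothetical compilation stage using these options (the command and file names are made up):

```toml
[compile]
command = "gcc -O2 -o main main.c"  # command to run
files.import = ["tools/main.c"]     # files needed for compilation
files.export = ["main"]             # binary needed by later judge stages
parsers = ["result-status"]         # report compilation success or failure
limit.cpu = 60                      # compilation may exceed the 4 sec default
```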

Online judge options

While online judge stages work in a similar way to other stages, they often feature more than one step. For instance, while a compilation or code quality stage is composed of a single step where the code is compiled or analysed, an online judge stage can feature many steps, each corresponding to a test-case. Therefore, on top of the regular stage options, online-judge stages can be adjusted at "step level", ie. for each test-case.

Any online-judge stage must feature the keyword judge, eg. judge, judge-base, asan-judge are all valid online-judge stage names while oj, run-base, and asan are not.

The following extra option is available for online-judge stages:

  • skip [array of string]: list of test cases to skip for this stage (default: [ ])

For each step, ie. test-case, the following configuration can be adjusted:

  • limit.cpu [int]: maximum running time for the test case in sec (default: 4)
  • limit.mem [int]: maximum amount of RAM allowed for the test case in MB (default: 4)
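For instance, a hypothetical judge stage skipping one test case and relaxing the limits of another (the case names and values are illustrative):

```toml
[judge-base]
command = "./driver ./main"
skip = ["case0"]        # do not run test case "case0"
limit.cpu = 2           # default limit for every test case in this stage

case3.limit.cpu = 10    # per test-case overrides
case3.limit.mem = 100
```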

Parsers

Currently the following parsers are available:

  • Generic:
    • dummy: output the score and comment
    • keyword: catch keywords on any generic text output and can force quit on a match
    • result-detail: provide basic statistics on memory and CPU usage
    • result-status: check if the executor exited with status accepted; if not, quit with an error status
  • Code quality:
    • clangtidy: parse clang-tidy output for specified keywords
    • cppcheck: parse cppcheck output for specified keywords
    • cpplint: parse cpplint output for specified keywords
    • elf: parse elf output for specified keywords
  • Online judge:
    • diff: difference between the output and a provided file content (commonly used for judge stage)

Parsers can be combined. For instance one might want to show the diff, result-status, and result-detail outputs for the online judge. Some parsers can also be further configured.
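For instance, to combine those three parsers on an online-judge stage:

```toml
parsers = ["diff", "result-status", "result-detail"]
```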

Dummy parser options

  • comment [string]: display any text
  • forcequit [boolean]: quit at the end of the stage if a test case fails (default: false)
  • score [int]: score for passing dummy stage (default: 0)

Notes.

  • Adding a dummy parser can be useful to add extra comments or separate parsers output.
  • The forcequit option can be used to prevent students from using a specific scope in their commit message.
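A sketch of the separator use case from the notes above (the comment text is arbitrary):

```toml
parsers = ["diff", "dummy", "result-detail"]
dummy.comment = "\n\n### Details\n" # separates the diff output from the result-detail output
```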

Keyword parser

  • forcequit [boolean]: quit at the end of the stage if there is a match
  • keyword [array of string]: list of keywords to catch on stdout
  • weight [array of int]: corresponding weight for each keyword caught on stdout

Note. This parser can be used to catch words in the output of any code quality tool which doesn't already have a parser implemented in JOJ3.
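A hypothetical keyword parser configuration (the keywords and weights are made up):

```toml
parsers = ["keyword"]
keyword.keyword = ["error", "warning"] # keywords caught on stdout
keyword.weight = [20, 10]              # corresponding weight for each keyword
```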

Result-detail parser

  • exitstatus [boolean]: display exit status (default: false)
  • mem [boolean]: display the memory usage (default: true)
  • stderr [boolean]: display stderr messages (default: false)
  • stdout [boolean]: display stdout messages (default: false)
  • time [boolean]: display run time (default: true)
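For instance, to show the exit status and stderr while hiding the default time and memory statistics:

```toml
parsers = ["result-detail"]
result-detail.exitstatus = true
result-detail.stderr = true
result-detail.time = false
result-detail.mem = false
```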

Result-status parser

  • comment [string]: comment to display on successful run
  • forcequit [boolean]: quit at the end of the stage if the error status is not 0 (default: true)
  • score [int]: score to assign on a successful run (default: 0)

Note. The main benefit of this parser is that it quits on failure. It could be used to exit at the end of a failed compilation stage.
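A minimal sketch (the comment and score values are illustrative):

```toml
parsers = ["result-status"]
result-status.comment = "Congratulations! Your code compiled successfully."
result-status.score = 10 # awarded on a successful run
```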

Clangtidy, cppcheck, and cpplint parsers

  • keyword [array of string]: list of keywords to catch on stdout
  • weight [array of int]: corresponding weight for each keyword caught on stdout
  • forcequit [boolean]: quit at the end of the stage if the score differs from the initial one, ie. if issues were found (default: false)

Elf parser

Not ready yet.

Diff parser options

  • comment.pass [string]: comment to display when passing a test case (default: "🥳Passed!")
  • comment.fail [string]: comment to display when failing a test case (default: "🧐Failed...")
  • forcequit [boolean]: quit at the end of the stage if a test case fails (default: false)
  • output.hide [boolean]: hide diff output (default: false)
  • output.ignorespaces [boolean]: ignore white spaces in diff output (default: true)
  • score [int]: score awarded for passing the test case (default: 0)

Note. This parser can be configured and adjusted for each test-case (other parsers are configured at stage level).
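For instance, hypothetical per test-case adjustments (the case names and values are illustrative):

```toml
parsers = ["diff"]
case4.score = 15                        # score for passing this test case
case5.diff.output.ignorespaces = false  # per test-case: compare white spaces too
case6.diff.output.hide = true           # per test-case: hide the diff output
```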

Examples

Sample repository configuration.
# teaching_team = ["mac-wang", "jon-lee", "allen_wr"] # jaccounts

max_size = 5 # 5MB repo max size
# release_tags = ["h1", "h2", "h3"] # list of valid release tags

[files]
immutable = [".gitignore", ".gitattributes", ".gitea/workflows/push.yaml"] # readonly files
required = [ "Changelog.md", "Readme.md" ] # files that must be found
Sample basic task configuration for MATLAB where most default options are used.
task="hw3 ex5"

release.deadline = 2024-10-18 23:59:00+08:00

[[stages]]
name = "judge-base"
command="./matlab-joj ./h3/ex5.m"
files.import = [ "tools/matlab-joj", "tools/matlab_formatter.py" ] 
score = 100 

parsers = ["diff", "result-detail"]
result-detail.time = false
result-detail.mem = false
result-detail.stderr = true
Sample advanced task configuration for C where many defaults are overwritten.
# general task configuration
task="Homework 1 exercise 2" # task name

release.deadline = 2024-10-12 23:59:00+08:00
release.stages = [ "compile" ]

[[stages]]
name = "Compilation"
command = "make.sh" # eg. script running cmake commands  
files.import = [ "tools/make.sh", "src/main.c", "src/task.h", "src/CMakeLists.txt" ]
files.export = [ "driver", "p2", "p2-msan" ]
limit.cpu = 180 # p2 takes long to compile
limit.stderr = 128 

# compile parsers 
parsers = [ "result-detail", "dummy", "result-status" ]
result-status.comment = "Congratulations! Your code compiled successfully."
dummy.comment = "\n\n### Details\n"
result-detail.exitstatus = true
result-detail.stderr = true
result-detail.time = false
result-detail.mem = false

[[stages]]
name = "File length check"
command = "./file-length 500 400 *.c *.h"  # command to run
files.import = [ "tools/file-length" ]

parsers = [ "keyword", "dummy", "result-detail" ]
keyword.keyword = [ "max", "recommend"] # keywords caught by corresponding JOJ plugin
keyword.weight = [ 50, 20 ] # weight of each keyword
result-detail.exitstatus = true
result-detail.stderr = true
result-detail.time = false
result-detail.mem = false

[[stages]]
name = "Clang-tidy checks"
command = "run-clang-tidy-18 -header-filter=.* -quiet -load=/usr/local/lib/libcodequality.so -p build"
limit.stdout = 65

parsers = [ "clangtidy", "dummy", "result-detail" ]
clangtidy.keyword = [ "codequality-no-global-variables", "codequality-no-header-guard", "readability-function-size", "readability-duplicate-include", "readability-identifier-naming", "readability-redundant", "readability-misleading-indentation", "readability-misplaced-array-index", "cppcoreguidelines-init-variables", "bugprone-suspicious-string-compare", "google-global-names-in-headers", "clang-diagnostic", "clang-analyzer", "misc", "performance" ]
clangtidy.weight = [10, 10, 50, 10, 5, 5, 10, 5, 5, 8, 5, 5, 5, 5, 8]
dummy.comment = "\n\n### Details\n"
result-detail.exitstatus = true
result-detail.stdout = true
result-detail.time = false
result-detail.mem = false

[[stages]]
name = "Cppcheck check"
command = "cppcheck --template='{\"file\":\"{file}\",\"line\":{line}, \"column\":{column}, \"severity\":\"{severity}\", \"message\":\"{message}\", \"id\":\"{id}\"}' --force --enable=all --quiet ./"
limit.stderr = 65

parsers = [ "cppcheck", "dummy", "result-detail" ]
cppcheck.keyword = ["error", "warning", "portability", "performance", "style"]
cppcheck.weight = [20, 10, 15, 15, 10]
dummy.comment = "\n\n### Details\n"
result-detail.exitstatus = true
result-detail.stderr = true
result-detail.time = false
result-detail.mem = false

[[stages]]
name = "Cpplint check"
command = "cpplint --linelength=120 --filter=-legal,-readability/casting,-whitespace,-runtime/printf,-runtime/threadsafe_fn,-readability/todo,-build/include_subdir,-build/header_guard --recursive --exclude=build ."
limit.stdout = 65

parsers = [ "cpplint", "dummy", "result-detail" ]
cpplint.keyword = [ "runtime", "readability", "build" ]
cpplint.weight = [ 10, 20, 15]
dummy.comment = "\n\n### Details\n"
result-detail.exitstatus = true
result-detail.stdout = true
result-detail.time = false
result-detail.mem = false

[[stages]]
name = "judge-base"
command="./driver ./mumsh"
limit.cpu = 3 
limit.mem = 75 
score = 10 

parsers = ["diff", "dummy", "result-detail"]
dummy.comment = "\n\n### Details\n"
result-detail.exitstatus = true
result-detail.stderr = true

case4.score = 15
case4.limit.cpu = 30
case4.limit.mem = 10
case4.limit.stdout = 8

case5.score = 25

case8.limit.stderr = 128

[[stages]]
name = "judge-msan"
command="./driver ./mumsh-msan"
limit.cpu = 10 # default cpu limit (in sec) for each test case
limit.mem = 500 # set default mem limit (in MB) for all OJ test cases
score = 10
skip = ["case0", "case11"]

parsers = ["diff", "dummy", "result-detail"]
dummy.comment = "\n\n### Details\n"
result-detail.exitstatus = true
result-detail.stderr = true

case4.score = 15
case4.limit.cpu = 30
case4.limit.mem = 10

case5.diff.output.ignorespaces = false

case6.diff.output.hide = true