config generation script #37

Closed
opened 2024-09-15 12:57:24 +08:00 by manuel · 59 comments
Owner

Goal: a script to generate joj config files

requirements:

  • docs: fully understand conf.toml specs and detail them in a wiki page
  • write a script to
    • import previous JOJ test cases
    • generate a valid and complete conf.toml file (could be interactive, eg. prompting for the score of a task, amount of memory, etc.)

note: the script could be 2 scripts, as import should occur once while generating a conf.toml might happen more often...

manuel added the
enhancement
help wanted
component
UI
labels 2024-09-15 12:57:24 +08:00
李衍志523370910113 was assigned by manuel 2024-09-15 12:57:24 +08:00
王韵晨520370910012 was assigned by manuel 2024-09-15 12:57:24 +08:00

created repo at FOCS/joj-config-generator
since I'm not an administrator I can't create a repo under FOCS-dev :(

Author
Owner

back to focs-dev :-)


this time it's me who has no permissions 🤯


which repo should we edit the wiki page in? @manuel

Author
Owner

sorry, I had forgotten to add the repo to the joj team... now fixed

write toml spec doc on joj repo, this is where it belongs. user doc can be in a readme in script repo


@manuel which repo?

toml specs in FOCS-dev/JOJ3, toml generator docs in our repo @jon-lee


@manuel

I have several questions:

  • about the doc: I should write all details about samples for parsers, I guess. and what stages other than healthcheck, run, compile do we have?
  • so we fetch all the code in the repo to the remote server to run joj; how do we specifically have joj grade a certain homework based on the commit msg?
  • I guess we should have Makefile and dockerfile in the runner-image of remote server, but I only find:
tt@joj-test1:~/runner-image$ ls -al
total 24
drwxr-xr-x 3 tt   tt   4096 Sep 10 11:06 .
drwx------ 8 tt   tt   4096 Sep 10 11:28 ..
-rw-r--r-- 1 tt   tt   4803 Sep 10 04:10 config.yaml
drwxr-xr-x 2 root root 4096 Sep 10 04:14 data
-rwxr-xr-x 1 tt   tt    403 Sep 10 05:56 register.sh

so what is the current situation of remote server?

Author
Owner
  • about the doc: I should write all details about samples for parsers, I guess. and what stages other than healthcheck, run, compile do we have?

fully document the specifications of the files, ie

  • the general file structure
  • what sections exist
  • the record names and types etc.

→ this is for development purposes

  • so we fetch all the code in the repo to the remote server to run joj; how do we specifically have joj grade a certain homework based on the commit msg?

this is a different issue. this issue is about generating a conf.toml file

  • I guess we should have Makefile and dockerfile in the runner-image of remote server, but I only find:
tt@joj-test1:~/runner-image$ ls -al
total 24
drwxr-xr-x 3 tt   tt   4096 Sep 10 11:06 .
drwx------ 8 tt   tt   4096 Sep 10 11:28 ..
-rw-r--r-- 1 tt   tt   4803 Sep 10 04:10 config.yaml
drwxr-xr-x 2 root root 4096 Sep 10 04:14 data
-rwxr-xr-x 1 tt   tt    403 Sep 10 05:56 register.sh

so what is the current situation of remote server?

not sure what you ask, but this is irrelevant to this issue. here you're simply supposed to write code to generate a valid conf.toml file. check the sample and you'll notice patterns, common parts, etc.

identify what parts should be filled in by the user (eg. asking questions or providing cli arguments) and generate a file.


this is a different issue. this issue is about generating a conf.toml file

I guess it is done in the demo.yaml file by fetching all the code with one line?

like

          fetch-depth: 0

@manuel :[

not sure what you ask, but this is irrelevant to this issue. here you're simply supposed to write code to generate a valid conf.toml file. check the sample and you'll notice patterns, common parts, etc.

I think I need to know more about how flexible it is. The thing I want clarified is: every time a homework/project is pushed to the repo, joj will check all of the stages one by one and use the parser; if one homework is missing, it gives 0. So it's like giving a score to the code of the whole repo? Things have changed a bit from last year, as far as I remember.


If so, things are easier to handle.


no, grading is handled by go-judge and joj3

basically what the TA needs to do is to determine the stages for joj3 to run, e.g. first compile, then healthcheck, then send to go-judge; during the process memory and cpu time are limited, etc. so a tool is needed to simplify the TA's work, such that they don't need to write toml themselves; instead they input

compiler = g++
time limit = 1s
cpu limit = 100%

then we generate

[[stages]]
[stages.command]
command="/usr/bin/g++"
......
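To make that mapping concrete, here is a minimal sketch (in Python, assuming the generator script is written in Python; `render_stage` and the hard-coded `/usr/bin/<compiler>` path are illustrative assumptions) that turns simple TA inputs into a `[[stages]]` block using the unit conventions from the sample above:

```python
def render_stage(compiler, time_limit_s, mem_limit_mb):
    """Render a minimal [[stages]] compile block from simple TA inputs.

    cpuLimit is in nanoseconds and memoryLimit in bytes, matching the
    JOJ3 samples quoted in this thread. The /usr/bin/<compiler> path
    is an assumption made for the sketch.
    """
    cpu_limit = int(time_limit_s * 1_000_000_000)  # seconds -> nanoseconds
    memory_limit = mem_limit_mb * 1024 * 1024      # MB -> bytes
    return (
        "[[stages]]\n"
        'name = "compile"\n'
        "[stages.executor]\n"
        'name = "sandbox"\n'
        "[stages.executor.with.default]\n"
        f'args = ["/usr/bin/{compiler}"]\n'
        f"cpuLimit = {cpu_limit}\n"
        f"memoryLimit = {memory_limit}\n"
    )
```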

There is no need to change the content of demo.yaml. It just checks out the repo and runs two commands.


you can check JOJ3-actions-examples to see what joj3 is doing


what I am documenting right now is giving an interpretation to

[[stages]]
[stages.command]
command="/usr/bin/g++"
......

and things like that


you can check JOJ3-actions-examples to see what joj3 is doing

OK, I will take a closer look.


Sandbox executor always sends to go-judge. Stages run one by one. Each stage contains an executor that runs a command and gives the command output, and a parser that parses the output and generates a score and comment.

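That stage model (an executor runs a command, a parser turns its output into a score and comment) can be sketched as a plain loop; `Stage` and `run_stages` are stand-in names, not the real JOJ3 interfaces:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    name: str
    executor: Callable[[], str]               # runs a command, returns its raw output
    parser: Callable[[str], tuple]            # turns the output into (score, comment)

def run_stages(stages):
    """Run stages in order: each executor's output feeds that stage's parser."""
    results = []
    for stage in stages:
        output = stage.executor()
        score, comment = stage.parser(output)
        results.append({"name": stage.name, "score": score, "comment": comment})
    return results

# Toy demo: a "run" stage whose diff-style parser awards 10 when stdout is "42".
demo = Stage("run", lambda: "42",
             lambda out: (10, "ok") if out == "42" else (0, "wrong"))
```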
Author
Owner

@nuvole check the toml file format specs, this will help you understand and interpret the file content


Sandbox executor always sends to go-judge. Stages run one by one. Each stage contains an executor that runs command and give the command output, and a parser that parse the output and generate score and comment.

OK, it seems my previous understanding is right, so what I need to doc now is the format for the specific things in conf.toml


what I need to doc now is the format for the specific things in conf.toml

yes


@manuel @bomingzh what is the current situation of runner-image? I guess we have switched our strategy to using an action binary?


i don't understand the question


so previously, @zzjc123 mentioned to me that we may have a makefile and dockerfile in the runner-image folder of the server. But now we have

tt@joj-test1:~/runner-image$ ls -al
total 24
drwxr-xr-x 3 tt   tt   4096 Sep 10 11:06 .
drwx------ 8 tt   tt   4096 Sep 16 16:47 ..
-rw-r--r-- 1 tt   tt   4803 Sep 10 04:10 config.yaml
drwxr-xr-x 2 root root 4096 Sep 10 04:14 data
-rwxr-xr-x 1 tt   tt    403 Sep 10 05:56 register.sh

what have we changed? and does that matter for the change of runner-image? I think I need to know more detail about what happens on the server.


and also, what about the dummy and keyword parsers?


makefile and dockerfile on the runner-image folder of the server.

it is an outdated workflow now, I think.


yes, so what should be the current workflow


Everything is run inside a container through Gitea actions. runner-image is a repo used to build the image for that container. I do not think anything inside tt@joj-test1 will affect any future work. The working config is in tt@engr151-24fa, not tt@joj-test1.

Author
Owner

i don't think config file generation has anything to do with "where" or "how" things are run.
you should simply write a short script which can interactively ask for info or read it from a file (eg. what language do you use? (then display available choices) or read LANG=c++ from a file).

for reading from a file, an env file or toml file could both be acceptable i guess? @bomingzh did you have any specific format in mind?

overall the goal of this script/issue is to prepare a config file for JOJ to be able to run. How it will run is in "another layer". so assume all software are correctly installed. Your output file (JOJ toml config) will be parsed and tasks run based on its content. You don't need to worry about anything else :-)


OK, so here is what I planned to do: write two scripts, one for matlab and one for C and C++

  • for matlab one, we have stage:
- healthcheck
- run

where we use healthcheck parser for healthcheck stage, and diff parser for the run stage.

  • for C and C++, we have stage:
- healthcheck
- code-check
- compile
- run

where we have healthcheck parser for healthcheck stage, clang-tidy, cpplint, cppcheck for code-check stage, result-status for compile stage, diff for run stage

anything to add or anything wrong? @manuel @bomingzh


JOJ3 currently uses https://github.com/koding/multiconfig with no limitation, so according to the docs,

Multiconfig is able to read configuration automatically based on the given struct's field names from the following sources:

  • Struct tags
  • TOML file
  • JSON file
  • YAML file
  • Environment variables
  • Flags

The script should output any of these formats. If we need to migrate from JOJ1, then the input file format should be yaml.

Author
Owner

OK, so here is what I planned to do: write two scripts, one for matlab and one for C and C++

No! Your script must be generic and apply to any case. Some assignments could even have more than one language.
this script will be used in all JI courses using JOJ, not only in 151.

so your script could ask for what plugins will be needed (code quality (which tools), compilation "requirements", language, etc.) then generate a conf.toml which is suitable for the assignment.

the script should cover all possible combinations (eg. FOCS-dev/JOJ3#14 (comment))

Author
Owner

@bomingzh

The script should output any of these formats. If we need to migrate from JOJ1, then the input file format should be yaml.

how similar are joj1 and joj3 config files (in terms of fields/features compatibility)? is it best to "import JOJ1" and "fix it" or start from scratch with a new config?

maybe the judging part can be imported and the rest generated from scratch? (JOJ1 had no code quality or repo health feature/config)

i guess we would still need a "creating from scratch" option as new courses/assignments might appear? (maybe can be a feature for later as for now courses mostly need import (aside of 477 latex compilation))


@manuel

how similar are joj1 and joj3 config files (in terms of fields/features compatibility)? is it best to "import JOJ1" and "fix it" or start from scratch with a new config?

It should be similar. The core parts are

  1. commands to compile, commands to run
  2. executable to transfer between stages
  3. case limitation on time & space
  4. case input & output

JOJ1:

languages:
  - language: llvm-c
    compiler_args: >-
      clang -O2 -Wall -pedantic -Werror -Wno-unused-result
      -fsanitize=address -fno-omit-frame-pointer -fsanitize=undefined
      -std=c11 -o /out/main /in/src/main.c
      /in/l6.c -lm -I /in/src      
    code_file: main.c
    execute_file: main
    execute_args: main
compile_time_files: src/
runtime_files: env/
cases:
  - time: 1s
    memory: 32m
    score: 10
    input: case0.in
    output: case0.out

JOJ3:

[[stages]]
name = "compile"
[stages.executor]
name = "sandbox"
[stages.executor.with.default]
args = ["clang++", "main.c", "-o", "main"]
env = ["PATH=/usr/bin:/bin"]
cpuLimit = 10_000_000_000
memoryLimit = 104_857_600
procLimit = 50
copyInCwd = true
copyOutCached = ["main"]
[stages.parser]
name = "result-status"
[[stages]]
name = "run"
[stages.executor]
name = "sandbox"
[stages.executor.with.default]
args = ["./main"]
env = ["PATH=/usr/bin:/bin"]
cpuLimit = 1_000_000_000
memoryLimit = 104_857_600
procLimit = 50
copyOut = ["stdout", "stderr"]
[stages.executor.with.default.stdout]
name = "stdout"
max = 4_096
[stages.executor.with.default.stderr]
name = "stderr"
max = 4_096
[stages.executor.with.default.copyInCached]
main = "main"
[[stages.executor.with.cases]]
[stages.executor.with.cases.stdin]
src = "./case0.in"
[stages.parser]
name = "diff"
[[stages.parser.with.cases]]
[[stages.parser.with.cases.outputs]]
score = 10
fileName = "stdout"
answerPath = "./case0.out"
ignoreWhitespace = true
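One concrete piece of a JOJ1 importer would be converting the human-readable case limits (`1s`, `32m`) into JOJ3's nanosecond/byte integers, matching the two samples above. A sketch, assuming those suffix conventions (the function names are hypothetical):

```python
import re

_TIME_NS = {"ms": 1_000_000, "s": 1_000_000_000}
_MEM_BYTES = {"k": 1024, "m": 1024**2, "g": 1024**3}

def parse_time(value):
    """'1s' -> 1_000_000_000 ns (JOJ3 cpuLimit)."""
    m = re.fullmatch(r"(\d+)(ms|s)", value.strip())
    if not m:
        raise ValueError(f"bad time limit: {value!r}")
    return int(m.group(1)) * _TIME_NS[m.group(2)]

def parse_memory(value):
    """'32m' -> 33_554_432 bytes (JOJ3 memoryLimit)."""
    m = re.fullmatch(r"(\d+)([kmg])", value.strip().lower())
    if not m:
        raise ValueError(f"bad memory limit: {value!r}")
    return int(m.group(1)) * _MEM_BYTES[m.group(2)]
```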

another question: for each homework, for example, do we have only one toml file, or multiple toml files, one for each exercise?


and in the multiple-files case, how do we ensure there is only one healthcheck stage for all of them?

Author
Owner

the initial idea was:

  • toml for high level config (user friendly)
  • json for internal config (computer friendly)

seems our toml is now a bit complicated, so we need to auto-generate it. we might want to think about it and see if json would not be better. but for now we can go with toml (unless switch is easy/fast). toml is nice for human editing and more "friendly" than yaml.

for the judge part we can have a default toml or env config setup (eg. fixed score, time, mem), then it can be easily edited by TAs if needed. this initial high level simple file is then parsed by the generating script in order to get the "real" config file for the assignment. it would make sense that this "computer friendly" config file be in one piece.

the high level config file could for instance simply list exercises that can be tested on JOJ (this is what we had last fall), it looked like

ASSIGNMENTS=( \
  [h0]="64fe78050124c3000638f9e7 650ace270124c30006390d5a other" \
  [h1]="64fff1130124c3000638fad5 -1 64ffee800124c3000638fac6 64ffeeff0124c3000638fac9 64ffef320124c3000638facc 64ffef860124c3000638facf 64ffefc10124c3000638fad2 -1 matlab" \
  [h2]="651bdf090124c30006396ef0 -1 6505413e0124c3000638fbe2 -1 6505413f0124c3000638fbe5 -1 650541400124c3000638fbe8 matlab" \
)

if you take h1, it features the JOJ1 id for each exercise (-1 means not available to test on JOJ), and the last element of the line specifies the language.

this type of thing is easy to write for a human in an env or toml file. with toml we can be a bit more specific for each exercise: either we provide mem/time limitations or default ones are applied. we could also specify what test cases apply to what exercise, etc.

in the end we would get a single long/complex generated file per hw (computer friendly) while keeping basic config easy to handle for humans. this script is the bridge between the 2

@bomingzh is it what you also had in mind?

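Parsing one entry of that high-level format is straightforward; a sketch under the layout described above (JOJ1 ids, -1 for not testable, the language as the last token; `parse_assignment` is a hypothetical name):

```python
def parse_assignment(entry):
    """Split one ASSIGNMENTS value into (exercise ids, language).

    A '-1' id becomes None (exercise not testable on JOJ); the last
    token names the language.
    """
    *ids, language = entry.split()
    return [None if t == "-1" else t for t in ids], language

# Example with shortened ids (the real ones are long JOJ1 object ids):
ids, lang = parse_assignment("64fff113 -1 64ffee80 matlab")
```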

For JOJ3, switch between yaml, toml, json is very easy.

in the end we would get a single long/complex generated file per hw (computer friendly) while keeping basic config easy to handle for humans. this script is the bridge between the 2

Agree. In the simplest case, let the script user only provide the compile command and run command, then the script should fill all the other values by default (1s, 32MB, 10 points), and detect all the cases input and output in a dir.

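Detecting the case inputs and outputs in a dir, as suggested here, could look like the following sketch (assuming the `caseN.in`/`caseN.out` naming from the samples above; `find_cases` is a hypothetical helper):

```python
from pathlib import Path

def find_cases(case_dir):
    """Pair caseN.in with caseN.out files in case_dir.

    Only complete input/output pairs are kept; ordering is
    lexicographic (so case10 sorts before case2 in this sketch).
    """
    pairs = []
    for infile in sorted(Path(case_dir).glob("case*.in")):
        outfile = infile.with_suffix(".out")
        if outfile.exists():  # skip inputs with no matching answer file
            pairs.append((infile.name, outfile.name))
    return pairs
```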
Author
Owner

For JOJ3, switch between yaml, toml, json is very easy.

ok then let's do it now, otherwise we'll have to redo some work later...

Agree. In the simplest case, let the script user only provide the compile command and run command, then the script should fill all the other values by default (1s, 32MB, 10 points), and detect all the cases input and output in a dir.

TAs would only need to push that simple file, then config file generation would happen on the server (and be pushed back to the repo). so no need to install python packages or anything on their local computer


What's the current strategy for config? One config per hw or one config per question?


afaik it's one per question


we will have config files named h1-ex1, h1-ex2, etc.


If that is the case, we currently need h1-ex1-hc, h1-ex1-cq, h1-ex1-oj


i guess no need for -cq and -oj? if your commit msg parser can figure out what to do


What do you mean? For now the commit msg parser just decides which config to read.

https://focs.ji.sjtu.edu.cn/git/FOCS-dev/JOJ3/src/branch/commit-parser/cmd/joj3/conf.go


What do you mean? For now the commit msg parser just decides which config to read.

okay, i will look into your parser later to figure out what we should do :)


I think we might be able to fix everything if we use one config for each hw (one config for each ex is also fine) without changing the mainImpl

e.g.

https://focs.ji.sjtu.edu.cn/git/FOCS-dev/JOJ3-examples/src/branch/diff/complex/conf.json

if we write in the following pattern then multiple exercises are possible to implement.

```json
{
  "stages": [
    {
      "name": "compile-ex1",
      "executor": {...},
      "parser": {...}
    },
    {
      "name": "compile-ex2",
      "executor": {...},
      "parser": {...}
    },
    {other compile stages...},
    {
      "name": "run-ex1",
      "executor": {...},
      "parser": {...}
    },
    {
      "name": "run-ex2",
      "executor": {...},
      "parser": {...}
    },
    {other run stages...}
  ]
}
```
The issue is that if we just brutally run all stages in sequence, the code in a previous exercise might have side effects on another exercise. So in this case:

  1. we may need to change the `mainImpl` in the end, and
  2. one config per exercise should be much easier to implement.
What is the advantage of using fewer config files?

I think we need to modify `main` s.t.

  1. get a list of `conf` from config parser
  2. run them independently in case they have dependencies to handle
  3. cleanup after running each exercise
  4. write into the same json file
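A rough sketch of that modified flow (Python pseudologic with invented names — the real `mainImpl` is Go, and `run_stage` here is just a stub standing in for the executor/parser pipeline):

```python
import json


def run_stage(stage):
    # Hypothetical stub: execute one stage and return its result record.
    return {"name": stage["name"], "score": 0}


def run_all(confs, out_path):
    """Run each exercise's conf independently, clean up between
    exercises, and write all results into the same JSON file."""
    results = []
    for conf in confs:                    # 1. list of confs from the parser
        for stage in conf["stages"]:      # 2. run each conf independently
            results.append(run_stage(stage))
        # 3. cleanup after each exercise would go here (temp dirs, sandboxes, ...)
    with open(out_path, "w") as f:        # 4. write into the same json file
        json.dump(results, f, indent=2)
    return results
```

The key design point is step 3: the cleanup boundary between confs is what prevents one exercise's side effects from leaking into the next.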
> What is the advantage of using fewer config files?

no idea, I write code for config per ex

As in JOJ1 and every other OJ, each exercise needs a submission. Why not do it in JOJ3?

We have this feature I think.

> As in JOJ1 and every other OJ, each exercise needs a submission. Why not do it in JOJ3?

We can add a `pre_run` field in `conf.json` for less duplication. The stages from that `pre_run` file should be added to the beginning of the current file.

Now the first priority should be a full map from each keyword to a combination of types of config files needed, and see if a simple `pre_run` is enough.

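One possible reading of the `pre_run` semantics, sketched in Python (the field name, file layout, and prepend rule are assumptions about the proposal, not implemented behavior):

```python
import json


def load_conf(path):
    """Load a conf.json; if it names a pre_run file, prepend that
    file's stages to this one (hypothetical merge semantics)."""
    with open(path) as f:
        conf = json.load(f)
    pre = conf.pop("pre_run", None)
    if pre is not None:
        with open(pre) as f:
            # stages from the pre_run file come first
            conf["stages"] = json.load(f)["stages"] + conf.get("stages", [])
    return conf
```

Prepending (rather than appending) matches the intent that shared setup stages like health checks run before the exercise-specific ones.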
But will it be safer and reject strange bugs if we take this strategy? Students might submit multiple exercises at the same time I think

> get a list of conf from config parser
> run them independently in case they have dependencies to handle
> cleanup after running each exercise
> write into the same json file

No, one commit, one exercise submission. They should not create that kind of commits.

got it.

We can even make a config file for config files. Just let JOJ3 read that meta config file. Then it can map any kind of commit message to one real config.json. So each course can have different commit message requirements. For example, some courses in the future may only use gitea as an OJ; then every commit can trigger compile+diff, dropping all the other parts.

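A minimal sketch of what such a meta config lookup could look like (the table format, regex patterns, and paths are all invented for illustration — not JOJ3's actual behavior):

```python
import re

# Hypothetical meta config: the first pattern that matches the commit
# message wins and selects the real config file for this run.
META_CONF = [
    (r"^joj: *(h\d+)-(ex\d+)", "conf/{0}/{1}/conf.json"),
    # a course that only uses gitea as an OJ: every commit -> compile+diff
    (r".*", "conf/default/conf.json"),
]


def resolve_conf(commit_msg):
    """Map a commit message to a config file path via the meta config."""
    for pattern, template in META_CONF:
        m = re.match(pattern, commit_msg)
        if m:
            return template.format(*m.groups())
    return None
```

Keeping the mapping in data rather than code is what lets each course define its own commit-message conventions without touching JOJ3 itself.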
Author
Owner

> No, one commit, one exercise submission. They should not create that kind of commits.

for grading (eg. on a release) we want to have a JOJ score for the whole hw. `joj: h3` should be able to run JOJ on all exercises from h3. every exercise is graded individually and 1 JOJ commit per exercise is fine, but 1 commit per hw should also be fine.

we can however drop `joj: h3 4 7` (joj on h3 ex. 4 and 7) if you think this is not a good idea.

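If the `joj: h3 4 7` form were kept, parsing it is cheap — a sketch of the message grammar, not the actual parser in `conf.go`:

```python
def parse_joj_scope(commit_msg):
    """Parse 'joj: h3' (whole hw) or 'joj: h3 4 7' (hw 3, exercises
    4 and 7). Returns (hw, [exercises]); an empty list means all."""
    if not commit_msg.startswith("joj:"):
        return None
    parts = commit_msg[len("joj:"):].split()
    if not parts:
        return None
    hw, exercises = parts[0], [int(p) for p in parts[1:]]
    return hw, exercises
```

An empty exercise list then means "run every exercise of the hw", which covers the whole-hw grading run on release.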
Then we might need some adjustment for the following problem:

> But will it be safer and reject strange bugs if we take this strategy? Students might submit multiple exercises at the same time I think

We need to design new things to handle the whole hw JOJ score on release. It is not implemented and even considered in the current architecture.

We can trigger the run of JOJ3 of the whole homework on release using gitea actions. But now JOJ3 can not take the release name as an input.

Author
Owner

generator script should check if `conf.yaml` file exists. if so then use it to import from the joj1 config, otherwise generate a new file from the `conf.toml` config file
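A sketch of that dispatch (function name and return values are placeholders, assuming the JOJ1 import path and the fresh-generation path are separate code paths in the script):

```python
import os


def make_conf(repo_dir):
    """Decide how to produce the config: import from a JOJ1 conf.yaml
    if one exists, otherwise generate from a conf.toml (hypothetical
    two-mode entry point for the generator script)."""
    joj1_conf = os.path.join(repo_dir, "conf.yaml")
    if os.path.exists(joj1_conf):
        return "import", joj1_conf          # one-time JOJ1 import path
    return "generate", os.path.join(repo_dir, "conf.toml")
```

This also matches the note in the issue that import happens once while generation might happen more often, so the two modes could just as well live in two scripts.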