Commit 61d04abd in async-mpst-gen-choice
Update README for docker
Authored 1 year ago by Felix Stutz
Parent: 2e43c978
Merge request: !1 (Subset projection)

Showing 2 changed files with 66 additions and 6 deletions:
- README.md: 65 additions, 5 deletions
- evaluation_functionality/evaluation_config.py: 1 addition, 1 deletion
README.md (+65, −5)
@@ -63,12 +63,72 @@ Only if `--overwrite` is given, the tool will overwrite existing files.
## Reproducing the Evaluation Results
We provide a docker image to reproduce the results.
### Resource Requirements and Expected Run Time
This evaluation was originally conducted on a laptop with an 11th Gen Intel® Core™ i7-1165G7 @ 2.80GHz and 32GB of RAM.
However, the computation was not parallelized and the maximal RAM usage was less than 500MB.
The run time was less than 6 minutes.
[TODO: these numbers were measured directly on the host, not within docker. ADAPT]
### How to Setup and Run
We first load the docker image `docker-tool-cav` from the `tar` file:
```
docker load < docker-tool-cav.tar
```
We run it in interactive mode, providing an absolute path `/absolute/path/to/local/result/folder`, which is mapped to the results folder inside the docker container and will thus later contain the results. (This might require `sudo`.)
```
docker run -v /absolute/path/to/local/result/folder:/tool-cav/evaluation_results -it docker-tool-cav bash
```
Then, we change into the tool directory.
```
cd tool-cav/
```
Last, we run the evaluation script.
```
./evaluation_functionality/script_evaluation.sh
```
While computing, the script reports which step it is currently in:
```
1) Computing results for subset projection.
2) Computing results for classical projection.
3) Computing results for state space explosion analysis.
```
If each command was successful, it prints:
```
Computing requested results was successful.
```
### How to Inspect the Results
The results of the evaluation can be found in `evaluation_results`.
The files `table_projection_subset.txt` and `table_projection_classical.txt` contain the results of projecting various global types using the respective projection operator.
Combined, they provide the data presented in Table 1.
For both approaches, we provide the expected result of the projection (in `EvalSubsetProjection.py` and `EvalClassicalProjection.py`) and check against it, as indicated by the last line in each table.
If some projection is not defined, we report -1 as its sum.
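For illustration, the -1 convention for undefined projections might be implemented along these lines (a hypothetical sketch, not the tool's actual code; the function name `size_sum` and the encoding of an undefined projection as `None` are invented here):

```python
# Hypothetical sketch (not the tool's actual code): how a table's size
# column could report undefined projections. An undefined projection is
# modelled here as None.
def size_sum(projection_sizes):
    """Sum the projection sizes; report -1 if any projection is undefined."""
    if any(size is None for size in projection_sizes):
        return -1
    return sum(projection_sizes)

print(size_sum([3, 5, 2]))   # 10: all projections defined
print(size_sum([3, None]))   # -1: one projection undefined
```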
Note that the size for global types is computed slightly differently for both approaches.
For the classical projection, the size is computed purely syntactically, accounting for identical subterms multiple times.
For the subset projection, however, we consider the resulting finite state machine, i.e., the number of states as well as the number of transitions, and thus do not account for identical subterms several times.
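The difference between the two size measures can be illustrated with a toy example (an invented nested-tuple term representation, not the tool's data structures): a subterm that occurs several times is counted on every occurrence by the syntactic measure, but only once by the state-machine measure.

```python
# Toy illustration with an invented term representation (nested tuples),
# not the tool's data structures. The shared subterm is the same tuple,
# so the FSM view counts it once while the syntactic view counts it twice.
shared = ("a", ("b", "end"))          # a small chain of actions
inner = ("choice", shared, shared)    # uses the subterm twice
term = ("choice", inner, inner)       # and again, one level up

def syntactic_size(t):
    """Purely syntactic size: every occurrence of a subterm is counted."""
    if t == "end":
        return 1
    return 1 + sum(syntactic_size(c) for c in t[1:])

def fsm_size(t):
    """FSM size: number of distinct states plus number of transitions."""
    states, transitions = set(), set()
    def visit(u):
        if u in states:
            return
        states.add(u)
        if u != "end":
            for c in u[1:]:
                transitions.add((u, c))
                visit(c)
    visit(t)
    return len(states) + len(transitions)

print(syntactic_size(term))  # 15: identical subterms counted repeatedly
print(fsm_size(term))        # 9: 5 states + 4 transitions
```

With deeper nesting of shared subterms, the syntactic size grows much faster than the FSM size, which is why the two measures are reported separately.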
The file `plot_state_space_explosion.pdf` provides a plot as in Figure 4, in which sizes are determined using finite state machines.
The example comprises a family of global types G_n.
While Figure 4 reports results for n < 20, the default configuration only reports results for n < 17.
This is considerably faster while still showing the state space explosion.
The size of the global types can still be changed in `evaluation_config.py`.
The intermediate results of the evaluation can be found in `evaluation_data` for inspection.
evaluation_functionality/evaluation_config.py (+1, −1)
@@ -3,7 +3,7 @@ MAX_NUM_RUNS = 1000
 DEFAULT_MAX_SIZE_OVERHEAD = 250
 DEFAULT_MAX_SIZE_STATE_EXP = 175
-TIMEOUT_SCALE = 1000 * 60 * 10  # 10 minutes
+TIMEOUT_SCALE = 1000 * 60 * 60  # 60 minutes
 PREFIX_TYPES = "global_types/projectability/"
 PREFIX_PARAMETRIC_TYPES = "global_types/parametric/"