
Shared Iris CI

This repository contains the shared Iris CI, i.e., the common code used by the CI of the Iris projects.

Projects automatically use the latest commit of some branch for their CI, so branches should remain reasonably compatible. Every branch here comes with a corresponding branch in the Docker repo, providing the Docker image that should be used. Currently, we have the following branches:

Pushing to master here will not be picked up by the projects; always remember to also push to the right branch!

How to configure CI for your project

Inside your project, create a .gitlab-ci.yml with the following prelude, replacing \REPO with the name of the repository (e.g. iris/stdpp):

image: ralfjung/opam-ci:opam2

stages:
  - build

variables:
  CPU_CORES: "10"
  OCAML: "ocaml-variants.4.14.0+options ocaml-option-flambda"

.template: &template
  stage: build
  tags:
  - fp
  script:
  - git clone https://gitlab.mpi-sws.org/iris/ci.git ci -b opam2
  - ci/buildjob
  cache:
    key: "$CI_JOB_NAME"
    paths:
    - _opam/
  only:
  - master@\REPO
  - /^ci/@\REPO
  except:
  - triggers
  - schedules
  - api

## Build jobs

This sets up a template that we will use for the jobs. The tags determine the CI runner that will pick up these jobs; only and except make this run only when things get pushed to master or to branches whose names start with ci, and not when the pipeline gets triggered through the API or an automatic schedule. See the GitLab CI documentation for more information.

Below this, you can configure your build jobs. A normal job looks like this:

build-coq.8.15.1:
  <<: *template
  variables:
    OPAM_PINS: "coq version 8.15.1"

This builds your project against Coq 8.15.1. The first line determines the job name shown in GitLab. The second line imports the template we defined in the prelude. The last two lines configure the job; OPAM_PINS sets up the pins for the environment that the job will be built in. You can add CI_COQCHK: "1" to run coqchk after the build is done; be aware that this can be very slow and hence use a lot of CI time.
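
For example, a job that additionally runs coqchk could look roughly like this (the job name is just an illustration):

build-coq.8.15.1-coqchk:
  <<: *template
  variables:
    OPAM_PINS: "coq version 8.15.1"
    # run coqchk after the build; this can take a lot of CI time
    CI_COQCHK: "1"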

See buildjob for a list of available configuration variables. Some of them require a secret or a key to be configured for the project; these are passed in as "Environment variables" set up in the GitLab UI.

The most versatile variable is OPAM_PINS, which configures the packages to pin before installing dependencies. Each pin can be one of the following (see the sketch after this list):

  • a group of three list elements: PACKAGE KIND TARGET.
  • a single element denoting a git repository, starting with git+https://.
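
As a rough illustration (the job name, the Coq version, and the Iris branch are just placeholders), a job could combine both kinds of pins like this:

build-coq.8.15.1-iris.dev:
  <<: *template
  variables:
    # first pin: "PACKAGE KIND TARGET"; second pin: a git repository
    OPAM_PINS: "coq version 8.15.1   git+https://gitlab.mpi-sws.org/iris/iris#master"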

Below, we describe some of the things that are common amongst Iris build jobs.

Nightly builds

To make sure that a project keeps working as we update Iris, we can set up nightly builds against the latest Iris version; often, we also pin a development branch of Coq to make sure we keep working with the next patch release. To this end, add a job like this:

build-iris.dev:
  <<: *template
  variables:
    STDPP_REPO: "iris/stdpp"
    IRIS_REPO: "iris/iris"
    OPAM_PINS: "coq version 8.13.dev   git+https://gitlab.mpi-sws.org/$STDPP_REPO#$STDPP_REV   git+https://gitlab.mpi-sws.org/$IRIS_REPO#$IRIS_REV"
  except:
  only:
  - triggers
  - schedules
  - api

The empty except: clause clears the exclusions inherited from the template, and the only: clause makes this job run only when a pipeline is triggered through the REST API or by a schedule, not when new commits are pushed to the repo. The pins are set up to install a development version of all the dependencies, and the 8.13 branch of Coq.

The STDPP_REV and IRIS_REV variables can be set via an API request to test this repository against different branches of std++ or Iris; STDPP_REPO and IRIS_REPO can be set to fetch those branches from forks (see the build-all script in the Iris repository, or the sketch below).
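
For illustration, such a request could be made with GitLab's generic pipeline trigger API roughly as follows; the trigger token, the project ID, and the branch name are placeholders that have to be filled in for the project in question:

curl --request POST \
  --form "token=$TRIGGER_TOKEN" \
  --form "ref=master" \
  --form "variables[IRIS_REPO]=iris/iris" \
  --form "variables[IRIS_REV]=my-feature-branch" \
  "https://gitlab.mpi-sws.org/api/v4/projects/PROJECT_ID/trigger/pipeline"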

Now ask an admin to set up a pipeline schedule. We usually only do nightly builds for projects in the iris group; those schedules should be "owned" by the iris-dev user so that the Iris development team gets the emails about broken builds. This can be achieved by having an admin follow these steps:

  • At https://gitlab.mpi-sws.org/admin/users/iris-dev, click "Impersonate".
  • Go to the schedule settings for the project you are setting up, and add the schedule. It will be owned by iris-dev. If you already created the schedule, you can "Take ownership" to make iris-dev own it.
  • Click the button in the top-right corner to stop impersonating iris-dev.

Timing

To track performance of the project, one of the jobs should look like this:

build-coq.8.10.0:
  <<: *template
  variables:
    OPAM_PINS: "coq version 8.10.0"
  tags:
  - fp-timing

The last two lines make this job run on the more isolated runner. This automatically triggers performance measurements for building every single file of this project. The results are submitted to the coq-speed server, where you can browse them in a web UI. The "configuration" is set to the job name (i.e., build-coq.8.10.0 in the example above).

This requires setting the TIMING_SECRET CI variable. Talk to an admin about this. The variable should not be protected because we do timing measurements on all branches, not just master!

You also need to enable the coop-timing runner for your project. Again this needs to be done by an admin.

Behind the scenes, the data is sent to our custom webhook, which runs on coq-speed.mpi-sws.org and locally stores the data in a PostgreSQL database.

Ad-hoc timing

By "ad-hoc timing", we mean testing the performance effects of an Iris branch against downstream developments: we do an "ad-hoc" timing run of the downstream development against both Iris master and the branch, and then we show a diff of those to runs.

To support this, the downstream project should already support nightly builds. Furthermore, it needs to have an "ad-hoc timing job" that can be triggered like the nightly builds:


trigger-iris.timing:
  <<: *template
  variables:
    OPAM_PINS: "coq version 8.13.2   git+https://gitlab.mpi-sws.org/$IRIS_REPO#$IRIS_REV"
  tags:
  - fp-timing
  only:
  - triggers
  - schedules
  - api
  except:
    variables:
    - $TIMING_AD_HOC_ID == null

Furthermore, the nightly build needs to be disabled for these ad-hoc runs by changing the only clause of the nightly job (build-iris.dev above) to

  only:
    refs:
    - triggers
    - schedules
    - api
    variables:
    - $TIMING_AD_HOC_ID == null

Basically, this means that when a pipeline is triggered without the TIMING_AD_HOC_ID variable being set, it is still treated like a nightly build. When the pipeline variable TIMING_AD_HOC_ID is set, an ad-hoc timing run is done instead. The buildjob script in this repository takes care of submitting the timing information to the coq-speed.mpi-sws.org server with the right metadata.

The best way to trigger these ad-hoc timing builds is via the iris-bot script in the Iris repository.

Opam publishing

Those of our projects that have reverse dependencies get automatically published on our opam repo. Set the OPAM_PKG variable to the name of the package to make that happen. This requires a per-project secret, see the private opam-updater documentation for more details. Be careful to only set OPAM_PKG in one job, or your builds will fail because the same commit gets published several times.
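
As a rough sketch (coq-myproject is a placeholder for the name of your package), this amounts to adding the variable to exactly one build job:

build-coq.8.15.1:
  <<: *template
  variables:
    OPAM_PINS: "coq version 8.15.1"
    # name of the opam package to publish; set this in only one job
    OPAM_PKG: "coq-myproject"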

coqdoc publishing

We can automatically deploy the generated coqdoc for a project to the web. To make this happen, set DOC_DIR to an rsync path that the docs should be pushed to; usually that's something like "coqdoc@center.mpi-sws.org:projectname". You also need to put the secret part of an SSH key that can access this location into the DOC_KEY variable (that should be a protected variable because it is only needed by the master branch). Preferably, use a fresh key for each project. For coqdoc@center.mpi-sws.org, these keys should be set up as follows in ~/.ssh/authorized_keys:

command="$HOME/rrsync /www/sws-websites/plv.mpi-sws.org/coqdoc/",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty

This restricts the key to only be able to run rsync for this particular directory.
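
On the project side, this could look roughly as follows (the job name and the projectname path are placeholders; DOC_KEY itself goes into a protected CI variable in the GitLab UI, not into the YAML file):

build-coq.8.15.1:
  <<: *template
  variables:
    OPAM_PINS: "coq version 8.15.1"
    # rsync target that the generated coqdoc is pushed to
    DOC_DIR: "coqdoc@center.mpi-sws.org:projectname"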

Backend

Some information about the servers that make this all work can be found here.