Act runner is a runner for Gitea based on the Gitea fork of act.
Docker Engine Community version is required for docker mode. To install Docker CE, follow the official install instructions.
Visit the download page and download the right version for your platform.
make build
make docker
Actions are disabled by default, so you need to add the following to the configuration file of your Gitea instance to enable it:
[actions]
ENABLED=true
./act_runner register
And you will be asked to input:
1. The Gitea instance URL, like http://192.168.8.8:3000/. You should use your Gitea instance ROOT_URL as the instance argument, and you should not use localhost or 127.0.0.1 as the instance IP.
2. The runner token, which you can get from http://192.168.8.8:3000/admin/actions/runners.

The process looks like:
INFO Registering runner, arch=amd64, os=darwin, version=0.1.5.
WARN Runner in user-mode.
INFO Enter the Gitea instance URL (for example, https://gitea.com/): http://192.168.8.8:3000/
INFO Enter the runner token: fe884e8027dc292970d4e0303fe82b14xxxxxxxx
INFO Enter the runner name (if set empty, use hostname: Test.local):
INFO Enter the runner labels, leave blank to use the default labels (comma-separated, for example, ubuntu-latest:docker://docker.gitea.com/runner-images:ubuntu-latest):
INFO Registering runner, name=Test.local, instance=http://192.168.8.8:3000/, labels=[ubuntu-latest:docker://docker.gitea.com/runner-images:ubuntu-latest ubuntu-22.04:docker://docker.gitea.com/runner-images:ubuntu-22.04 ubuntu-20.04:docker://docker.gitea.com/runner-images:ubuntu-20.04].
DEBU Successfully pinged the Gitea instance server
INFO Runner registered successfully.
You can also register with command line arguments.
./act_runner register --instance http://192.168.8.8:3000 --token <my_runner_token> --no-interactive
If the registration succeeds, the runner starts immediately. Next time, you can start the runner directly.
./act_runner daemon
docker run -e GITEA_INSTANCE_URL=https://your_gitea.com -e GITEA_RUNNER_REGISTRATION_TOKEN=<your_token> -v /var/run/docker.sock:/var/run/docker.sock --name my_runner gitea/act_runner:nightly
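The same container can be run as a Compose service. The following is a minimal sketch: the image, environment variables, and socket mount mirror the docker run command above, while the service name is arbitrary.

```yaml
services:
  runner:
    image: gitea/act_runner:nightly
    environment:
      GITEA_INSTANCE_URL: https://your_gitea.com
      GITEA_RUNNER_REGISTRATION_TOKEN: <your_token>
    volumes:
      # Required so the runner can start job containers on the host daemon.
      - /var/run/docker.sock:/var/run/docker.sock
```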
You can also configure the runner with a configuration file.
The configuration file is a YAML file; you can generate a sample configuration file with ./act_runner generate-config.
./act_runner generate-config > config.yaml
You can specify the configuration file path with the -c/--config argument.
./act_runner -c config.yaml register # register with config file
./act_runner -c config.yaml daemon # run with config file
You can read the latest version of the configuration file online at config.example.yaml.
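As an orientation, a generated configuration usually contains sections like the excerpt below. This is only an illustrative sketch with example values; treat config.example.yaml as the authoritative list of keys.

```yaml
log:
  level: info            # trace, debug, info, warn, error, fatal
runner:
  capacity: 1            # how many jobs run concurrently
  timeout: 3h            # maximum duration of a single job
  labels:
    - "ubuntu-latest:docker://docker.gitea.com/runner-images:ubuntu-latest"
container:
  network: ""            # network for job containers; empty uses the default
  privileged: false      # whether job containers run privileged
```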
Check out the examples directory for sample deployment types.
act_runner now includes a product-level CNB compatibility path for local exec and daemon payload negotiation. The CNB path is still evolving, but the current implementation is already usable for the core execution flow.
Use --workflow-dialect cnb to execute a CNB repository locally:
./act_runner exec \
--workflow-dialect cnb \
-C /path/to/repo \
-W .cnb.yml \
--event push
Use --image -self-hosted to run CNB jobs directly on the host instead of creating a job container:
./act_runner exec \
--workflow-dialect cnb \
-C /path/to/repo \
-W .cnb.yml \
--event push \
--image -self-hosted
Use --new-branch and --changed-file to exercise CNB conditional execution locally without exporting environment variables:
./act_runner exec \
--workflow-dialect cnb \
-C /path/to/repo \
-W .cnb.yml \
--event push \
--new-branch \
--changed-file internal/app/main.go
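To illustrate what those flags exercise, a CNB-style .cnb.yml stage can be guarded with ifNewBranch or ifModify. The pipeline layout below is a hypothetical sketch for illustration, not an authoritative schema; see the CNB grammar reference for the exact shape.

```yaml
main:
  push:
    - stages:
        - name: first push of a branch
          ifNewBranch: true          # skipped unless --new-branch (or CNB_IS_NEW_BRANCH) is set
          script: echo "new branch"
        - name: app code changed
          ifModify:                  # matched against --changed-file / derived change statistics
            - internal/app/**
          script: go test ./internal/app/...
```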
Daemon mode keeps CNB task payload compatibility behind runner.features.cnb_compat, which defaults to false:
runner:
  features:
    cnb_compat: true
Keep the flag disabled until the generated compatibility verification report shows goNoGo=true.
Supported CNB features include:

- .cnb.yml discovery and CNB dialect detection
- .cnb/tag_deploy.yml and .cnb/web_trigger.yml sidecar loading
- script stages (String and Array<String>)
- commands stages (String and Array<String>, with higher priority than script)
- ifNewBranch
- ifModify (best-effort)
- if shell guards
- env
- imports, including remote http/https sources
- include and !reference for CNB config reuse
- cnb:await / cnb:resolve
- cnb:resolve .options.data to job outputs
- exports
- services: [docker] via the existing Docker daemon socket mount
- -self-hosted host-mode execution
- variable expansion for imports, settingsFrom, and optionsFrom
- built-in env values such as CNB_BRANCH, CNB_EVENT, CNB_PIPELINE_KEY, CNB_PIPELINE_NAME, CNB_COMMIT_SHORT

Known limitations:

- image is not bridged yet
- ifModify depends on available change statistics; when the runner cannot derive changed files from local git history, daemon payload vars, or task event payloads, the check is intentionally ignored to match CNB's documented fallback behavior
- imports paths are still read from the repository workspace; the upstream CNB behavior of rewriting local paths to remote blob URLs is not bridged yet
- allow_slugs, allow_events, allow_branches, and allow_images are checked, but the full CNB control-plane model for public repos, same-source repos, actor permissions, and untrusted-event requirement checks is still not replicated inside the runner

For a fuller audit against the CNB grammar reference, see docs/cnb-compat/support-matrix.md.
The current release posture is GO-Guarded Rollout / NO-GO Default Enable. The runner can be merged and shipped with runner.features.cnb_compat=false by default, but default enablement still needs a later wave after more CNB parity work. See docs/cnb-compat/release-gate.md.
For YAML authoring, add the CNB schemas to VS Code or any editor that supports yaml.schemas:
{
  "yaml.schemas": {
    "https://docs.cnb.cool/conf-schema-zh.json": ".cnb.yml",
    "https://docs.cnb.cool/tag-deploy-schema-zh.json": "tag_deploy.yml",
    "https://docs.cnb.cool/web-trigger-schema-zh.json": "web_trigger.yml"
  }
}
If you keep the sidecars inside .cnb/, map the same schemas to .cnb/tag_deploy.yml and .cnb/web_trigger.yml in your editor settings.
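For example, the sidecar mapping described in the previous sentence looks like:

```json
{
  "yaml.schemas": {
    "https://docs.cnb.cool/tag-deploy-schema-zh.json": ".cnb/tag_deploy.yml",
    "https://docs.cnb.cool/web-trigger-schema-zh.json": ".cnb/web_trigger.yml"
  }
}
```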
For local exec, repository and commit-related CNB built-in env values are derived from the local git checkout on a best-effort basis. --new-branch and --changed-file are the preferred way to exercise ifNewBranch or ifModify; CNB_IS_NEW_BRANCH and CNB_CHANGED_FILES remain available as environment-variable fallbacks.
Remote imports can send an Authorization header. The runner checks CNB_IMPORTS_AUTHORIZATION_<HOST>_<PORT>, CNB_IMPORTS_AUTHORIZATION_<HOST>, CNB_IMPORTS_AUTHORIZATION, then Authorization, and each value supports $VAR / ${VAR} expansion.
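The lookup order above can be sketched as a small POSIX shell function. resolve_import_auth and its host-name mangling are illustrative assumptions, not act_runner code; the precedence and the $VAR / ${VAR} expansion match the description above.

```shell
# Sketch of the documented precedence for remote-import Authorization headers.
# Assumption: host names are uppercased with '.' and '-' mapped to '_'.
resolve_import_auth() {
  host_key=$(printf '%s' "$1" | tr 'a-z.-' 'A-Z__')   # e.g. gitea.com -> GITEA_COM
  port_key=$2
  for name in "CNB_IMPORTS_AUTHORIZATION_${host_key}_${port_key}" \
              "CNB_IMPORTS_AUTHORIZATION_${host_key}" \
              "CNB_IMPORTS_AUTHORIZATION" \
              "Authorization"; do
    eval "value=\${$name}"
    if [ -n "$value" ]; then
      # Each value supports $VAR / ${VAR} expansion; emulated here with eval.
      eval "printf '%s\n' \"$value\""
      return 0
    fi
  done
  return 1   # no header configured for this host/port
}
```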
include, imports, settingsFrom, and optionsFrom now share the same file-reference resolver. Local files, inline daemon bundle files, and remote http/https sources all flow through the same variable expansion and declared allow_* checks. !reference is resolved after all include files are merged.
Check out examples/cnb for a minimal CNB-compatible layout, including:
- .cnb.yml
- .cnb/tag_deploy.yml
- .cnb/web_trigger.yml

Use the dedicated verification suite to generate the Wave 4 evidence bundle:
make verify-cnb-compat
The suite writes:
- evidence/act-runner-cnb-compat-engine/verification-report.json
- evidence/act-runner-cnb-compat-engine/verification-report.md
- evidence/act-runner-cnb-compat-engine/legacy-regression-summary.md

These files are generated local artifacts and are intentionally not committed. The report is schema-validated against docs/cnb-compat/verification-report.schema.json before it is written, and now also carries releaseRecommendation, guardedRolloutDecision, and defaultEnableDecision. Use that release gate together with docs/cnb-compat/release-gate.md before rollout planning, then make the final enablement decision through your own staged rollout and Hypercare process.