package runner

import (
	"archive/tar"
	"bufio"
	"bytes"
	"context"
	"crypto/sha256"
	_ "embed"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"maps"
	"os"
	"path"
	"path/filepath"
	"regexp"
	"runtime"
	"strings"
	"text/template"
	"time"

	"code.forgejo.org/forgejo/runner/v11/act/common"
	"code.forgejo.org/forgejo/runner/v11/act/container"
	"code.forgejo.org/forgejo/runner/v11/act/exprparser"
	"code.forgejo.org/forgejo/runner/v11/act/model"

	"github.com/docker/docker/api/types/network"
	"github.com/docker/go-connections/nat"
	"github.com/opencontainers/selinux/go-selinux"
)
// RunContext contains info about current job
type RunContext struct {
	Name              string
	Config            *Config
	Matrix            map[string]any
	Run               *model.Run
	EventJSON         string
	Env               map[string]string
	GlobalEnv         map[string]string // to pass env changes of GITHUB_ENV and set-env correctly, due to dirty Env field
	ExtraPath         []string
	CurrentStep       string
	StepResults       map[string]*model.StepResult
	IntraActionState  map[string]map[string]string
	ExprEval          ExpressionEvaluator
	JobContainer      container.ExecutionsEnvironment
	ServiceContainers []container.ExecutionsEnvironment
	OutputMappings    map[MappableOutput]MappableOutput
	JobName           string
	ActionPath        string
	Parent            *RunContext
	Masks             []string

	cleanUpJobContainer common.Executor
	caller              *caller // job calling this RunContext (reusable workflows)

	randomName string

	networkName    string
	networkCreated bool
}
func (rc *RunContext) AddMask(mask string) {
	rc.Masks = append(rc.Masks, mask)
}

type MappableOutput struct {
	StepID     string
	OutputName string
}
func (rc *RunContext) String() string {
	name := fmt.Sprintf("%s/%s", rc.Run.Workflow.Name, rc.Name)
	if rc.caller != nil {
		// prefix the reusable workflow with the caller job
		// this is required to create unique container names
		name = fmt.Sprintf("%s/%s", rc.caller.runContext.Name, name)
	}
	return name
}
// GetEnv returns the env for the context
func (rc *RunContext) GetEnv() map[string]string {
	if rc.Env == nil {
		rc.Env = map[string]string{}
		if rc.Run != nil && rc.Run.Workflow != nil && rc.Config != nil {
			job := rc.Run.Job()
			if job != nil {
				rc.Env = mergeMaps(rc.Run.Workflow.Env, job.Environment(), rc.Config.Env)
			}
		}
	}
	rc.Env["ACT"] = "true"
	if !rc.Config.NoSkipCheckout {
		rc.Env["ACT_SKIP_CHECKOUT"] = "true"
	}
	return rc.Env
}
func (rc *RunContext) jobContainerName() string {
	return createSimpleContainerName(rc.Config.ContainerNamePrefix, "WORKFLOW-"+common.Sha256(rc.String()), "JOB-"+rc.Name)
}
func getDockerDaemonSocketMountPath(daemonPath string) string {
	if protoIndex := strings.Index(daemonPath, "://"); protoIndex != -1 {
		scheme := daemonPath[:protoIndex]
		if strings.EqualFold(scheme, "npipe") {
			// linux container mount on windows, use the default socket path of the VM / wsl2
			return "/var/run/docker.sock"
		} else if strings.EqualFold(scheme, "unix") {
			return daemonPath[protoIndex+3:]
		} else if strings.IndexFunc(scheme, func(r rune) bool {
			return (r < 'a' || r > 'z') && (r < 'A' || r > 'Z')
		}) == -1 {
			// unknown protocol use default
			return "/var/run/docker.sock"
		}
	}
	return daemonPath
}
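A minimal standalone sketch of the scheme handling above — the hypothetical `mountPath` helper mirrors `getDockerDaemonSocketMountPath` outside the package, it is not the runner's own API:

```go
package main

import (
	"fmt"
	"strings"
)

// mountPath mirrors getDockerDaemonSocketMountPath: npipe and other
// alphabetic schemes (tcp, ssh, ...) fall back to the default socket,
// unix:// is stripped to a plain path, and scheme-less values pass
// through unchanged.
func mountPath(daemonPath string) string {
	if protoIndex := strings.Index(daemonPath, "://"); protoIndex != -1 {
		scheme := daemonPath[:protoIndex]
		switch {
		case strings.EqualFold(scheme, "npipe"):
			return "/var/run/docker.sock"
		case strings.EqualFold(scheme, "unix"):
			return daemonPath[protoIndex+3:]
		case strings.IndexFunc(scheme, func(r rune) bool {
			return (r < 'a' || r > 'z') && (r < 'A' || r > 'Z')
		}) == -1:
			return "/var/run/docker.sock"
		}
	}
	return daemonPath
}

func main() {
	fmt.Println(mountPath("unix:///run/user/1000/docker.sock")) // /run/user/1000/docker.sock
	fmt.Println(mountPath("npipe:////./pipe/docker_engine"))    // /var/run/docker.sock
	fmt.Println(mountPath("/var/run/docker.sock"))              // /var/run/docker.sock
}
```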
func (rc *RunContext) getInternalVolumeNames(ctx context.Context) []string {
	return []string{
		rc.getInternalVolumeWorkdir(ctx),
		rc.getInternalVolumeEnv(ctx),
	}
}

func (rc *RunContext) getInternalVolumeWorkdir(ctx context.Context) string {
	rc.ensureRandomName(ctx)
	return rc.randomName
}

func (rc *RunContext) getInternalVolumeEnv(ctx context.Context) string {
	rc.ensureRandomName(ctx)
	return fmt.Sprintf("%s-env", rc.randomName)
}
// GetBindsAndMounts returns the binds and mounts for the container, resolving paths as appropriate
func (rc *RunContext) GetBindsAndMounts(ctx context.Context) ([]string, map[string]string, []string) {
	binds := []string{}

	containerDaemonSocket := rc.Config.GetContainerDaemonSocket()
	if containerDaemonSocket != "-" {
		daemonPath := getDockerDaemonSocketMountPath(containerDaemonSocket)
		binds = append(binds, fmt.Sprintf("%s:%s", daemonPath, "/var/run/docker.sock"))
	}

	ext := container.LinuxContainerEnvironmentExtensions{}

	mounts := map[string]string{
		rc.getInternalVolumeEnv(ctx): ext.GetActPath(),
	}

	if job := rc.Run.Job(); job != nil {
		if container := job.Container(); container != nil {
			for _, v := range container.Volumes {
				if !strings.Contains(v, ":") || filepath.IsAbs(v) {
					// Bind anonymous volume or host file.
					binds = append(binds, v)
				} else {
					// Mount existing volume.
					paths := strings.SplitN(v, ":", 2)
					mounts[paths[0]] = paths[1]
				}
			}
		}
	}

	if rc.Config.BindWorkdir {
		bindModifiers := ""
		if runtime.GOOS == "darwin" {
			bindModifiers = ":delegated"
		}
		if selinux.GetEnabled() {
			bindModifiers = ":z"
		}
		binds = append(binds, fmt.Sprintf("%s:%s%s", rc.Config.Workdir, ext.ToContainerPath(rc.Config.Workdir), bindModifiers))
	} else {
		mounts[rc.getInternalVolumeWorkdir(ctx)] = ext.ToContainerPath(rc.Config.Workdir)
	}

	validVolumes := append(rc.getInternalVolumeNames(ctx), getDockerDaemonSocketMountPath(containerDaemonSocket))
	validVolumes = append(validVolumes, rc.Config.ValidVolumes...)
	return binds, mounts, validVolumes
}
//go:embed lxc-helpers-lib.sh
var lxcHelpersLib string

//go:embed lxc-helpers.sh
var lxcHelpers string

var startTemplate = template.Must(template.New("start").Parse(`#!/bin/bash -e

LXC_CONTAINER_CONFIG="{{.Config}}"
LXC_CONTAINER_RELEASE="{{.Release}}"

source $(dirname $0)/lxc-helpers-lib.sh

function template_act() {
	echo $(lxc_template_release)-act
}

function install_nodejs() {
	local name="$1"

	local script=/usr/local/bin/lxc-helpers-install-node.sh

	cat > $(lxc_root $name)/$script <<'EOF'
#!/bin/sh -e
# https://github.com/nodesource/distributions#debinstall
export DEBIAN_FRONTEND=noninteractive
apt-get install -qq -y ca-certificates curl gnupg git
mkdir -p /etc/apt/keyrings
curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg
NODE_MAJOR=20
echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" | tee /etc/apt/sources.list.d/nodesource.list
apt-get update -qq
apt-get install -qq -y nodejs
EOF
	lxc_container_run_script $name $script
}

function build_template_act() {
	local name="$(template_act)"

	if lxc_exists_and_apt_not_old $name ; then
		return 0
	fi

	lxc_build_template $(lxc_template_release) $name
	lxc_container_start $name
	install_nodejs $name
	lxc_container_stop $name
}

lxc_prepare_environment
LXC_CONTAINER_CONFIG="" build_template_act
lxc_build_template $(template_act) "{{.Name}}"
lxc_container_mount "{{.Name}}" "{{ .Root }}"
lxc_container_start "{{.Name}}"
`))

var stopTemplate = template.Must(template.New("stop").Parse(`#!/bin/bash
source $(dirname $0)/lxc-helpers-lib.sh
lxc_container_destroy "{{.Name}}"
lxc_maybe_sudo
$LXC_SUDO rm -fr "{{ .Root }}"
`))
func (rc *RunContext) stopHostEnvironment(ctx context.Context) error {
	logger := common.Logger(ctx)
	logger.Debugf("stopHostEnvironment")

	if !rc.IsLXCHostEnv(ctx) {
		return nil
	}

	var stopScript bytes.Buffer
	if err := stopTemplate.Execute(&stopScript, struct {
		Name string
		Root string
	}{
		Name: rc.JobContainer.GetName(),
		Root: rc.JobContainer.GetRoot(),
	}); err != nil {
		return err
	}

	return common.NewPipelineExecutor(
		rc.JobContainer.Copy(rc.JobContainer.GetActPath()+"/", &container.FileEntry{
			Name: "workflow/stop-lxc.sh",
			Mode: 0o755,
			Body: stopScript.String(),
		}),
		rc.JobContainer.Exec([]string{rc.JobContainer.GetActPath() + "/workflow/stop-lxc.sh"}, map[string]string{}, "root", "/tmp"),
	)(ctx)
}
func (rc *RunContext) startHostEnvironment() common.Executor {
	return func(ctx context.Context) error {
		logger := common.Logger(ctx)
		rawLogger := logger.WithField("raw_output", true)
		logWriter := common.NewLineWriter(rc.commandHandler(ctx), func(s string) bool {
			if rc.Config.LogOutput {
				rawLogger.Infof("%s", s)
			} else {
				rawLogger.Debugf("%s", s)
			}
			return true
		})
		cacheDir := rc.ActionCacheDir()
		randName := common.MustRandName(8)
		miscpath := filepath.Join(cacheDir, randName)
		actPath := filepath.Join(miscpath, "act")
		if err := os.MkdirAll(actPath, 0o777); err != nil {
			return err
		}
		path := filepath.Join(miscpath, "hostexecutor")
		if err := os.MkdirAll(path, 0o777); err != nil {
			return err
		}
		runnerTmp := filepath.Join(miscpath, "tmp")
		if err := os.MkdirAll(runnerTmp, 0o777); err != nil {
			return err
		}
		rc.JobContainer = &container.HostEnvironment{
			Name:      randName,
			Root:      miscpath,
			Path:      path,
			TmpDir:    runnerTmp,
			ToolCache: rc.getToolCache(ctx),
			Workdir:   rc.Config.Workdir,
			ActPath:   actPath,
			StdOut:    logWriter,
			LXC:       rc.IsLXCHostEnv(ctx),
		}
		rc.cleanUpJobContainer = func(ctx context.Context) error {
			if err := rc.stopHostEnvironment(ctx); err != nil {
				return err
			}
			if rc.JobContainer == nil {
				return nil
			}
			return rc.JobContainer.Remove()(ctx)
		}
		for k, v := range rc.JobContainer.GetRunnerContext(ctx) {
			if v, ok := v.(string); ok {
				rc.Env[fmt.Sprintf("RUNNER_%s", strings.ToUpper(k))] = v
			}
		}
		for _, env := range os.Environ() {
			if k, v, ok := strings.Cut(env, "="); ok {
				// don't override
				if _, ok := rc.Env[k]; !ok {
					rc.Env[k] = v
				}
			}
		}

		executors := make([]common.Executor, 0, 10)
		isLXCHost, LXCTemplate, LXCRelease, LXCConfig := rc.GetLXCInfo(ctx)
		if isLXCHost {
			var startScript bytes.Buffer
			if err := startTemplate.Execute(&startScript, struct {
				Name     string
				Template string
				Release  string
				Config   string
				Repo     string
				Root     string
				TmpDir   string
				Script   string
			}{
				Name:     rc.JobContainer.GetName(),
				Template: LXCTemplate,
				Release:  LXCRelease,
				Config:   LXCConfig,
				Repo:     "", // step.Environment["CI_REPO"],
				Root:     rc.JobContainer.GetRoot(),
				TmpDir:   runnerTmp,
				Script:   "", // "commands-" + step.Name,
			}); err != nil {
				return err
			}
			executors = append(executors,
				rc.JobContainer.Copy(rc.JobContainer.GetActPath()+"/", &container.FileEntry{
					Name: "workflow/lxc-helpers-lib.sh",
					Mode: 0o755,
					Body: lxcHelpersLib,
				}),
				rc.JobContainer.Copy(rc.JobContainer.GetActPath()+"/", &container.FileEntry{
					Name: "workflow/lxc-helpers.sh",
					Mode: 0o755,
					Body: lxcHelpers,
				}),
				rc.JobContainer.Copy(rc.JobContainer.GetActPath()+"/", &container.FileEntry{
					Name: "workflow/start-lxc.sh",
					Mode: 0o755,
					Body: startScript.String(),
				}),
				rc.JobContainer.Exec([]string{rc.JobContainer.GetActPath() + "/workflow/start-lxc.sh"}, map[string]string{}, "root", "/tmp"),
			)
		}
		executors = append(executors, rc.JobContainer.Copy(rc.JobContainer.GetActPath()+"/", &container.FileEntry{
			Name: "workflow/event.json",
			Mode: 0o644,
			Body: rc.EventJSON,
		}, &container.FileEntry{
			Name: "workflow/envs.txt",
			Mode: 0o666,
			Body: "",
		}))
		return common.NewPipelineExecutor(executors...)(ctx)
	}
}
func (rc *RunContext) ensureRandomName(ctx context.Context) {
	if rc.randomName == "" {
		logger := common.Logger(ctx)
		if rc.Parent != nil {
			// composite actions inherit their run context from the parent job
			rootRunContext := rc
			for rootRunContext.Parent != nil {
				rootRunContext = rootRunContext.Parent
			}
			rootRunContext.ensureRandomName(ctx)
			rc.randomName = rootRunContext.randomName
			logger.Debugf("RunContext %s inherited random name %s from its parent", rc.Name, rc.randomName)
		} else {
			rc.randomName = common.MustRandName(16)
			logger.Debugf("RunContext %s is assigned random name %s", rc.Name, rc.randomName)
		}
	}
}
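The parent walk above guarantees that every composite action in a job shares one name generated at the root. A minimal sketch of that resolution, using a hypothetical `node` type rather than the runner's `RunContext`:

```go
package main

import "fmt"

// node mimics the RunContext parent chain: the name is generated once
// at the root of the chain and every descendant inherits it.
type node struct {
	parent *node
	name   string
}

func (n *node) ensureName(gen func() string) string {
	if n.name != "" {
		return n.name
	}
	if n.parent != nil {
		// walk to the root, resolve there, then inherit
		n.name = n.parent.ensureName(gen)
		return n.name
	}
	n.name = gen()
	return n.name
}

func main() {
	root := &node{}
	child := &node{parent: root}
	grandchild := &node{parent: child}
	gen := func() string { return "abc123" }
	fmt.Println(grandchild.ensureName(gen)) // abc123
	fmt.Println(root.name)                  // abc123 — generated once, at the root
}
```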
func (rc *RunContext) getNetworkCreated(ctx context.Context) bool {
	rc.ensureNetworkName(ctx)
	return rc.networkCreated
}

func (rc *RunContext) getNetworkName(ctx context.Context) string {
	rc.ensureNetworkName(ctx)
	return rc.networkName
}

func (rc *RunContext) ensureNetworkName(ctx context.Context) {
	if rc.networkName == "" {
		rc.ensureRandomName(ctx)
		rc.networkName = string(rc.Config.ContainerNetworkMode)
		if len(rc.Run.Job().Services) > 0 || rc.networkName == "" {
			rc.networkName = fmt.Sprintf("WORKFLOW-%s", rc.randomName)
			rc.networkCreated = true
		}
	}
}
var sanitizeNetworkAliasRegex = regexp.MustCompile("[^a-z0-9-]")

func sanitizeNetworkAlias(ctx context.Context, original string) string {
	sanitized := sanitizeNetworkAliasRegex.ReplaceAllString(strings.ToLower(original), "_")
	if sanitized != original {
		logger := common.Logger(ctx)
		logger.Infof("The network alias is %s (sanitized version of %s)", sanitized, original)
	}
	return sanitized
}
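The behavior of the sanitizer above as a standalone sketch — same regex, but a hypothetical `sanitizeAlias` helper without the context and logging plumbing:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var aliasRegex = regexp.MustCompile("[^a-z0-9-]")

// sanitizeAlias lowercases the input and replaces every character
// outside [a-z0-9-] with an underscore, as sanitizeNetworkAlias does,
// so the result is a valid Docker network alias.
func sanitizeAlias(original string) string {
	return aliasRegex.ReplaceAllString(strings.ToLower(original), "_")
}

func main() {
	fmt.Println(sanitizeAlias("My_Service"))  // my_service
	fmt.Println(sanitizeAlias("db.primary"))  // db_primary
	fmt.Println(sanitizeAlias("redis-cache")) // redis-cache (already valid)
}
```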
func (rc *RunContext) prepareJobContainer(ctx context.Context) error {
	logger := common.Logger(ctx)
	image := rc.platformImage(ctx)
	rawLogger := logger.WithField("raw_output", true)
	logWriter := common.NewLineWriter(rc.commandHandler(ctx), func(s string) bool {
		if rc.Config.LogOutput {
			rawLogger.Infof("%s", s)
		} else {
			rawLogger.Debugf("%s", s)
		}
		return true
	})

	username, password, err := rc.handleCredentials(ctx)
	if err != nil {
		return fmt.Errorf("failed to handle credentials: %s", err)
	}

	logger.Infof("\U0001f680 Start image=%s", image)
	name := rc.jobContainerName()
	// For gitea, to support --volumes-from <container_name_or_id> in options.
	// We need to set the container name to the environment variable.
	rc.Env["JOB_CONTAINER_NAME"] = name

	envList := make([]string, 0)
	envList = append(envList, fmt.Sprintf("%s=%s", "RUNNER_TOOL_CACHE", rc.getToolCache(ctx)))
	envList = append(envList, fmt.Sprintf("%s=%s", "RUNNER_OS", "Linux"))
	envList = append(envList, fmt.Sprintf("%s=%s", "RUNNER_ARCH", container.RunnerArch(ctx)))
	envList = append(envList, fmt.Sprintf("%s=%s", "RUNNER_TEMP", "/tmp"))
	envList = append(envList, fmt.Sprintf("%s=%s", "LANG", "C.UTF-8")) // use the same locale as GitHub Actions
	ext := container.LinuxContainerEnvironmentExtensions{}

	binds, mounts, validVolumes := rc.GetBindsAndMounts(ctx)
	// add service containers
	for serviceID, spec := range rc.Run.Job().Services {
		// interpolate env
		interpolatedEnvs := make(map[string]string, len(spec.Env))
		for k, v := range spec.Env {
			interpolatedEnvs[k] = rc.ExprEval.Interpolate(ctx, v)
		}
		envs := make([]string, 0, len(interpolatedEnvs))
		for k, v := range interpolatedEnvs {
			envs = append(envs, fmt.Sprintf("%s=%s", k, v))
		}
		interpolatedCmd := make([]string, 0, len(spec.Cmd))
		for _, v := range spec.Cmd {
			interpolatedCmd = append(interpolatedCmd, rc.ExprEval.Interpolate(ctx, v))
		}
		username, password, err := rc.handleServiceCredentials(ctx, spec.Credentials)
		if err != nil {
			return fmt.Errorf("failed to handle service %s credentials: %w", serviceID, err)
		}
		interpolatedVolumes := make([]string, 0, len(spec.Volumes))
		for _, volume := range spec.Volumes {
			interpolatedVolumes = append(interpolatedVolumes, rc.ExprEval.Interpolate(ctx, volume))
		}
		serviceBinds, serviceMounts := rc.GetServiceBindsAndMounts(interpolatedVolumes)
		interpolatedPorts := make([]string, 0, len(spec.Ports))
		for _, port := range spec.Ports {
			interpolatedPorts = append(interpolatedPorts, rc.ExprEval.Interpolate(ctx, port))
		}
		exposedPorts, portBindings, err := nat.ParsePortSpecs(interpolatedPorts)
		if err != nil {
			return fmt.Errorf("failed to parse service %s ports: %w", serviceID, err)
		}
		serviceContainerName := createContainerName(rc.jobContainerName(), serviceID)
		c := container.NewContainer(&container.NewContainerInput{
			Name:           serviceContainerName,
			Image:          rc.ExprEval.Interpolate(ctx, spec.Image),
			Username:       username,
			Password:       password,
			Cmd:            interpolatedCmd,
			Env:            envs,
			ToolCache:      rc.getToolCache(ctx),
			Mounts:         serviceMounts,
			Binds:          serviceBinds,
			Stdout:         logWriter,
			Stderr:         logWriter,
			Privileged:     rc.Config.Privileged,
			UsernsMode:     rc.Config.UsernsMode,
			Platform:       rc.Config.ContainerArchitecture,
			NetworkMode:    rc.getNetworkName(ctx),
			NetworkAliases: []string{sanitizeNetworkAlias(ctx, serviceID)},
			ExposedPorts:   exposedPorts,
			PortBindings:   portBindings,
			ValidVolumes:   rc.Config.ValidVolumes,
			JobOptions:     rc.ExprEval.Interpolate(ctx, spec.Options),
			ConfigOptions:  rc.Config.ContainerOptions,
		})
		rc.ServiceContainers = append(rc.ServiceContainers, c)
	}
	rc.cleanUpJobContainer = func(ctx context.Context) error {
		// Reinitialize the logger from ctx since cleanUpJobContainer is called after the job is
		// complete; using prepareJobContainer's logger could cause logs to keep appending to the
		// finished job.
		logger := common.Logger(ctx)

		reuseJobContainer := func(ctx context.Context) bool {
			return rc.Config.ReuseContainers
		}

		if rc.JobContainer != nil {
			return rc.JobContainer.Remove().IfNot(reuseJobContainer).
				Then(container.NewDockerVolumesRemoveExecutor(rc.getInternalVolumeNames(ctx))).IfNot(reuseJobContainer).
				Then(func(ctx context.Context) error {
					if len(rc.ServiceContainers) > 0 {
						logger.Infof("Cleaning up services for job %s", rc.JobName)
						if err := rc.stopServiceContainers()(ctx); err != nil {
							logger.Errorf("Error while cleaning services: %v", err)
						}
					}
					if rc.getNetworkCreated(ctx) {
						logger.Infof("Cleaning up network for job %s, and network name is: %s", rc.JobName, rc.getNetworkName(ctx))
						if err := container.NewDockerNetworkRemoveExecutor(rc.getNetworkName(ctx))(ctx); err != nil {
							logger.Errorf("Error while cleaning network: %v", err)
						}
					}
					return nil
				})(ctx)
		}
		return nil
	}
	rc.JobContainer = container.NewContainer(&container.NewContainerInput{
		Cmd:            nil,
		Entrypoint:     []string{"tail", "-f", "/dev/null"},
		WorkingDir:     ext.ToContainerPath(rc.Config.Workdir),
		Image:          image,
		Username:       username,
		Password:       password,
		Name:           name,
		Env:            envList,
		ToolCache:      rc.getToolCache(ctx),
		Mounts:         mounts,
		NetworkMode:    rc.getNetworkName(ctx),
		NetworkAliases: []string{sanitizeNetworkAlias(ctx, rc.Name)},
		Binds:          binds,
		Stdout:         logWriter,
		Stderr:         logWriter,
		Privileged:     rc.Config.Privileged,
		UsernsMode:     rc.Config.UsernsMode,
		Platform:       rc.Config.ContainerArchitecture,
		ValidVolumes:   validVolumes,
		JobOptions:     rc.options(ctx),
		ConfigOptions:  rc.Config.ContainerOptions,
	})
	if rc.JobContainer == nil {
		return errors.New("failed to create job container")
	}
	return nil
}
func (rc *RunContext) startJobContainer() common.Executor {
	return func(ctx context.Context) error {
		if err := rc.prepareJobContainer(ctx); err != nil {
			return err
		}

		networkConfig := network.CreateOptions{
			Driver:     "bridge",
			Scope:      "local",
			EnableIPv6: &rc.Config.ContainerNetworkEnableIPv6,
		}

		return common.NewPipelineExecutor(
			rc.pullServicesImages(rc.Config.ForcePull),
			rc.JobContainer.Pull(rc.Config.ForcePull),
			rc.stopJobContainer(),
			// If ContainerNetworkMode is an empty string, create a new network for the containers.
			container.NewDockerNetworkCreateExecutor(rc.getNetworkName(ctx), &networkConfig).IfBool(!rc.IsHostEnv(ctx) && rc.Config.ContainerNetworkMode == ""),
			rc.startServiceContainers(rc.getNetworkName(ctx)),
			rc.JobContainer.Create(rc.Config.ContainerCapAdd, rc.Config.ContainerCapDrop),
			rc.JobContainer.Start(false),
			rc.JobContainer.Copy(rc.JobContainer.GetActPath()+"/", &container.FileEntry{
				Name: "workflow/event.json",
				Mode: 0o644,
				Body: rc.EventJSON,
			}, &container.FileEntry{
				Name: "workflow/envs.txt",
				Mode: 0o666,
				Body: "",
			}),
			rc.waitForServiceContainers(),
		)(ctx)
	}
}
func (rc *RunContext) sh(ctx context.Context, script string) (stdout, stderr string, err error) {
	timed, cancel := context.WithTimeout(ctx, time.Minute)
	defer cancel()
	hout := &bytes.Buffer{}
	herr := &bytes.Buffer{}
	env := map[string]string{}
	maps.Copy(env, rc.Env)

	base := common.MustRandName(8)
	name := base + ".sh"
	oldStdout, oldStderr := rc.JobContainer.ReplaceLogWriter(hout, herr)
	err = rc.JobContainer.Copy(rc.JobContainer.GetActPath(), &container.FileEntry{
		Name: name,
		Mode: 0o644,
		Body: script,
	}).
		Then(rc.execJobContainer([]string{"sh", path.Join(rc.JobContainer.GetActPath(), name)},
			env, "", "")).
		Finally(func(context.Context) error {
			rc.JobContainer.ReplaceLogWriter(oldStdout, oldStderr)
			return nil
		})(timed)
	if err != nil {
		return "", "", err
	}
	stdout = hout.String()
	stderr = herr.String()
	return stdout, stderr, nil
}
func (rc *RunContext) execJobContainer(cmd []string, env map[string]string, user, workdir string) common.Executor {
	return func(ctx context.Context) error {
		return rc.JobContainer.Exec(cmd, env, user, workdir)(ctx)
	}
}
func (rc *RunContext) ApplyExtraPath(ctx context.Context, env *map[string]string) {
	if len(rc.ExtraPath) > 0 {
		path := rc.JobContainer.GetPathVariableName()
		if rc.JobContainer.IsEnvironmentCaseInsensitive() {
			// On Windows systems, Path and PATH could both be in the map
			for k := range *env {
				if strings.EqualFold(path, k) {
					path = k
					break
				}
			}
		}
		if (*env)[path] == "" {
			cenv := map[string]string{}
			var cpath string
			if err := rc.JobContainer.UpdateFromImageEnv(&cenv)(ctx); err == nil {
				if p, ok := cenv[path]; ok {
					cpath = p
				}
			}
			if len(cpath) == 0 {
				cpath = rc.JobContainer.DefaultPathVariable()
			}
			(*env)[path] = cpath
		}
		(*env)[path] = rc.JobContainer.JoinPathVariable(append(rc.ExtraPath, (*env)[path])...)
	}
}
func (rc *RunContext) UpdateExtraPath(ctx context.Context, githubEnvPath string) error {
	if common.Dryrun(ctx) {
		return nil
	}
	pathTar, err := rc.JobContainer.GetContainerArchive(ctx, githubEnvPath)
	if err != nil {
		return err
	}
	defer pathTar.Close()
	reader := tar.NewReader(pathTar)
	_, err = reader.Next()
	if err != nil && err != io.EOF {
		return err
	}
	s := bufio.NewScanner(reader)
	for s.Scan() {
		line := s.Text()
		if len(line) > 0 {
			rc.addPath(ctx, line)
		}
	}
	return nil
}
// stopJobContainer removes the job container (if it exists) and its volume (if it exists)
func (rc *RunContext) stopJobContainer() common.Executor {
	return func(ctx context.Context) error {
		if rc.cleanUpJobContainer != nil {
			return rc.cleanUpJobContainer(ctx)
		}
		return nil
	}
}
// pullServicesImages pulls the images of all service containers in parallel.
func (rc *RunContext) pullServicesImages(forcePull bool) common.Executor {
	return func(ctx context.Context) error {
		execs := []common.Executor{}
		for _, c := range rc.ServiceContainers {
			execs = append(execs, c.Pull(forcePull))
		}
		return common.NewParallelExecutor(len(execs), execs...)(ctx)
	}
}
// startServiceContainers pulls, creates, and starts all service containers in parallel.
func (rc *RunContext) startServiceContainers(_ string) common.Executor {
	return func(ctx context.Context) error {
		execs := []common.Executor{}
		for _, c := range rc.ServiceContainers {
			execs = append(execs, common.NewPipelineExecutor(
				c.Pull(false),
				c.Create(rc.Config.ContainerCapAdd, rc.Config.ContainerCapDrop),
				c.Start(false),
			))
		}
		return common.NewParallelExecutor(len(execs), execs...)(ctx)
	}
}
// waitForServiceContainer polls the service container's health status until it
// reports healthy, the context is cancelled, or the health check errors.
func waitForServiceContainer(ctx context.Context, c container.ExecutionsEnvironment) error {
	for {
		wait, err := c.IsHealthy(ctx)
		if err != nil {
			return err
		}
		if wait == time.Duration(0) {
			return nil
		}
		select {
		case <-ctx.Done():
			return nil
		case <-time.After(wait):
		}
	}
}

func (rc *RunContext) waitForServiceContainers() common.Executor {
	return func(ctx context.Context) error {
		execs := []common.Executor{}
		for _, c := range rc.ServiceContainers {
			execs = append(execs, func(ctx context.Context) error {
				return waitForServiceContainer(ctx, c)
			})
		}
		return common.NewParallelExecutor(len(execs), execs...)(ctx)
	}
}
func (rc *RunContext) stopServiceContainers() common.Executor {
	return func(ctx context.Context) error {
		execs := []common.Executor{}
		for _, c := range rc.ServiceContainers {
			execs = append(execs, c.Remove().Finally(c.Close()))
		}
		return common.NewParallelExecutor(len(execs), execs...)(ctx)
	}
}
// Prepare the mounts and binds for the worker

// ActionCacheDir is for rc
func (rc *RunContext) ActionCacheDir() string {
	if rc.Config.ActionCacheDir != "" {
		return rc.Config.ActionCacheDir
	}
	var xdgCache string
	var ok bool
	if xdgCache, ok = os.LookupEnv("XDG_CACHE_HOME"); !ok || xdgCache == "" {
		if home, err := os.UserHomeDir(); err == nil {
			xdgCache = filepath.Join(home, ".cache")
		} else if xdgCache, err = filepath.Abs("."); err != nil {
			// It's almost impossible to get here, so the temp dir is a good fallback
			xdgCache = os.TempDir()
		}
	}
	return filepath.Join(xdgCache, "act")
}
// Interpolate outputs after a job is done
func (rc *RunContext) interpolateOutputs() common.Executor {
	return func(ctx context.Context) error {
		ee := rc.NewExpressionEvaluator(ctx)
		for k, v := range rc.Run.Job().Outputs {
			interpolated := ee.Interpolate(ctx, v)
			if v != interpolated {
				rc.Run.Job().Outputs[k] = interpolated
			}
		}
		return nil
	}
}
func (rc *RunContext) getToolCache(ctx context.Context) string {
	if value, ok := rc.Config.Env["RUNNER_TOOL_CACHE"]; ok {
		return value
	}
	if rc.IsHostEnv(ctx) {
		return filepath.Join(rc.ActionCacheDir(), "tool_cache")
	}
	return "/opt/hostedtoolcache"
}
func (rc *RunContext) startContainer() common.Executor {
	return func(ctx context.Context) error {
		if rc.IsHostEnv(ctx) {
			return rc.startHostEnvironment()(ctx)
		}
		return rc.startJobContainer()(ctx)
	}
}
func (rc *RunContext) IsBareHostEnv(ctx context.Context) bool {
	platform := rc.runsOnImage(ctx)
	image := rc.containerImage(ctx)
	return image == "" && strings.EqualFold(platform, "-self-hosted")
}
const lxcPrefix = "lxc:"

func (rc *RunContext) IsLXCHostEnv(ctx context.Context) bool {
	platform := rc.runsOnImage(ctx)
	return strings.HasPrefix(platform, lxcPrefix)
}

// GetLXCInfo parses a platform string of the form "lxc:template[:release[:config]]".
func (rc *RunContext) GetLXCInfo(ctx context.Context) (isLXC bool, template, release, config string) {
	platform := rc.runsOnImage(ctx)
	if !strings.HasPrefix(platform, lxcPrefix) {
		return isLXC, template, release, config
	}
	isLXC = true
	s := strings.SplitN(strings.TrimPrefix(platform, lxcPrefix), ":", 3)
	template = s[0]
	if len(s) > 1 {
		release = s[1]
	}
	if len(s) > 2 {
		config = s[2]
	}
	return isLXC, template, release, config
}

func (rc *RunContext) IsHostEnv(ctx context.Context) bool {
	return rc.IsBareHostEnv(ctx) || rc.IsLXCHostEnv(ctx)
}
func (rc *RunContext) stopContainer() common.Executor {
	return func(ctx context.Context) error {
		return rc.stopJobContainer()(ctx)
	}
}
func (rc *RunContext) closeContainer() common.Executor {
	return func(ctx context.Context) error {
		if rc.JobContainer != nil {
			return rc.JobContainer.Close()(ctx)
		}
		return nil
	}
}
func (rc *RunContext) matrix() map[string]any {
	return rc.Matrix
}

func (rc *RunContext) result(result string) {
	rc.Run.Job().Result = result
}

func (rc *RunContext) steps() []*model.Step {
	return rc.Run.Job().Steps
}
// Executor returns a pipeline executor for all the steps in the job
func (rc *RunContext) Executor() (common.Executor, error) {
	var executor common.Executor
	jobType, err := rc.Run.Job().Type()

	switch jobType {
	case model.JobTypeDefault:
		executor = newJobExecutor(rc, &stepFactoryImpl{}, rc)
	case model.JobTypeReusableWorkflowLocal:
		executor = newLocalReusableWorkflowExecutor(rc)
	case model.JobTypeReusableWorkflowRemote:
		executor = newRemoteReusableWorkflowExecutor(rc)
	case model.JobTypeInvalid:
		return nil, err
	}

	return func(ctx context.Context) error {
		res, err := rc.isEnabled(ctx)
		if err != nil {
			return err
		}
		if res {
			timeoutctx, cancelTimeOut := evaluateTimeout(ctx, "job", rc.ExprEval, rc.Run.Job().TimeoutMinutes)
			defer cancelTimeOut()
			return executor(timeoutctx)
		}
		return nil
	}, nil
}
func (rc *RunContext) containerImage(ctx context.Context) string {
	job := rc.Run.Job()
	c := job.Container()
	if c != nil {
		return rc.ExprEval.Interpolate(ctx, c.Image)
	}
	return ""
}

func (rc *RunContext) runsOnImage(ctx context.Context) string {
	if rc.Run.Job().RunsOn() == nil {
		common.Logger(ctx).Errorf("'runs-on' key not defined in %s", rc.String())
	}

	runsOn := rc.Run.Job().RunsOn()
	for i, v := range runsOn {
		runsOn[i] = rc.ExprEval.Interpolate(ctx, v)
	}

	if pick := rc.Config.PlatformPicker; pick != nil {
		if image := pick(runsOn); image != "" {
			return image
		}
	}

	for _, platformName := range rc.runsOnPlatformNames(ctx) {
		image := rc.Config.Platforms[strings.ToLower(platformName)]
		if image != "" {
			return image
		}
	}
	return ""
}
func (rc *RunContext) runsOnPlatformNames(ctx context.Context) []string {
	job := rc.Run.Job()

	if job.RunsOn() == nil {
		return []string{}
	}

	// Copy rawRunsOn from the job. `EvaluateYamlNode` later will mutate the yaml node in-place applying expression
	// evaluation to it from the RunContext -- but the job object is shared in matrix executions between multiple
	// running matrix jobs and `rc.ExprEval` is specific to one matrix job. By copying the object we avoid mutating the
	// shared field as it is accessed by multiple goroutines.
	rawRunsOn := job.RawRunsOn
	if err := rc.ExprEval.EvaluateYamlNode(ctx, &rawRunsOn); err != nil {
		common.Logger(ctx).Errorf("Error while evaluating runs-on: %v", err)
		return []string{}
	}

	return model.FlattenRunsOnNode(rawRunsOn)
}

func (rc *RunContext) platformImage(ctx context.Context) string {
	if containerImage := rc.containerImage(ctx); containerImage != "" {
		return containerImage
	}
	return rc.runsOnImage(ctx)
}
func (rc *RunContext) options(ctx context.Context) string {
	job := rc.Run.Job()
	c := job.Container()
	if c != nil {
		return rc.ExprEval.Interpolate(ctx, c.Options)
	}
	return ""
}
func (rc *RunContext) isEnabled(ctx context.Context) (bool, error) {
	job := rc.Run.Job()
	l := common.Logger(ctx)
	runJob, runJobErr := EvalBool(ctx, rc.ExprEval, job.IfClause(), exprparser.DefaultStatusCheckSuccess)
	jobType, jobTypeErr := job.Type()

	if runJobErr != nil {
		return false, fmt.Errorf(" \u274C Error in if-expression: \"if: %s\" (%s)", job.IfClause(), runJobErr)
	}
	if jobType == model.JobTypeInvalid {
		return false, jobTypeErr
	}

	if !runJob {
		rc.result("skipped")
		l.WithField("jobResult", "skipped").Infof("Skipping job '%s' due to '%s'", job.Name, job.IfClause())
		return false, nil
	}

	if jobType != model.JobTypeDefault {
		return true, nil
	}

	img := rc.platformImage(ctx)
	if img == "" {
		for _, platformName := range rc.runsOnPlatformNames(ctx) {
			l.Infof("\U0001F6A7 Skipping unsupported platform -- Try running with `-P %+v=...`", platformName)
		}
		return false, nil
	}
	return true, nil
}
func mergeMaps(args ...map[string]string) map[string]string {
	rtnMap := make(map[string]string)
	for _, m := range args {
		maps.Copy(rtnMap, m)
	}
	return rtnMap
}
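A quick standalone check of the merge semantics of `mergeMaps`: because `maps.Copy` overwrites existing keys, later arguments win over earlier ones, which is how step-level env can shadow job-level env. A minimal sketch:

```go
package main

import (
	"fmt"
	"maps"
)

// mergeMaps combines several string maps; keys in later maps
// overwrite keys from earlier maps (maps.Copy semantics).
func mergeMaps(args ...map[string]string) map[string]string {
	rtnMap := make(map[string]string)
	for _, m := range args {
		maps.Copy(rtnMap, m)
	}
	return rtnMap
}

func main() {
	merged := mergeMaps(
		map[string]string{"PATH": "/usr/bin", "CI": "true"},
		map[string]string{"PATH": "/opt/bin"},
	)
	fmt.Println(merged["PATH"], merged["CI"]) // /opt/bin true
}
```

Note `maps.Copy` requires Go 1.21 or later; the earlier equivalent was a hand-written `for k, v := range m` loop.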
// Deprecated: use createSimpleContainerName
func createContainerName(parts ...string) string {
	name := strings.Join(parts, "-")
	pattern := regexp.MustCompile("[^a-zA-Z0-9]")
	name = pattern.ReplaceAllString(name, "-")
	name = strings.ReplaceAll(name, "--", "-")
	hash := sha256.Sum256([]byte(name))

	// SHA256 is 64 hex characters. So trim name to 63 characters to make room for the hash and separator
	trimmedName := strings.Trim(trimToLen(name, 63), "-")

	return fmt.Sprintf("%s-%x", trimmedName, hash)
}
func createSimpleContainerName(parts ...string) string {
	pattern := regexp.MustCompile("[^a-zA-Z0-9-]")
	name := make([]string, 0, len(parts))
	for _, v := range parts {
		v = pattern.ReplaceAllString(v, "-")
		v = strings.Trim(v, "-")
		for strings.Contains(v, "--") {
			v = strings.ReplaceAll(v, "--", "-")
		}
		if v != "" {
			name = append(name, v)
		}
	}
	return strings.Join(name, "_")
}
func trimToLen(s string, l int) string {
	if l < 0 {
		l = 0
	}
	if len(s) > l {
		return s[:l]
	}
	return s
}
func (rc *RunContext) getJobContext() *model.JobContext {
	jobStatus := "success"
	for _, stepStatus := range rc.StepResults {
		if stepStatus.Conclusion == model.StepStatusFailure {
			jobStatus = "failure"
			break
		}
	}
	return &model.JobContext{
		Status: jobStatus,
	}
}

func (rc *RunContext) getStepsContext() map[string]*model.StepResult {
	return rc.StepResults
}
func (rc *RunContext) getGithubContext(ctx context.Context) *model.GithubContext {
	logger := common.Logger(ctx)
	ghc := &model.GithubContext{
		Event:            make(map[string]any),
		Workflow:         rc.Run.Workflow.Name,
		RunAttempt:       rc.Config.Env["GITHUB_RUN_ATTEMPT"],
		RunID:            rc.Config.Env["GITHUB_RUN_ID"],
		RunNumber:        rc.Config.Env["GITHUB_RUN_NUMBER"],
		Actor:            rc.Config.Actor,
		EventName:        rc.Config.EventName,
		Action:           rc.CurrentStep,
		Token:            rc.Config.Token,
		Job:              rc.Run.JobID,
		ActionPath:       rc.ActionPath,
		ActionRepository: rc.Env["GITHUB_ACTION_REPOSITORY"],
		ActionRef:        rc.Env["GITHUB_ACTION_REF"],
		RepositoryOwner:  rc.Config.Env["GITHUB_REPOSITORY_OWNER"],
		RetentionDays:    rc.Config.Env["GITHUB_RETENTION_DAYS"],
		RunnerPerflog:    rc.Config.Env["RUNNER_PERFLOG"],
		RunnerTrackingID: rc.Config.Env["RUNNER_TRACKING_ID"],
		Repository:       rc.Config.Env["GITHUB_REPOSITORY"],
		Ref:              rc.Config.Env["GITHUB_REF"],
		Sha:              rc.Config.Env["SHA_REF"],
		RefName:          rc.Config.Env["GITHUB_REF_NAME"],
		RefType:          rc.Config.Env["GITHUB_REF_TYPE"],
		BaseRef:          rc.Config.Env["GITHUB_BASE_REF"],
		HeadRef:          rc.Config.Env["GITHUB_HEAD_REF"],
		Workspace:        rc.Config.Env["GITHUB_WORKSPACE"],
	}

	if rc.JobContainer != nil {
		ghc.EventPath = rc.JobContainer.GetActPath() + "/workflow/event.json"
		ghc.Workspace = rc.JobContainer.ToContainerPath(rc.Config.Workdir)
	}

	if ghc.RunAttempt == "" {
		ghc.RunAttempt = "1"
	}

	if ghc.RunID == "" {
		ghc.RunID = "1"
	}

	if ghc.RunNumber == "" {
		ghc.RunNumber = "1"
	}

	if ghc.RetentionDays == "" {
		ghc.RetentionDays = "0"
	}

	if ghc.RunnerPerflog == "" {
		ghc.RunnerPerflog = "/dev/null"
	}

	// Backwards compatibility for configs that require
	// a default rather than being run as a cmd
	if ghc.Actor == "" {
		ghc.Actor = "nektos/act"
	}

	{ // Adapt to Gitea
		if preset := rc.Config.PresetGitHubContext; preset != nil {
			ghc.Event = preset.Event
			ghc.RunID = preset.RunID
			ghc.RunNumber = preset.RunNumber
			ghc.Actor = preset.Actor
			ghc.Repository = preset.Repository
			ghc.EventName = preset.EventName
			ghc.Sha = preset.Sha
			ghc.Ref = preset.Ref
			ghc.RefName = preset.RefName
			ghc.RefType = preset.RefType
			ghc.HeadRef = preset.HeadRef
			ghc.BaseRef = preset.BaseRef
			ghc.Token = preset.Token
			ghc.RepositoryOwner = preset.RepositoryOwner
			ghc.RetentionDays = preset.RetentionDays

			instance := rc.Config.GitHubInstance
			if !strings.HasPrefix(instance, "http://") &&
				!strings.HasPrefix(instance, "https://") {
				instance = "https://" + instance
			}
			ghc.ServerURL = instance
			ghc.APIURL = instance + "/api/v1" // the version of Gitea is v1
			ghc.GraphQLURL = ""               // Gitea doesn't support graphql

			return ghc
		}
	}

	if rc.EventJSON != "" {
		err := json.Unmarshal([]byte(rc.EventJSON), &ghc.Event)
		if err != nil {
			logger.Errorf("Unable to Unmarshal event '%s': %v", rc.EventJSON, err)
		}
	}

	ghc.SetBaseAndHeadRef()
	repoPath := rc.Config.Workdir
	ghc.SetRepositoryAndOwner(ctx, rc.Config.GitHubInstance, rc.Config.RemoteName, repoPath)
	if ghc.Ref == "" {
		ghc.SetRef(ctx, rc.Config.DefaultBranch, repoPath)
	}
	if ghc.Sha == "" {
		ghc.SetSha(ctx, repoPath)
	}

	ghc.SetRefTypeAndName()

	// defaults
	ghc.ServerURL = "https://github.com"
	ghc.APIURL = "https://api.github.com"
	ghc.GraphQLURL = "https://api.github.com/graphql"
	// per GHES
	if rc.Config.GitHubInstance != "github.com" {
		ghc.ServerURL = fmt.Sprintf("https://%s", rc.Config.GitHubInstance)
		ghc.APIURL = fmt.Sprintf("https://%s/api/v3", rc.Config.GitHubInstance)
		ghc.GraphQLURL = fmt.Sprintf("https://%s/api/graphql", rc.Config.GitHubInstance)
	}

	{ // Adapt to Gitea
		instance := rc.Config.GitHubInstance
		if !strings.HasPrefix(instance, "http://") &&
			!strings.HasPrefix(instance, "https://") {
			instance = "https://" + instance
		}
		ghc.ServerURL = instance
		ghc.APIURL = instance + "/api/v1" // the version of Gitea is v1
		ghc.GraphQLURL = ""               // Gitea doesn't support graphql
	}

	// allow to be overridden by user
	if rc.Config.Env["GITHUB_SERVER_URL"] != "" {
		ghc.ServerURL = rc.Config.Env["GITHUB_SERVER_URL"]
	}
	if rc.Config.Env["GITHUB_API_URL"] != "" {
		ghc.APIURL = rc.Config.Env["GITHUB_API_URL"]
	}
	if rc.Config.Env["GITHUB_GRAPHQL_URL"] != "" {
		ghc.GraphQLURL = rc.Config.Env["GITHUB_GRAPHQL_URL"]
	}

	return ghc
}
func isLocalCheckout(ghc *model.GithubContext, step *model.Step) bool {
	if step.Type() == model.StepTypeInvalid {
		// This will be errored out by the executor later, we need this here to avoid a nil panic though
		return false
	}
	if step.Type() != model.StepTypeUsesActionRemote {
		return false
	}
	remoteAction := newRemoteAction(step.Uses)
	if remoteAction == nil {
		// IsCheckout() will nil panic if we don't bail out early
		return false
	}
	if !remoteAction.IsCheckout() {
		return false
	}
	if repository, ok := step.With["repository"]; ok && repository != ghc.Repository {
		return false
	}
	if ref, ok := step.With["ref"]; ok && ref != ghc.Ref {
		return false
	}
	return true
}
func nestedMapLookup(m map[string]any, ks ...string) (rval any) {
	var ok bool
	if len(ks) == 0 { // degenerate input
		return nil
	}
	if rval, ok = m[ks[0]]; !ok {
		return nil
	} else if len(ks) == 1 { // we've reached the final key
		return rval
	} else if m, ok = rval.(map[string]any); !ok {
		return nil
	}
	// 1+ more keys
	return nestedMapLookup(m, ks[1:]...)
}
func (rc *RunContext) withGithubEnv(ctx context.Context, github *model.GithubContext, env map[string]string) map[string]string {
	set := func(k, v string) {
		for _, prefix := range []string{"FORGEJO", "GITHUB"} {
			env[prefix+"_"+k] = v
		}
	}

	env["CI"] = "true"
	set("WORKFLOW", github.Workflow)
	set("RUN_ATTEMPT", github.RunAttempt)
	set("RUN_ID", github.RunID)
	set("RUN_NUMBER", github.RunNumber)
	set("ACTION", github.Action)
	set("ACTION_PATH", github.ActionPath)
	set("ACTION_REPOSITORY", github.ActionRepository)
	set("ACTION_REF", github.ActionRef)
	set("ACTIONS", "true")
	set("ACTOR", github.Actor)
	set("REPOSITORY", github.Repository)
	set("EVENT_NAME", github.EventName)
	set("EVENT_PATH", github.EventPath)
	set("WORKSPACE", github.Workspace)
	set("SHA", github.Sha)
	set("REF", github.Ref)
	set("REF_NAME", github.RefName)
	set("REF_TYPE", github.RefType)
	set("TOKEN", github.Token)
	set("JOB", github.Job)
	set("REPOSITORY_OWNER", github.RepositoryOwner)
	set("RETENTION_DAYS", github.RetentionDays)
	env["RUNNER_PERFLOG"] = github.RunnerPerflog
	env["RUNNER_TRACKING_ID"] = github.RunnerTrackingID
	set("BASE_REF", github.BaseRef)
	set("HEAD_REF", github.HeadRef)
	set("SERVER_URL", github.ServerURL)
	set("API_URL", github.APIURL)

	{ // Adapt to Forgejo
		instance := rc.Config.GitHubInstance
		if !strings.HasPrefix(instance, "http://") &&
			!strings.HasPrefix(instance, "https://") {
			instance = "https://" + instance
		}
		set("SERVER_URL", instance)
		set("API_URL", instance+"/api/v1")
	}

	if rc.Config.ArtifactServerPath != "" {
		setActionRuntimeVars(rc, env)
	}

	for _, platformName := range rc.runsOnPlatformNames(ctx) {
		if platformName != "" {
			if platformName == "ubuntu-latest" {
				// hardcode current ubuntu-latest since we have no way to check that 'on the fly'
				env["ImageOS"] = "ubuntu20"
			} else {
				platformName = strings.SplitN(strings.Replace(platformName, `-`, ``, 1), `.`, 2)[0]
				env["ImageOS"] = platformName
			}
		}
	}

	return env
}
func setActionRuntimeVars(rc *RunContext, env map[string]string) {
	actionsRuntimeURL := os.Getenv("ACTIONS_RUNTIME_URL")
	if actionsRuntimeURL == "" {
		actionsRuntimeURL = fmt.Sprintf("http://%s:%s/", rc.Config.ArtifactServerAddr, rc.Config.ArtifactServerPort)
	}
	env["ACTIONS_RUNTIME_URL"] = actionsRuntimeURL

	actionsRuntimeToken := os.Getenv("ACTIONS_RUNTIME_TOKEN")
	if actionsRuntimeToken == "" {
		actionsRuntimeToken = "token"
	}
	env["ACTIONS_RUNTIME_TOKEN"] = actionsRuntimeToken
}
func (rc *RunContext) handleCredentials(ctx context.Context) (string, string, error) {
	// TODO: remove below 2 lines when we can release act with breaking changes
	username := rc.Config.Secrets["DOCKER_USERNAME"]
	password := rc.Config.Secrets["DOCKER_PASSWORD"]

	container := rc.Run.Job().Container()
	if container == nil || container.Credentials == nil {
		return username, password, nil
	}

	if container.Credentials != nil && len(container.Credentials) != 2 {
		err := fmt.Errorf("invalid property count for key 'credentials:'")
		return "", "", err
	}

	ee := rc.NewExpressionEvaluator(ctx)
	if username = ee.Interpolate(ctx, container.Credentials["username"]); username == "" {
		err := fmt.Errorf("failed to interpolate container.credentials.username")
		return "", "", err
	}

	if password = ee.Interpolate(ctx, container.Credentials["password"]); password == "" {
		err := fmt.Errorf("failed to interpolate container.credentials.password")
		return "", "", err
	}

	if container.Credentials["username"] == "" || container.Credentials["password"] == "" {
		err := fmt.Errorf("container.credentials cannot be empty")
		return "", "", err
	}

	return username, password, nil
}
func (rc *RunContext) handleServiceCredentials(ctx context.Context, creds map[string]string) (username, password string, err error) {
	if creds == nil {
		return username, password, err
	}
	if len(creds) != 2 {
		err = fmt.Errorf("invalid property count for key 'credentials:'")
		return username, password, err
	}

	ee := rc.NewExpressionEvaluator(ctx)
	if username = ee.Interpolate(ctx, creds["username"]); username == "" {
		err = fmt.Errorf("failed to interpolate credentials.username")
		return username, password, err
	}
	if password = ee.Interpolate(ctx, creds["password"]); password == "" {
		err = fmt.Errorf("failed to interpolate credentials.password")
		return username, password, err
	}

	return username, password, err
}
// GetServiceBindsAndMounts returns the binds and mounts for the service container, resolving paths as appropriate
func (rc *RunContext) GetServiceBindsAndMounts(svcVolumes []string) ([]string, map[string]string) {
	containerDaemonSocket := rc.Config.GetContainerDaemonSocket()
	binds := []string{}
	if containerDaemonSocket != "-" {
		daemonPath := getDockerDaemonSocketMountPath(containerDaemonSocket)
		binds = append(binds, fmt.Sprintf("%s:%s", daemonPath, "/var/run/docker.sock"))
	}

	mounts := map[string]string{}
	for _, v := range svcVolumes {
		if !strings.Contains(v, ":") || filepath.IsAbs(v) {
			// Bind anonymous volume or host file.
			binds = append(binds, v)
		} else {
			// Mount existing volume.
			paths := strings.SplitN(v, ":", 2)
			mounts[paths[0]] = paths[1]
		}
	}
	return binds, mounts
}
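The volume classification in `GetServiceBindsAndMounts` can be checked standalone: entries with no `:` (anonymous volumes) or beginning with an absolute host path go to `binds`, while `name:path` pairs become named-volume mounts. A minimal sketch of that loop (the example volume strings are illustrative):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// splitServiceVolumes mirrors the classification loop in
// GetServiceBindsAndMounts (without the daemon-socket bind).
func splitServiceVolumes(svcVolumes []string) (binds []string, mounts map[string]string) {
	mounts = map[string]string{}
	for _, v := range svcVolumes {
		if !strings.Contains(v, ":") || filepath.IsAbs(v) {
			// Bind anonymous volume or host file.
			binds = append(binds, v)
		} else {
			// Mount existing named volume.
			paths := strings.SplitN(v, ":", 2)
			mounts[paths[0]] = paths[1]
		}
	}
	return binds, mounts
}

func main() {
	binds, mounts := splitServiceVolumes([]string{
		"/srv/data:/data",  // absolute host path -> bind
		"cache-vol:/cache", // named volume -> mount
		"/tmp/scratch",     // anonymous volume -> bind
	})
	fmt.Println(binds, mounts)
}
```

Note the `filepath.IsAbs` check runs on the whole `host:container` string, so any entry starting with `/` is treated as a host bind even though it contains a colon.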