Experimenting with setting up Bazel toolchains when the tools are mirrored into an AWS S3 bucket.
This builds on previous work in jrbeverly/bazel-external-toolchain-rule for creating toolchains from files.
- The implementation of `s3_archive` uses the same model as the rules in jrbeverly/bazel-external-toolchain-rule.
- `repository_rule` does not use `toolchains` the way build rules do, meaning any bootstrap tools must be downloaded by the repository rule itself.
- Repository rules can convert labels to paths using the `repository_ctx.path` method.
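A sketch of that label-to-path conversion inside a repository rule implementation; the rule name and label here are hypothetical, but `repository_ctx.path` is the real Bazel API:

```starlark
# Hypothetical repository rule demonstrating repository_ctx.path, which
# resolves a label to a concrete filesystem path the rule can work with.
def _example_impl(ctx):
    # The label is illustrative; it could point at a bundled helper script.
    script = ctx.path(Label("@my_tools//:download.sh"))
    ctx.report_progress("resolved helper to %s" % script)
    ctx.file("BUILD.bazel", "")  # make the repository non-empty

example_repo = repository_rule(implementation = _example_impl)
```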
Getting a fully managed system that doesn't require tools to be pre-installed on the local system would likely require a bootstrap rule built around a tool that is publicly available. That way it could be fetched using the `download_and_extract` method of Bazel's repository context, then combined with the other rules to perform the actual download of binaries. The process is as follows:
- Download & configure the `cloudio` tool for downloading binaries from a cloud source (AWS S3/Azure RM/GCP/etc.)
- Make this tool available to the other repository rules (aspect? label? ?)
- The repository rule uses this binary to run the download commands
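The steps above could be sketched as a repository rule. Everything concrete here is an assumption: the `cloudio` binary, its release URL, and its CLI flags are all hypothetical, and step 2 (sharing the tool across repository rules) is collapsed into a single rule for brevity:

```starlark
def _s3_archive_impl(ctx):
    # Step 1: download & extract the bootstrap tool from a public URL
    # (URL and checksum are placeholders, not a real cloudio release).
    ctx.download_and_extract(
        url = "https://example.com/cloudio-linux-amd64.tar.gz",
        sha256 = ctx.attr.tool_sha256,
    )
    # Step 3: use the binary to fetch the real artifact from S3
    # (the "get"/"-o" flags are invented for illustration).
    result = ctx.execute(["./cloudio", "get", ctx.attr.s3_url, "-o", "archive.tar.gz"])
    if result.return_code != 0:
        fail("cloudio failed: %s" % result.stderr)
    ctx.extract("archive.tar.gz")

s3_archive = repository_rule(
    implementation = _s3_archive_impl,
    attrs = {
        "s3_url": attr.string(mandatory = True),
        "tool_sha256": attr.string(mandatory = True),
    },
)
```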
This may have an additional benefit: the `cloudio` tool could behave in a content-addressable manner, removing the need for URLs (primary/mirrors/etc.) for tools and instead leaving it up to `cloudio` to download the toolchain from the appropriate well-known registries.
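To make the content-addressable idea concrete, a minimal sketch in Python: an artifact is identified by its sha256 digest rather than a URL, and the registry key layout (`cas/<prefix>/<digest>`) is an invented convention, not anything `cloudio` defines:

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive a content address for a blob: its sha256 digest."""
    return "sha256-" + hashlib.sha256(data).hexdigest()

def registry_key(data: bytes) -> str:
    """Map a blob to a registry location by content, not by URL.

    The cas/<first-two-hex-chars>/<digest> layout is an assumption made
    for illustration, similar to how git shards its object store.
    """
    digest = hashlib.sha256(data).hexdigest()
    return "cas/%s/%s" % (digest[:2], digest)

blob = b"toolchain archive bytes"
print(content_address(blob))
print(registry_key(blob))
```

With addresses like these, any registry holding the blob is interchangeable, which is what removes the primary/mirror URL distinction.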
- Using something like an `awsrc` file (aiming to be similar to `netrc`) to determine which AWS profile to use when authenticating to buckets. Is this worthwhile? Maybe: our default `AWS_PROFILE` may not always have artifact access; think isolated AWS accounts like 'malware'.
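A hypothetical `awsrc`, loosely modeled on `netrc`'s machine/login layout; this format and the bucket/profile names are invented for illustration:

```
# Which AWS profile to use per bucket; syntax mirrors netrc's key/value pairs.
bucket my-team-artifacts    profile default
bucket release-mirror       profile artifacts-readonly
bucket malware-samples      profile isolated-malware
```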
- Requires a sha256 checker, and `sha256sum` is only really available on Linux (macOS can install it, but the default is `shasum`; on Windows the equivalent is PowerShell's `Get-FileHash`).
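One way to sidestep those platform differences is to do the check in-process rather than shelling out to `sha256sum`/`shasum`/PowerShell; a minimal, portable sketch using Python's `hashlib`:

```python
import hashlib

def verify_sha256(path: str, expected: str, chunk_size: int = 1 << 20) -> bool:
    """Stream the file in chunks and compare its sha256 digest to the
    expected hex string, so large archives don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest() == expected.lower()
```

The same check works unchanged on Linux, macOS, and Windows, which is the point of avoiding the platform-specific CLIs.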
- The AWS CLI works off parts of the environment like `AWS_PROFILE` (which may run tools through