commits
- rust.yml: cargo build/test/fmt/clippy on push and pull_request
- nix.yml: flake-based fmt/treefmt/deny/clippy/doc/test/build/static-musl
- security-audit.yml: manual cargo-deny advisories check
Spindle does not yet support scheduled triggers or matrix builds, so the
weekly security audit becomes manual and the architecture matrix from the
GitHub nix.yml is dropped (single arch on the spindle).
Workflows specific to GitHub services (codeberg mirror, snap publish,
release-plz) are intentionally not translated.
Assisted-by: Claude Opus 4.7 (code generation)
- actions/checkout: v4 -> v6.0.2 (codeberg-mirror, rust, security-audit, nix)
- actions/upload-artifact: v7.0.0 -> v7.0.1
- codecov/codecov-action: v4 -> v6.0.0 (rename file -> files input)
- DeterminateSystems/magic-nix-cache-action: v7 -> v13
- DeterminateSystems/nix-installer-action: v12 -> v22
- release-plz/action: v0.5.127 -> v0.5.128
- Swatinem/rust-cache: v2 -> v2.9.1
Resolves Node 20 deprecation warnings for actions that have shipped
Node 24 releases. snapcore/action-build and snapcore/action-publish
remain on node20 upstream; magic-nix-cache-action also stays on node20
(no Node 24 release yet). Validated locally with act on rust.yml.
chore: release v0.4.0
nix.yml: split the existing build job into lint (x86_64 only — fmt,
treefmt, deny, clippy, doc) and build (matrix over x86_64 and aarch64
— test, cmprss, cmprss-static, flake check). Architecture-independent
checks no longer run twice.
publish.yml: add a binaries job that builds cmprss-static via the flake
on x86_64 and aarch64 runners and attaches the resulting tarballs (with
SHA256 sidecars) to the GitHub Release. Runs only on release/dispatch
events — every-push validation is now covered by nix.yml.
Adds a new --append flag and Action::Append variant that lets users add
new entries to an existing tar or zip archive without rebuilding it.
Tar uses the tar crate's entry position metadata to locate the offset
just past the last data block, truncates the trailing end-of-archive
zero blocks, and resumes writing entries with Builder. Zip delegates to
ZipWriter::new_append. Pipeline::append passes through for single-stage
wrappers so positional-path inference (e.g. 'cmprss --append a.tar ...')
still works, and bails with a clear message for compound pipelines like
tar.gz, which would require decompress-then-recompress.
Stream codecs (gzip, xz, bzip2, ...) inherit the default trait impl that
bails explaining only container formats support --append.
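The truncation point described above comes down to 512-byte block arithmetic. A minimal sketch (hypothetical helper name; the real code takes these positions from the tar crate's entry metadata rather than recomputing them):

```rust
/// Tar stores each entry as a 512-byte header followed by the data,
/// zero-padded up to the next 512-byte boundary. The archive then ends
/// with two all-zero blocks, which an append must truncate away before
/// resuming writes with Builder.
const BLOCK: u64 = 512;

/// Hypothetical helper: the byte offset just past the last entry's
/// padded data, i.e. where newly appended entries should begin.
fn append_offset(last_header_pos: u64, last_data_len: u64) -> u64 {
    // Round the data length up to a whole number of 512-byte blocks.
    let padded = (last_data_len + BLOCK - 1) / BLOCK * BLOCK;
    last_header_pos + BLOCK + padded
}
```

A single 10-byte file at offset 0 ends at 1024 (one header block plus one padded data block); the file is truncated there and the Builder picks up writing.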
Pipelines like tar.gz produced a degenerate spinner (pos and total ticking
up in lockstep) because the outer stage's input came from an in-memory pipe
with unknown size. Route the bar to the innermost stage — the only one
that sees real input bytes — and suppress it on outer stages by
threading a pipeline-inner flag out of open_input.
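The routing rule can be sketched like this (hypothetical types; the real flag is threaded out of open_input):

```rust
/// Only a stage reading real input bytes with a known size can report a
/// meaningful position/total; a stage fed by an enclosing pipeline's
/// in-memory pipe would just tick pos and total up in lockstep.
#[derive(Debug, PartialEq)]
enum ProgressStyle {
    Bar { total: u64 },
    Hidden,
}

/// Hypothetical sketch: open_input reports whether this stage's input is
/// a pipeline-internal pipe, and whether the input size is known.
fn progress_style(from_pipeline_pipe: bool, known_size: Option<u64>) -> ProgressStyle {
    match (from_pipeline_pipe, known_size) {
        (false, Some(total)) => ProgressStyle::Bar { total },
        _ => ProgressStyle::Hidden,
    }
}
```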
Replaces the per-backend fn clone_boxed stub with a CompressorClone
supertrait providing a blanket impl for any Compressor + Clone + 'static.
Clone itself can't be a Compressor supertrait (breaks dyn object safety),
but this keeps the same clone_boxed call sites while deleting ~50 lines
of identical boilerplate across 12 backends. Pipeline gets a manual
Clone impl since Vec<Box<dyn Compressor>> isn't auto-derivable.
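The supertrait-plus-blanket-impl shape is the standard object-safe clone pattern; a minimal sketch (the Gzip struct and its field are placeholders, not the real backend):

```rust
/// Clone::clone returns Self, so Clone can't be a supertrait of an
/// object-safe trait. CompressorClone carries the boxed clone instead,
/// and the blanket impl covers any Compressor that is itself Clone.
trait Compressor: CompressorClone {
    fn name(&self) -> &'static str;
}

trait CompressorClone {
    fn clone_boxed(&self) -> Box<dyn Compressor>;
}

impl<T> CompressorClone for T
where
    T: Compressor + Clone + 'static,
{
    fn clone_boxed(&self) -> Box<dyn Compressor> {
        Box::new(self.clone())
    }
}

// With the blanket impl in place, a backend only needs derive(Clone);
// the per-backend clone_boxed stubs disappear.
#[derive(Clone)]
struct Gzip {
    level: u32,
}

impl Compressor for Gzip {
    fn name(&self) -> &'static str {
        "gzip"
    }
}
```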
Replace unwrap() chains in guess_from_filenames() and
default_compressed_filename() with fallible .and_then() + .ok_or_else()
chains. Non-UTF-8 filenames are valid on Linux/macOS and should not
cause a crash.
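The fallible shape looks like this (hypothetical helper name, sketching the pattern rather than the exact call sites):

```rust
use std::path::Path;

/// OsStr::to_str returns None for non-UTF-8 filenames, which are legal
/// on Linux/macOS; that None must become an error, not an unwrap panic.
fn file_name_utf8(path: &Path) -> Result<&str, String> {
    path.file_name()
        .and_then(|name| name.to_str())
        .ok_or_else(|| format!("filename is not valid UTF-8: {}", path.display()))
}
```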
Expands bin/test.sh so `just test full` now exercises every supported format
against its canonical CLI: lzma (xz-utils), brotli, tar, zip, and the four
tar.<codec> compound pipelines (tar.gz/xz/bz2/zst). This catches any drift in
archive-format compatibility the Rust unit tests cannot see — in particular,
that cmprss-produced pipeline archives are readable by the stock tar CLI and
vice versa. Adds brotli/zip/unzip to the devShell so all tools are available.
Pipeline previously rebuilt each stage from its name (e.g. "gzip") using the
default-level constructor, so user-supplied settings like --level 9 were
silently dropped inside a compound archive. Replace the name round-trip with a
trait-level clone_boxed so owned, fully-configured stages reach worker threads.
Add a List action alongside Compress/Extract, wire --list / -l to
short-circuit through get_job (no output slot needed), and introduce
a default Compressor::list that bails for stream codecs.
Real implementations land on Tar (iterates Archive entries), Zip
(iterates ZipArchive::file_names, spooling non-seekable input to a tempfile)
and Pipeline (reuses the multi-stage pipe plumbing from extract:
outer layers decompress through an in-memory pipe into the innermost
container format, which lists).
Pipelines such as tar.gz therefore list correctly; single stream
codecs like gzip correctly fail with a specific message.
Covered by four integration tests in tests/list.rs.
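The default-deny shape of Compressor::list can be sketched as follows (entry names here are placeholders; the real Tar implementation iterates Archive entries):

```rust
use std::io;

trait Compressor {
    fn name(&self) -> &'static str;

    /// Default impl: stream codecs have no member entries to list,
    /// so they fail with a specific message.
    fn list(&self) -> io::Result<Vec<String>> {
        Err(io::Error::new(
            io::ErrorKind::Unsupported,
            format!("{}: listing is only supported for container formats", self.name()),
        ))
    }
}

struct Gzip;
impl Compressor for Gzip {
    fn name(&self) -> &'static str {
        "gzip"
    }
}

struct Tar;
impl Compressor for Tar {
    fn name(&self) -> &'static str {
        "tar"
    }
    fn list(&self) -> io::Result<Vec<String>> {
        // Placeholder entries; the real impl walks tar::Archive entries.
        Ok(vec!["README.md".to_string(), "src/".to_string()])
    }
}
```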
Add hidden `cmprss completions <shell>` and `cmprss manpage`
subcommands that emit to stdout. Packagers can pipe the output into
standard system locations during install:
cmprss completions bash > /usr/share/bash-completion/completions/cmprss
cmprss completions zsh > /usr/share/zsh/site-functions/_cmprss
cmprss manpage > /usr/share/man/man1/cmprss.1
Built on clap_complete and clap_mangen so the output tracks the real
clap derive — no hand-maintained scripts to drift.
Previously get_job hard-bailed on any existing -o target, and a trailing
existing file in the positional io_list silently fell through into the
input list (a long-standing footgun). --force now:
* relaxes the bail on explicit -o targets
* takes a trailing existing file as the output (overwrite) instead of
pulling it into the input list
Three integration tests in tests/force.rs cover refusal-without-force,
overwrite-via-`-o`, and overwrite-via-positional-output.
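The positional rule amounts to a conditional split of the trailing path; a sketch under assumed names (the real get_job works on richer types):

```rust
/// Hypothetical sketch: without --force a trailing existing path falls
/// through into the inputs (the old footgun); with --force it is split
/// off as the output to overwrite.
fn split_positional(
    mut paths: Vec<String>,
    trailing_exists: bool,
    force: bool,
) -> (Vec<String>, Option<String>) {
    if force && trailing_exists && paths.len() > 1 {
        let output = paths.pop();
        (paths, output)
    } else {
        (paths, None)
    }
}
```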
Action now cleanly models only the two real operations (Compress,
Extract); 'not yet resolved' is expressed as Option::None during
inference inside get_job. guess_from_filenames returns
Result<(Box<dyn Compressor>, Action)> and bails directly in the
previously-ambiguous cases instead of leaking Unknown upward.
Knock-on: main.rs's dispatch no longer needs its catch-all 'Unknown
action requested' bail — exhaustive matching on the two-variant enum.
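The shape of the change, sketched (the bool-flag signature is an assumption for illustration):

```rust
/// Only the two real operations exist as variants; "not yet resolved"
/// is Option::None during inference, filled in later from filenames,
/// rather than a third Unknown variant leaking into dispatch.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Action {
    Compress,
    Extract,
}

fn action_from_flags(compress: bool, extract: bool) -> Option<Action> {
    match (compress, extract) {
        (true, _) => Some(Action::Compress),
        (_, true) => Some(Action::Extract),
        _ => None, // resolved later by filename inference
    }
}
```

Dispatch then matches exhaustively on the two variants with no catch-all arm.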
Replace the tangled pair of nested matches with three explicit branches
keyed on how the output is determined:
1. Explicit output path (-o or trailing io_list item)
2. Stdout pipe
3. No output — invent a filename from the resolved compressor+action
Extract finalize_with_output, finalize_without_output, and
fill_missing_from_io as focused helpers. Each is roughly straight-line;
the previous interleaving of output construction with side-effect
action/compressor updates is gone.
Behavior preserved; tests stay green.
Pull four leaf helpers out of get_job so its top now reads as a
phased pipeline:
let action = action_from_flags(args);
let (inputs, output) = partition_paths(args, action)?;
let input = resolve_input(inputs, args)?;
// ...resolve output + compressor + action
No behavior change; the remaining output/action inference logic is
unchanged and will be addressed in a follow-up.
Every backend with a compression level had the same 3-line pattern in
its new(): instantiate validator, pull args.level_args.level.level,
clamp. Fold that into LevelArgs::resolve(&validator) so the six
level-aware backends (gzip, xz, bzip2, zstd, brotli, lzma) get cleaner
single-expression constructors.
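The folded pattern, sketched with assumed field shapes (the real LevelArgs comes from the clap derive):

```rust
/// Each codec validates levels against its own range.
struct LevelValidator {
    min: u32,
    max: u32,
    default: u32,
}

/// Hypothetical simplification of the CLI-side level argument.
struct LevelArgs {
    level: Option<u32>,
}

impl LevelArgs {
    /// Fall back to the codec default, then clamp into the valid range;
    /// this replaces the repeated 3-line pattern in each backend's new().
    fn resolve(&self, v: &LevelValidator) -> u32 {
        self.level.unwrap_or(v.default).clamp(v.min, v.max)
    }
}
```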
main.rs was 683 lines — 95% of it was inference heuristics (get_job,
guess_from_filenames, get_compressor_from_filename, expand_shortcut_ext,
get_input_filename, get_path), Action, Job, and their unit tests. Move
all of that to a new job module, leaving main.rs at 96 lines containing
only the CmprssArgs/Format CLI shell and the thin command dispatch.
Behavior is unchanged; this is a pure reorganization commit.
FILE/DIRECTORY → File/Directory. Matches Rust's enum variant naming
convention and clippy's upper_case_acronyms lint.
Every single-stream backend (gzip, xz, bzip2, zstd, lz4, brotli, snappy,
lzma) had the same ~40 lines of boilerplate per compress/extract for
resolving CmprssInput into a Read, resolving CmprssOutput into a Write,
and rejecting directory inputs/outputs. Consolidate that into a new
stream module and have each backend call open_input, open_output, and
guard_file_output.
Net -323 lines across the eight backends. Also normalizes the
directory-rejection error messages and extends the directory-input
guard to xz and bzip2, which previously only surfaced the lower-level
'Is a directory' error from File::open.
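The shared directory guard amounts to an explicit check with one uniform message; a sketch under an assumed helper name (the real module also resolves CmprssInput/CmprssOutput into Read/Write):

```rust
use std::io;
use std::path::Path;

/// Hypothetical sketch: reject a directory where a plain file is
/// required, instead of surfacing the lower-level "Is a directory"
/// error from File::open. `role` is "input" or "output" for the message.
fn guard_not_directory(path: &Path, role: &str) -> io::Result<()> {
    if path.is_dir() {
        return Err(io::Error::new(
            io::ErrorKind::InvalidInput,
            format!("{} must be a file, but {} is a directory", role, path.display()),
        ));
    }
    Ok(())
}
```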
GITHUB_TOKEN events don't trigger downstream workflows, so the
release-plz workflow now dispatches publish.yml directly after
creating a release.