LexBuild

This is the complete reference for the lexbuild command-line tool. Every command, flag, and option is documented here with tables and examples. Install via npm install -g @lexbuild/cli or use npx @lexbuild/cli.

For a quick-start guide, see the CLI commands overview.

Commands

The CLI uses a {action}-{source} naming pattern for download and convert commands, plus utility commands for inspecting data sources.

| Command | Description |
| --- | --- |
| download-usc | Download U.S. Code XML from OLRC |
| convert-usc | Convert U.S. Code XML to Markdown |
| list-release-points | List available OLRC release points for the U.S. Code |
| download-ecfr | Download eCFR XML from ecfr.gov or govinfo |
| convert-ecfr | Convert eCFR XML to Markdown |
| download-fr | Download Federal Register XML and metadata from federalregister.gov |
| convert-fr | Convert Federal Register XML to Markdown |
| enrich-fr | Enrich FR Markdown frontmatter with API metadata |

Bare download and convert commands (without a source suffix) display an error prompting you to specify a source.

Title Specification Format

The --titles option accepts a flexible format shared across all commands:

| Format | Example | Result |
| --- | --- | --- |
| Single number | --titles 1 | Title 1 |
| Comma-separated | --titles 1,3,8,11 | Titles 1, 3, 8, 11 |
| Range | --titles 1-5 | Titles 1 through 5 |
| Mixed | --titles 1-5,8,11 | Titles 1 through 5, plus 8 and 11 |

Valid title numbers: 1-54 for USC, 1-50 for eCFR. Duplicates are removed and results are sorted in ascending order.
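The parsing rules above can be sketched in a few lines of Python. This is an illustrative helper (`parse_titles` is a hypothetical name, not part of the lexbuild codebase):

```python
def parse_titles(spec: str, max_title: int = 54) -> list[int]:
    """Expand a --titles spec such as '1-5,8,11' into a sorted, de-duplicated list."""
    titles: set[int] = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = (int(n) for n in part.split("-", 1))
            titles.update(range(lo, hi + 1))
        else:
            titles.add(int(part))
    result = sorted(titles)
    if result and (result[0] < 1 or result[-1] > max_title):
        raise ValueError(f"title numbers must be between 1 and {max_title}")
    return result
```

For example, `parse_titles("3,1-3")` yields `[1, 2, 3]`: the range is expanded, the duplicate 3 is dropped, and the result is sorted.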


download-usc

Download U.S. Code XML from the Office of the Law Revision Counsel (OLRC).

lexbuild download-usc [options]

Options

| Option | Default | Description |
| --- | --- | --- |
| -o, --output <dir> | ./downloads/usc/xml | Download directory |
| --titles <spec> | (none) | Title(s) to download (see format above) |
| --all | false | Download all 54 titles as a single bulk zip |
| --release-point <id> | auto-detected | OLRC release point identifier |

You must provide either --titles or --all.

The latest release point is auto-detected by scraping the OLRC download page. If auto-detection fails, a hardcoded fallback is used. The --release-point flag pins a specific release point and skips auto-detection.

Examples

# Download a single title
lexbuild download-usc --titles 1

# Download specific titles
lexbuild download-usc --titles 1-5,8,11

# Download all 54 titles
lexbuild download-usc --all

# Custom output directory
lexbuild download-usc --all -o ./my-xml

# Pin a specific release point
lexbuild download-usc --all --release-point 119-73not60

list-release-points

List available OLRC release points for the U.S. Code. This command fetches the current (latest) release point and the full history of prior releases, then displays them in a table with dates and affected titles.

lexbuild list-release-points [options]

Options

| Option | Default | Description |
| --- | --- | --- |
| -n, --limit <count> | 20 | Maximum number of release points to show (0 = all) |

Examples

# Show the 20 most recent release points
lexbuild list-release-points

# Show the 5 most recent
lexbuild list-release-points -n 5

# Show all available release points
lexbuild list-release-points -n 0

Use the release point ID from the output with download-usc:

lexbuild download-usc --all --release-point 119-72not60

convert-usc

Convert U.S. Code XML files to Markdown.

lexbuild convert-usc [input] [options]

Options

| Option | Default | Description |
| --- | --- | --- |
| -o, --output <dir> | ./output | Output directory |
| --titles <spec> | (none) | Title(s) to convert (see format above) |
| --all | false | Convert all downloaded titles found in --input-dir |
| -i, --input-dir <dir> | ./downloads/usc/xml | Directory containing USC XML files |
| -g, --granularity <level> | section | Output granularity: section, chapter, or title |
| --link-style <style> | plaintext | Link style: plaintext, relative, or canonical |
| --include-source-credits | true | Include source credit annotations |
| --no-include-source-credits | (none) | Exclude source credit annotations |
| --include-notes | true | Include all notes |
| --no-include-notes | (none) | Exclude all notes |
| --include-editorial-notes | false | Include editorial notes only |
| --include-statutory-notes | false | Include statutory notes only |
| --include-amendments | false | Include amendment history notes only |
| --dry-run | false | Parse and report structure without writing files |
| -v, --verbose | false | Print detailed file output |

Input Modes

You must specify exactly one of three input modes:

| Mode | Usage | Description |
| --- | --- | --- |
| Positional argument | lexbuild convert-usc ./path/to/usc01.xml | Convert a single XML file |
| --titles | lexbuild convert-usc --titles 1-5 | Convert titles by number from --input-dir |
| --all | lexbuild convert-usc --all | Discover and convert all titles in --input-dir |

Granularity

Controls how many Markdown files are produced per title.

| Level | Flag | Output |
| --- | --- | --- |
| Section (default) | -g section | One .md file per section |
| Chapter | -g chapter | One .md file per chapter, with sections inlined |
| Title | -g title | One .md file per title, with the entire hierarchy inlined |

Link Style

Controls how cross-references are rendered in the Markdown output.

| Style | Flag | Behavior |
| --- | --- | --- |
| Plaintext (default) | --link-style plaintext | Citations rendered as plain text, no links |
| Relative | --link-style relative | Relative file path links within the output corpus |
| Canonical | --link-style canonical | Full URLs to the source website (uscode.house.gov) |

Notes Filtering

By default, all notes (editorial, statutory, amendments) are included alongside the core legal text and source credits.

  • --no-include-notes excludes all notes.
  • Selective flags (--include-editorial-notes, --include-statutory-notes, --include-amendments) can be combined to include only specific note categories.
  • When any selective flag is set, the broad --include-notes flag is automatically disabled to prevent conflicts.
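The precedence among these flags can be summarized as a small resolver. This is illustrative only; the flag names mirror the CLI options, but the function is not lexbuild's actual code:

```python
def effective_notes(include_notes: bool, editorial: bool,
                    statutory: bool, amendments: bool) -> set[str]:
    """Return the note categories that will appear in the output."""
    selective = {name for name, enabled in
                 (("editorial", editorial), ("statutory", statutory),
                  ("amendments", amendments)) if enabled}
    if selective:          # any selective flag overrides the broad flag
        return selective
    if include_notes:      # default: all categories included
        return {"editorial", "statutory", "amendments"}
    return set()           # --no-include-notes: nothing included
```

So `--include-editorial-notes` alone yields only editorial notes, even though `--include-notes` defaults to true.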

Examples

# Convert a single file
lexbuild convert-usc ./downloads/usc/xml/usc01.xml -o ./output

# Convert specific titles
lexbuild convert-usc --titles 1-5 -o ./output

# Convert all titles
lexbuild convert-usc --all -o ./output

# Chapter granularity with relative links
lexbuild convert-usc --titles 26 -g chapter --link-style relative -o ./output

# Title granularity (one file per title)
lexbuild convert-usc --titles 1 -g title -o ./output

# Only editorial notes
lexbuild convert-usc --titles 1 --include-editorial-notes -o ./output

# No notes at all
lexbuild convert-usc --titles 1 --no-include-notes -o ./output

# Dry run (parse and report, no files written)
lexbuild convert-usc --all --dry-run

# Verbose output (list every file written)
lexbuild convert-usc --titles 1 -v -o ./output

download-ecfr

Download eCFR (Electronic Code of Federal Regulations) XML.

lexbuild download-ecfr [options]

Options

| Option | Default | Description |
| --- | --- | --- |
| -o, --output <dir> | ./downloads/ecfr/xml | Download directory |
| --titles <spec> | (none) | Title(s) to download (see format above) |
| --all | false | Download all 50 eCFR titles |
| --source <source> | ecfr-api | Download source: ecfr-api or govinfo |
| --date <YYYY-MM-DD> | today | Point-in-time date (ecfr-api source only) |

You must provide either --titles or --all.

Download Sources

| Source | Endpoint | Update Frequency |
| --- | --- | --- |
| ecfr-api (default) | ecfr.gov versioner API | Daily |
| govinfo | govinfo.gov bulk XML | Irregular (can lag months) |

No API key is required for either source. Title 35 (Panama Canal) is reserved and silently skipped during --all downloads.

The --date option is only valid with ecfr-api. When omitted, the currency date is auto-detected from the eCFR API. If an import is in progress on the server, the downloader automatically falls back to the previous day’s data.
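The date-selection behavior for an omitted --date can be sketched as follows (a hypothetical helper; the real detection happens against the eCFR API):

```python
from datetime import date, timedelta

def resolve_currency_date(api_current: date, import_in_progress: bool) -> date:
    """When --date is omitted: use the API's currency date, or fall back to
    the previous day if an import is in progress on the server."""
    return api_current - timedelta(days=1) if import_in_progress else api_current
```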

Examples

# Download specific titles from the eCFR API (default)
lexbuild download-ecfr --titles 1,17

# Download all 50 titles
lexbuild download-ecfr --all

# Download from govinfo bulk data
lexbuild download-ecfr --all --source govinfo

# Point-in-time download (specific date)
lexbuild download-ecfr --titles 17 --date 2025-01-01

# Custom output directory
lexbuild download-ecfr --all -o ./my-ecfr-xml

convert-ecfr

Convert eCFR XML files to Markdown.

lexbuild convert-ecfr [input] [options]

Options

| Option | Default | Description |
| --- | --- | --- |
| -o, --output <dir> | ./output | Output directory |
| --titles <spec> | (none) | Title(s) to convert (see format above) |
| --all | false | Convert all downloaded eCFR titles found in --input-dir |
| -i, --input-dir <dir> | ./downloads/ecfr/xml | Directory containing eCFR XML files |
| -g, --granularity <level> | section | Output granularity: section, part, chapter, or title |
| --link-style <style> | plaintext | Link style: plaintext, relative, or canonical |
| --include-source-credits | true | Accepted but currently a no-op for eCFR |
| --no-include-source-credits | (none) | Accepted but currently a no-op for eCFR |
| --include-notes | true | Include all notes |
| --no-include-notes | (none) | Exclude all notes |
| --include-editorial-notes | false | Include editorial notes only |
| --include-statutory-notes | false | Include statutory/regulatory notes only |
| --include-amendments | false | Include amendment history notes only |
| --dry-run | false | Parse and report structure without writing files |
| -v, --verbose | false | Print detailed file output |
| --currency-date <YYYY-MM-DD> | today | Currency date for frontmatter (from eCFR API metadata) |

Input Modes

You must specify exactly one of three input modes:

| Mode | Usage | Description |
| --- | --- | --- |
| Positional argument | lexbuild convert-ecfr ./path/to/ECFR-title1.xml | Convert a single XML file |
| --titles | lexbuild convert-ecfr --titles 1-5 | Convert titles by number from --input-dir |
| --all | lexbuild convert-ecfr --all | Discover and convert all titles in --input-dir |

Granularity

The eCFR converter supports an additional part level compared to the USC converter.

| Level | Flag | Output |
| --- | --- | --- |
| Section (default) | -g section | One .md file per section |
| Part | -g part | One .md file per part, with sections inlined |
| Chapter | -g chapter | One .md file per chapter, with parts and sections inlined |
| Title | -g title | One .md file per title, with the entire hierarchy inlined |

Link Style

| Style | Flag | Behavior |
| --- | --- | --- |
| Plaintext (default) | --link-style plaintext | Citations rendered as plain text, no links |
| Relative | --link-style relative | Relative file path links within the output corpus |
| Canonical | --link-style canonical | Full URLs to the source website (ecfr.gov) |

Notes Filtering

Identical behavior to convert-usc. See the notes filtering section above.

Examples

# Convert a single file
lexbuild convert-ecfr ./downloads/ecfr/xml/ECFR-title1.xml -o ./output

# Convert specific titles
lexbuild convert-ecfr --titles 17 -o ./output

# Convert all titles
lexbuild convert-ecfr --all -o ./output

# Part granularity (one file per CFR part)
lexbuild convert-ecfr --titles 17 -g part -o ./output

# Title granularity
lexbuild convert-ecfr --titles 1 -g title -o ./output

# Dry run
lexbuild convert-ecfr --all --dry-run

download-fr

Download Federal Register documents (XML full text and JSON metadata) from the FederalRegister.gov API.

lexbuild download-fr [options]

Options

| Option | Default | Description |
| --- | --- | --- |
| -o, --output <dir> | ./downloads/fr | Download directory |
| --source <source> | fr-api | Download source: fr-api or govinfo |
| --from <YYYY-MM-DD> | (none) | Start date (inclusive) |
| --to <YYYY-MM-DD> | today | End date (inclusive) |
| --types <types> | all | Document types: rule, proposed_rule, notice, presidential_document |
| --recent <days> | (none) | Download last N days (convenience shorthand) |
| --document <number> | (none) | Download a single document by number |
| --limit <n> | (none) | Maximum number of documents (for testing) |

You must provide one of --from, --recent, or --document.

Unlike USC and eCFR, the Federal Register is organized by date rather than by title. The downloader fetches both a .json metadata sidecar and a .xml full text file per document. Large date ranges are automatically chunked by month to stay under the API’s 10,000-result cap per query.
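The month-chunking strategy might look like this in outline (illustrative Python, not the tool's actual code):

```python
from datetime import date, timedelta

def month_chunks(start: date, end: date) -> list[tuple[date, date]]:
    """Split an inclusive date range into per-calendar-month sub-ranges,
    keeping each API query comfortably under the 10,000-result cap."""
    chunks = []
    cur = start
    while cur <= end:
        # first day of the following month
        nxt = date(cur.year + (cur.month == 12), cur.month % 12 + 1, 1)
        chunks.append((cur, min(nxt - timedelta(days=1), end)))
        cur = nxt
    return chunks
```

A range like 2026-01-15 through 2026-03-10 becomes three queries: the rest of January, all of February, and the first ten days of March.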

No API key is required. Documents before January 2000 have JSON metadata but no XML full text and are skipped during download.

Examples

# Download last 30 days of documents
lexbuild download-fr --recent 30

# Download a specific date range
lexbuild download-fr --from 2026-01-01 --to 2026-03-31

# Download only final rules
lexbuild download-fr --from 2026-01-01 --types rule

# Download rules and proposed rules
lexbuild download-fr --from 2026-01-01 --types rule,proposed_rule

# Download a single document by number
lexbuild download-fr --document 2026-06029

# Limit download for testing
lexbuild download-fr --from 2026-03-01 --limit 10

convert-fr

Convert Federal Register XML files to Markdown.

lexbuild convert-fr [input] [options]

Options

| Option | Default | Description |
| --- | --- | --- |
| -o, --output <dir> | ./output | Output directory |
| -i, --input-dir <dir> | ./downloads/fr | Directory containing downloaded FR files |
| --all | false | Convert all downloaded documents found in --input-dir |
| --from <YYYY-MM-DD> | (none) | Filter: start date |
| --to <YYYY-MM-DD> | (none) | Filter: end date |
| --types <types> | all | Filter: document types |
| --link-style <style> | plaintext | Link style: plaintext, relative, or canonical |
| --dry-run | false | Parse and report without writing files |
| -v, --verbose | false | Print detailed file output |

Input Modes

| Mode | Usage | Description |
| --- | --- | --- |
| Positional argument | lexbuild convert-fr ./path/to/doc.xml | Convert a single XML file |
| --all | lexbuild convert-fr --all | Discover and convert all XML files in --input-dir |
| --from | lexbuild convert-fr --from 2026-01-01 | Filter by date range within --input-dir |

There is no --granularity option because FR documents are already atomic (one file per document). There is no --titles option because the Federal Register is date-based, not title-based.

When a .json sidecar file exists alongside the .xml (same basename), frontmatter is enriched with structured agency, CFR reference, docket, and date information from the API.

Examples

# Convert all downloaded documents
lexbuild convert-fr --all

# Convert a specific date range
lexbuild convert-fr --from 2026-01-01 --to 2026-03-31

# Convert only rules
lexbuild convert-fr --all --types rule

# Convert a single file
lexbuild convert-fr ./downloads/fr/2026/03/2026-06029.xml -o ./output

# Dry run
lexbuild convert-fr --all --dry-run

enrich-fr

Enrich existing Federal Register Markdown files with metadata from the FederalRegister.gov API listing endpoint. This command is only needed for files originally converted from govinfo bulk XML (--source govinfo), which lacks the JSON metadata sidecar. When using the default fr-api download source, the converter automatically uses the downloaded JSON sidecar to populate these fields, making the enrich step unnecessary.

The enricher adds fields like fr_citation, agencies, cfr_references, docket_ids, effective_date, comments_close_date, and fr_action that are only available from the API’s JSON metadata.

lexbuild enrich-fr [options]

Options

| Option | Default | Description |
| --- | --- | --- |
| -o, --output <dir> | ./output | Output directory containing FR .md files |
| --from <YYYY-MM-DD> | (none) | Start date (inclusive) |
| --to <YYYY-MM-DD> | today | End date (inclusive) |
| --recent <days> | (none) | Enrich last N days (convenience shorthand) |
| --force | false | Overwrite files that already have fr_citation |

You must provide either --from or --recent.

The enricher paginates through the API listing endpoint (200 documents per page), matches each API document to its .md file by document number and publication date, and patches the YAML frontmatter. The Markdown body is preserved exactly as-is; no XML re-parsing or Markdown re-rendering occurs.

Files that already have fr_citation in their frontmatter are considered already enriched and skipped unless --force is used. This makes re-runs safe and incremental.
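The skip logic amounts to a frontmatter check, roughly as follows (a sketch; the real matching also uses document number and publication date):

```python
import re

def needs_enrichment(md_text: str, force: bool = False) -> bool:
    """A file whose YAML frontmatter already contains fr_citation is treated
    as enriched and skipped, unless --force is given."""
    match = re.match(r"\A---\n(.*?)\n---", md_text, re.DOTALL)
    frontmatter = match.group(1) if match else ""
    already_enriched = re.search(r"^fr_citation:", frontmatter, re.MULTILINE) is not None
    return force or not already_enriched
```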

Examples

# Enrich all govinfo-backfilled documents (2000 onward)
lexbuild enrich-fr --from 2000-01-01

# Enrich a specific date range
lexbuild enrich-fr --from 2020-01-01 --to 2025-12-31

# Enrich last 30 days
lexbuild enrich-fr --recent 30

# Force re-enrichment of already-enriched files
lexbuild enrich-fr --from 2026-01-01 --force

# Custom output directory
lexbuild enrich-fr --from 2000-01-01 -o ./my-output

Combined Workflows

Full pipeline examples for downloading and converting from all sources.

# Full USC pipeline
lexbuild download-usc --all
lexbuild convert-usc --all -o ./output

# Full eCFR pipeline
lexbuild download-ecfr --all
lexbuild convert-ecfr --all -o ./output

# Specific titles from both sources
lexbuild download-usc --titles 1-5
lexbuild convert-usc --titles 1-5 -o ./output
lexbuild download-ecfr --titles 1-5
lexbuild convert-ecfr --titles 1-5 -o ./output

# Full FR pipeline using fr-api (default — no enrich step needed)
lexbuild download-fr --recent 30
lexbuild convert-fr --all -o ./output

# Full FR pipeline using govinfo bulk (requires enrich step)
lexbuild download-fr --source govinfo --from 2000-01-01 --to 2025-12-31
lexbuild convert-fr --all -o ./output
lexbuild enrich-fr --from 2000-01-01 -o ./output

# Convert all sources with relative cross-reference links
lexbuild convert-usc --all --link-style relative -o ./output
lexbuild convert-ecfr --all --link-style relative -o ./output
lexbuild convert-fr --all --link-style relative -o ./output

# Browse prior release points and download a specific one
lexbuild list-release-points -n 10
lexbuild download-usc --all --release-point 119-72not60

Output Directory Structure

The -o flag specifies the output root. The converter appends source subdirectories automatically:

  • convert-usc -o /path writes to /path/usc/...
  • convert-ecfr -o /path writes to /path/ecfr/...
  • convert-fr -o /path writes to /path/fr/...

This means all converters can safely target the same output root without conflicts.

USC Output Paths

| Granularity | Path Pattern |
| --- | --- |
| Section | {output}/usc/title-{NN}/chapter-{NN}/section-{N}.md |
| Chapter | {output}/usc/title-{NN}/chapter-{NN}/chapter-{NN}.md |
| Title | {output}/usc/title-{NN}.md |

eCFR Output Paths

| Granularity | Path Pattern |
| --- | --- |
| Section | {output}/ecfr/title-{NN}/chapter-{X}/part-{N}/section-{N.N}.md |
| Part | {output}/ecfr/title-{NN}/chapter-{X}/part-{N}.md |
| Chapter | {output}/ecfr/title-{NN}/chapter-{X}.md |
| Title | {output}/ecfr/title-{NN}.md |

FR Output Paths

| Path Pattern |
| --- |
| {output}/fr/{YYYY}/{MM}/{document_number}.md |

FR documents are organized by publication date. Example: {output}/fr/2026/03/2026-06029.md. No granularity options since FR documents are always one file per document.

Title directories use zero-padded two-digit numbers (title-01). USC chapter directories are zero-padded (chapter-01). eCFR chapter directories use Roman numerals (chapter-I, chapter-IV). Section numbers are not zero-padded and may contain alphanumeric characters (e.g., section-240.10b-5.md).
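These padding conventions translate directly into path construction, sketched below (hypothetical helper names, shown only to make the patterns concrete):

```python
def usc_section_path(output: str, title: int, chapter: int, section: str) -> str:
    # titles and USC chapters are zero-padded to two digits; sections are not padded
    return f"{output}/usc/title-{title:02d}/chapter-{chapter:02d}/section-{section}.md"

def ecfr_section_path(output: str, title: int, chapter: str, part: str, section: str) -> str:
    # eCFR chapters keep their Roman-numeral designations; sections may be alphanumeric
    return f"{output}/ecfr/title-{title:02d}/chapter-{chapter}/part-{part}/section-{section}.md"
```

For example, the Rule 10b-5 file under 17 CFR part 240 would land at {output}/ecfr/title-17/chapter-II/part-240/section-240.10b-5.md.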

Exit Codes

| Code | Meaning |
| --- | --- |
| 0 | Success |
| 1 | Error (invalid options, missing files, download failure, or conversion error) |