This document tracks the plan to make jcode's self-dev / refactor loop much faster without sacrificing full-feature builds.
Goals:
- Keep full-featured builds available for normal usage and self-dev reloads.
- Make common self-dev edits significantly cheaper to compile.
- Reduce how often customizations require recompilation at all.
- Measure improvements after each phase and stop churn that does not pay off.
Measured locally on the current tree:
- Warm `cargo check --quiet`: ~8.5s
- Warm `scripts/dev_cargo.sh build --release -p jcode --bin jcode --quiet`: ~47.3s
Additional observations from this audit:
- A previous warm-ish `cargo check` run landed around ~12.3s.
- A less-warm `cargo check --timings` run landed around ~23.8s.
- The previous local default `clang + mold` setup failed during release linking on this machine; `clang + lld` links the release `jcode` binary successfully here.
For common self-dev edits that do not touch broad shared interfaces:
- Warm `cargo check`: < 5s
- Warm `cargo build` / reload-oriented build: < 20–30s
For shared/core edits we should still aim to stay materially below today's baseline, even if they cannot reach the same fast path.
- Workspace / crate boundaries
  - Rust caches best at the crate boundary.
  - Heavy untouched subsystems should remain compiled and reusable in full builds.
- Good boundary design
  - High-churn logic should not live in broad-fanout crates or unstable shared types.
- `sccache`
  - Practical win for repeated local builds and CI.
- Fast, reliable linker configuration
  - Especially important for `cargo build` and release/self-dev reload builds.
- Heavy subsystem isolation
  - Embeddings, provider implementations, and large TUI/rendering code should stop churning unrelated builds.
- Narrower build targets for inner loops
  - Avoid rebuilding extra bins/targets when not needed.
- Reduce the need to recompile at all
  - Issue #32's customization records and extension points should make many changes config/hook/skill/data driven rather than source driven.
- Keep `.cargo/config.toml` conservative for local contributors.
- Use `scripts/dev_cargo.sh` for local self-dev builds:
  - enables `sccache` automatically if installed
  - prefers `clang + lld` on Linux x86_64
  - uses the dedicated Cargo `selfdev` profile for `jcode` self-dev build/reload paths
  - can still opt into `mold` via `JCODE_FAST_LINKER=mold`
- Route refactor-shadow builds through that wrapper.
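A minimal sketch of how a wrapper like this might pick its cache and linker setup is shown below. The function and variable names are illustrative assumptions, not `scripts/dev_cargo.sh`'s actual internals; only the `JCODE_FAST_LINKER` values and the clang/lld/sccache preferences come from the document.

```shell
#!/bin/sh
# Illustrative sketch of sccache/linker auto-selection as described above.
# The real scripts/dev_cargo.sh logic may differ.

select_fast_linker() {
    # Explicit override wins: JCODE_FAST_LINKER=mold|lld|system
    case "${JCODE_FAST_LINKER:-auto}" in
        mold)   echo "-C linker=clang -C link-arg=-fuse-ld=mold" ;;
        lld)    echo "-C linker=clang -C link-arg=-fuse-ld=lld" ;;
        system) echo "" ;;
        *)
            # Default: prefer clang + lld on Linux x86_64 when both exist.
            if [ "$(uname -sm)" = "Linux x86_64" ] \
               && command -v clang >/dev/null 2>&1 \
               && command -v ld.lld >/dev/null 2>&1; then
                echo "-C linker=clang -C link-arg=-fuse-ld=lld"
            fi
            ;;
    esac
}

# Enable sccache only when it is actually installed.
if command -v sccache >/dev/null 2>&1; then
    RUSTC_WRAPPER=sccache
    export RUSTC_WRAPPER
fi

RUSTFLAGS="${RUSTFLAGS:-} $(select_fast_linker)"
export RUSTFLAGS
```

The key design point mirrored here is that nothing is hard-forced: an explicit override always wins, and the auto path only opts into a fast linker when the tools are verifiably present.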
Standard self-dev checkpoints now live behind `scripts/bench_selfdev_checkpoints.sh`, which runs:

- cold `cargo check`
- warm touched-file `cargo check`
- cold self-dev `jcode` build
- warm touched-file self-dev `jcode` build
Use it when capturing comparable before/after numbers for refactors.
- Add documented commands for cold/warm `check` and `build` timing.
- Prefer touched-file timings (for example `scripts/bench_compile.sh check --touch src/server.rs`) over no-op hot-cache reruns when judging ROI.
- Track timing deltas after each structural phase.
- Fix build/link blockers before treating any timing data as authoritative.
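The core move behind a touched-file timing can be sketched in a few lines: bump the file's mtime so the build tool treats it as edited, then time one run. `time_touched` below is a hypothetical helper, not `bench_compile.sh`'s real interface.

```shell
#!/bin/sh
# Hypothetical sketch of a touched-file timing: simulate a local edit by
# updating the file's mtime, then time a single run of the given command.
time_touched() {
    file="$1"; shift
    touch "$file"                      # simulate a local edit
    start=$(date +%s)
    "$@" >/dev/null 2>&1
    echo "$(( $(date +%s) - start ))s"
}

# Example from the document's workflow (not run here):
#   time_touched src/server.rs cargo check --quiet
```

This is why touched-file numbers are more honest than no-op reruns: the timed command always has at least one genuinely invalidated input.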
- 2026-03-25: upgraded `scripts/bench_compile.sh` to support repeated runs, summary stats, JSON output, and extra cargo-arg passthrough so compile-speed work can use consistent touched-file measurements instead of one-off ad hoc timings.
- 2026-03-25: upgraded `scripts/dev_cargo.sh` with `--print-setup` plus clearer cache/linker diagnostics so developers can confirm whether `sccache` / fast-linker paths are actually active.
- 2026-03-30: removed the per-build `build.rs` timestamp/build-number churn from local source builds. `JCODE_VERSION` for source builds is now stable per `Cargo.toml` version + git hash, while UI/version build-time display comes from the binary mtime at runtime. Validation on this machine: two no-op release-`jcode` runs measured 221.688s then 0.559s, confirming the main crate no longer recompiles just because build metadata changed.
- 2026-04-09: introduced a dedicated Cargo `selfdev` profile for self-dev iteration. On this machine, the warm local `jcode` self-dev build path dropped from about 56.1s for `scripts/dev_cargo.sh build --release -p jcode --bin jcode --quiet` to about 16.0s for `scripts/dev_cargo.sh build --profile selfdev -p jcode --bin jcode --quiet`, while keeping the normal release/distribution profile unchanged.
- 2026-04-18: added `scripts/bench_selfdev_checkpoints.sh` to standardize cold/warm self-dev checkpoints. First local checkpoint attempt on this machine surfaced two environment blockers:
  - cold checkpoints failed because `cargo clean` could not remove part of `target/release` (`Permission denied` on a fingerprint timestamp file)
  - warm `selfdev-jcode` touched-file measurement on `src/tool/read.rs` failed because the `sccache`-wrapped rustc process terminated with signal 15 during the `jcode` crate build
  - warm touched-file `cargo check` on `src/tool/read.rs` completed in 93.115s then 9.430s, which is useful as a rough upper/lower bound but not yet stable enough to treat as an authoritative checkpoint
  - follow-up required: fix the `target/release` permission issue, rerun cold checkpoints, and rerun warm self-dev measurements until they are stable enough to compare against future waves
- 2026-04-18: updated `scripts/bench_selfdev_checkpoints.sh` to keep running after individual checkpoint failures and report them in JSON/text output instead of aborting early. Verified local output on this machine with `--touch src/tool/read.rs --runs 1`:
  - warm touched-file `cargo check`: 9.582s
  - warm touched-file `selfdev-jcode` build: 59.898s
  - failed checkpoints reported cleanly: `cold_check`, `cold_selfdev_build`
- 2026-04-18: added `--skip-cold` to `scripts/bench_selfdev_checkpoints.sh` so warm-only checkpoints remain usable while cold-path cleanup is blocked locally. Verified local output on this machine with `--skip-cold --touch src/tool/read.rs --runs 1`:
  - warm touched-file `cargo check`: 9.339s
  - warm touched-file `selfdev-jcode` build: 18.844s
  - skipped checkpoints reported explicitly: `cold_check`, `cold_selfdev_build`
- 2026-04-18: additional warm-only checkpoint on a broader shared edit target with `--skip-cold --touch src/server.rs --runs 1`:
  - warm touched-file `cargo check`: 8.711s
  - warm touched-file `selfdev-jcode` build: 18.969s
- 2026-04-18: additional warm-only checkpoint on a heavy tool-path file with `--skip-cold --touch src/tool/communicate.rs --runs 1`:
  - warm touched-file `cargo check`: 8.496s
  - warm touched-file `selfdev-jcode` build: 21.400s
- 2026-04-18: additional warm-only checkpoint on a provider-heavy file with `--skip-cold --touch src/provider/openai.rs --runs 1`:
  - warm touched-file `cargo check`: 8.750s
  - warm touched-file `selfdev-jcode` build: 21.386s
- 2026-04-18: additional warm-only checkpoint on the shared provider module with `--skip-cold --touch src/provider/mod.rs --runs 1`:
  - warm touched-file `cargo check`: 9.772s
  - warm touched-file `selfdev-jcode` build: 17.917s
- 2026-04-18: additional warm-only checkpoint on the agent entry module with `--skip-cold --touch src/agent.rs --runs 1`:
  - warm touched-file `cargo check`: 7.318s
  - warm touched-file `selfdev-jcode` build: 30.928s
- 2026-04-18: additional warm-only checkpoint on the memory tool with `--skip-cold --touch src/tool/memory.rs --runs 1`:
  - warm touched-file `cargo check`: 7.787s
  - warm touched-file `selfdev-jcode` build: 12.798s
- 2026-04-18: additional warm-only checkpoint on session search with `--skip-cold --touch src/tool/session_search.rs --runs 1`:
  - warm touched-file `cargo check`: 7.009s
  - warm touched-file `selfdev-jcode` build: 12.874s
- 2026-04-18: additional warm-only checkpoint on the browser tool with `--skip-cold --touch src/tool/browser.rs --runs 1`:
  - warm touched-file `cargo check`: 13.693s
  - warm touched-file `selfdev-jcode` build: 18.874s
- 2026-04-28: diagnosed the repeated self-dev `jcode` lib-build `SIGTERM` on this 16 GiB, no-swap workstation. `journalctl -u earlyoom` showed earlyoom sending `SIGTERM` to the root `rustc` at about 1.09 GiB RSS when available memory crossed the 10% threshold. A direct no-`sccache` build reproduced the same signal, so `sccache` was only reporting the termination. `scripts/dev_cargo.sh` now enables adaptive low-memory overrides for `--profile selfdev` when Linux + earlyoom + no swap + <24 GiB RAM are detected: `CARGO_INCREMENTAL=0`, `CARGO_PROFILE_SELFDEV_INCREMENTAL=false`, and `CARGO_PROFILE_SELFDEV_CODEGEN_UNITS=16`. Use `JCODE_SELFDEV_LOW_MEMORY=off` to disable, or `JCODE_SELFDEV_LOW_MEMORY=on` to force. Validation: the same root build completed under those settings in 2m34s after the interrupted partial build reused artifacts.
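The adaptive gate described above can be pictured roughly as follows. `should_limit_selfdev_memory` and its argument order are illustrative assumptions; the environment variables and thresholds are the ones the entry names.

```shell
#!/bin/sh
# Illustrative sketch of the adaptive low-memory gate described above.
# Arguments: total RAM (KiB), total swap (KiB), earlyoom active (0/1).
should_limit_selfdev_memory() {
    ram_kib="$1"; swap_kib="$2"; earlyoom="$3"
    limit_kib=$((24 * 1024 * 1024))    # 24 GiB threshold, in KiB
    case "${JCODE_SELFDEV_LOW_MEMORY:-auto}" in
        on)  return 0 ;;               # force the overrides on
        off) return 1 ;;               # disable them entirely
    esac
    # Auto mode: only trip when all risk factors line up.
    [ "$earlyoom" -eq 1 ] && [ "$swap_kib" -eq 0 ] && [ "$ram_kib" -lt "$limit_kib" ]
}

# 16 GiB, no swap, earlyoom running -> cap rustc's memory appetite.
if should_limit_selfdev_memory $((16 * 1024 * 1024)) 0 1; then
    export CARGO_INCREMENTAL=0
    export CARGO_PROFILE_SELFDEV_INCREMENTAL=false
    export CARGO_PROFILE_SELFDEV_CODEGEN_UNITS=16
fi
```

Disabling incremental compilation and raising codegen units trades some rebuild speed for smaller peak `rustc` RSS, which is exactly the trade that keeps earlyoom from killing the root build on this class of machine.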
Warm-only touched-file checkpoints captured so far on this machine:
| Touched file | Warm `cargo check` | Warm `selfdev-jcode` build |
|---|---|---|
| `src/tool/session_search.rs` | 7.009s | 12.874s |
| `src/agent.rs` | 7.318s | 30.928s |
| `src/tool/memory.rs` | 7.787s | 12.798s |
| `src/tool/communicate.rs` | 8.496s | 21.400s |
| `src/server.rs` | 8.711s | 18.969s |
| `src/provider/openai.rs` | 8.750s | 21.386s |
| `src/tool/read.rs` | 9.339s | 18.844s |
| `src/provider/mod.rs` | 9.772s | 17.917s |
| `src/tool/browser.rs` | 13.693s | 18.874s |
Observed spread from these warm-only checkpoints:
- warm touched-file `cargo check`: 7.009s to 13.693s
- warm touched-file `selfdev-jcode` build: 12.798s to 30.928s
- fastest measured warm self-dev rebuilds so far are on smaller tool-path edits
- `src/agent.rs` currently stands out as the most expensive warm self-dev rebuild in this sample set
- `src/tool/browser.rs` currently stands out as the slowest warm `cargo check` in this sample set
The refined layered target, dependency rules, and migration guidance live in `docs/MODULAR_ARCHITECTURE_RFC.md`. The crate list below is the compile-performance-oriented destination sketch and should be read as compatible with that RFC, not as the only acceptable final packaging.
Proposed destination layout:
- `jcode-core`: protocol, ids, message types, config primitives, shared utility types
- `jcode-server`: server lifecycle, reload, socket, swarm, daemon behaviors
- `jcode-agent`: agent turn loop, tool orchestration, stream handling
- `jcode-provider`: provider traits, shared provider types, routing/catalog support
- `jcode-embedding`: embedding model integration and related heavy inference dependencies
- `jcode-tui`: TUI rendering, widgets, state reduction, terminal UI support
- `jcode-selfdev`: customization records, migration logic, self-dev productization
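In `Cargo.toml` terms, that destination sketch would look roughly like the workspace members list below. This is a hypothetical end state: the crate names come from the sketch above, the `crates/` paths are assumptions, and only the crates already logged as landed (for example `crates/jcode-embedding`) exist today.

```toml
# Hypothetical end-state workspace; not the current on-disk layout.
[workspace]
members = [
    "crates/jcode-core",
    "crates/jcode-server",
    "crates/jcode-agent",
    "crates/jcode-provider",
    "crates/jcode-embedding",
    "crates/jcode-tui",
    "crates/jcode-selfdev",
]
```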
Start with the highest-leverage cache boundaries:
- `jcode-embedding`
- provider support / provider implementation splits
- self-dev/customization system once the new extension-point work lands
- server / agent split along the seams already being extracted
- 2026-03-24: moved the heavy ONNX/tokenizer implementation into the new `crates/jcode-embedding` workspace crate.
- The main `src/embedding.rs` module now acts as a facade for process-local cache/stats/path/logging integration.
- This preserves the public `crate::embedding` API while creating a real Cargo cache boundary for the heaviest embedding dependencies.
- Follow-up: gather more realistic before/after timing data using controlled touched-file benchmarks rather than fully hot no-op rebuilds.
- 2026-03-24: moved PDF extraction behind the new `crates/jcode-pdf` workspace crate and fixed the `--no-default-features` build path by making PDF support degrade gracefully when the feature is disabled.
- 2026-03-24: moved Azure bearer-token retrieval behind the new `crates/jcode-azure-auth` workspace crate so the Azure SDK no longer lives directly in the main crate.
- Note: touched-file timing for `src/auth/azure.rs` needs more instrumentation cleanup; one post-split sample was anomalous and should not be treated as a trustworthy ROI datapoint yet.
- 2026-03-24: moved email notification / IMAP reply transport behind the new `crates/jcode-notify-email` workspace crate.
- The main `src/notifications.rs` module now keeps the higher-level ambient, safety, and channel integration while SMTP/IMAP/mail parsing lives behind a dedicated crate boundary.
- This split is primarily meant to keep `lettre`, `imap`, `mail-parser`, and `native-tls` out of unrelated self-dev rebuilds; edits to `notifications.rs` itself still invalidate the main crate and are not the right sole ROI metric.
- 2026-03-25: landed the first provider boundary slice with `crates/jcode-provider-metadata`.
- Boundary decision: provider metadata / profile catalogs / pure selection helpers move into their own crate first, while env mutation, config-file I/O, and runtime integration remain in `src/provider_catalog.rs` as a facade.
- This is intentionally narrower than a full `Provider` trait split: it creates a real provider-side compile boundary without prematurely dragging streaming/message/runtime dependencies into a shared crate that would likely stay high-churn.
- 2026-03-25: landed the next provider-core slice with `crates/jcode-provider-core`.
- Boundary decision: move shared HTTP client + route/cost/core provider value types first, but keep the `Provider` trait itself in `src/provider/mod.rs` for now.
- Reason: the trait currently still mixes in `message.rs`, runtime/auth behavior, and provider-specific streaming/compaction concerns; moving it too early would likely create a noisy, still-high-churn core crate.
- 2026-03-25: landed the first provider-implementation support crate with `crates/jcode-provider-openrouter`.
- Boundary decision: move OpenRouter-specific model catalog / endpoint cache / provider ranking / model-spec parsing support into a dedicated crate, while keeping the actual `Provider` trait impl, auth wiring, and message/stream translation in `src/provider/openrouter.rs`.
- Reason: this creates a real provider-implementation compile boundary now, without introducing a crate cycle through `Provider`, `EventStream`, or `message.rs`.
- 2026-03-25: landed the next provider-implementation support crate with `crates/jcode-provider-gemini`.
- Boundary decision: move Gemini Code Assist schema/types, model-list constants, and pure support helpers into a dedicated crate, while keeping the actual `Provider` trait impl, auth calls, and runtime/network orchestration in `src/provider/gemini.rs`.
- Reason: this creates another real provider-side compile boundary without forcing the `Provider` / `EventStream` seam prematurely.
- 2026-03-30: moved the pure OpenAI tool-schema normalization helpers into `crates/jcode-provider-core/src/openai_schema.rs`.
- Boundary decision: move pure schema adaptation / strict-normalization helpers first, while keeping `build_tools(...)` and request-history rewriting in `src/provider/openai_request.rs` because those still depend on local tool/message types.
- Reason: this creates another provider-side cache boundary now without prematurely pulling `Message`, `ToolDefinition`, or the `Provider` trait into a shared crate.
- 2026-03-30: moved the workspace-map subsystem into the new `crates/jcode-tui-workspace` crate.
- Boundary decision: move workspace map data/model + widget rendering first, while keeping the surrounding `info_widget`, app state, and higher-level TUI composition in the main crate.
- Reason: this is a safe first `jcode-tui` foothold because the workspace map code is already mostly self-contained and avoids the much riskier `App` / renderer / markdown / mermaid seams.
- Continue shrinking giant hotspot files.
- Keep high-churn code out of stable low-level crates.
- Avoid changing shared broad fanout types casually.
- Store customization intent, provenance, validation, and migration hints.
- Add extension points so more user changes live in:
- config
- hooks
- skills
- prompt overlays
- routing/theme/layout data
- Prefer those over direct Rust source edits whenever possible.
- 2026-03-30: landed the first prompt-overlay seam for system-prompt customization without a rebuild. jcode now loads `~/.jcode/prompt-overlay.md` and `./.jcode/prompt-overlay.md` into the static prompt, which is a low-risk first step toward the broader issue #32 customization plan.
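Using the seam is just a file drop; no compilation is involved. The two paths are the ones the loader checks per the entry above, while the overlay content here is an arbitrary example.

```shell
#!/bin/sh
# Create a project-local prompt overlay. No rebuild is required because
# the overlay file is read by jcode itself, not compiled into the binary.
mkdir -p .jcode
cat > .jcode/prompt-overlay.md <<'EOF'
Prefer minimal diffs and run the project's check script before finishing.
EOF
```

A user-level overlay works the same way at `~/.jcode/prompt-overlay.md`.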
Touched-file `cargo check` samples gathered during this batch:

- `src/server.rs`: ~8.7s
- `src/tool/read.rs`: ~8.8s
- `src/auth/azure.rs` before Azure crate split: ~7.0s
- `src/provider/openrouter.rs` before Azure crate split: ~6.5s
- `src/provider/openrouter.rs` after Azure crate split: ~6.0s
- `src/notifications.rs` after notification-email crate split: ~11.4s
- `src/channel.rs` after notification-email crate split: ~4.8s
- `src/provider_catalog.rs` after provider-metadata split: ~5.8s
- `src/provider/mod.rs` after provider-core type split: ~50.1s
- `src/provider/openrouter.rs` after openrouter-support crate split: ~5.6s
- `src/provider/gemini.rs` after gemini-support crate split: ~5.5s
Notes:
- The post-split touched-file measurement for `src/auth/azure.rs` produced an anomalous result and should not be treated as a reliable ROI datapoint yet.
- The post-split `src/notifications.rs` timing is not by itself a negative signal: touching that root module still rebuilds the main crate, while the intended win is that unrelated edits stop dragging mail transport dependencies through the same compile unit.
- No-op fully hot-cache reruns can look unrealistically fast; use touched-file scenarios when evaluating structural compile-speed changes.
- Provider metadata timings should be interpreted as a first provider-side foothold, not the final provider ROI story; the larger wins should come from future provider-core / implementation splits.
- The `src/provider/mod.rs` touched-file timing remains high because touching that root file still rebuilds the main crate and the auth/runtime-heavy trait logic. This stage is about carving out stable reusable pieces first, not claiming that the provider root is solved.
- The `src/provider/openrouter.rs` touched-file sample is more encouraging because the heavy OpenRouter-specific catalog/ranking/cache support now lives in its own crate while the main module stays a thinner wrapper.
- The `src/provider/gemini.rs` touched-file sample is similarly encouraging: the serde-heavy Code Assist schema and pure model-list/support helpers now live outside the main crate while the runtime wrapper remains local.
- `global-hotkey` is now gated behind `target_os = "macos"` instead of being compiled on all platforms.
- This is a smaller win than a crate split, but it removes an unnecessary dependency subtree from Linux self-dev builds because the hotkey listener implementation is macOS-only.
- Validation: on Linux, `cargo tree -i global-hotkey` is now empty.
The next obvious heavy dependency boundaries are less clearly safe/local than the ones already landed:
- provider support remains high-value, but `src/provider/mod.rs` and related implementations are broad enough that the next split should be designed carefully instead of rushed.
- a future `jcode-provider-core` / provider-implementation split is still the most promising next compile-speed move, but it needs boundary design first so high-churn shared types do not create a new invalidation hotspot.
Current provider-boundary stance:
- Done: `jcode-provider-metadata` for stable login/profile catalog data and pure selection logic.
- Done: `jcode-provider-core` for shared HTTP client plus route/cost/core provider value types.
- Done: `jcode-provider-openrouter` for OpenRouter-specific catalog/cache/ranking/model-spec support.
- Done: `jcode-provider-gemini` for Gemini Code Assist schema/types and pure model support helpers.
- Done: `jcode-provider-core::openai_schema` for pure OpenAI schema adaptation / strict-normalization helpers.
- Not done yet: `Provider` trait / `EventStream` extraction and fully standalone provider impl crates.
- Reason: the trait side still depends on `message.rs`, auth flows, runtime behavior, and provider-specific streaming logic; the current staged split avoids turning that unstable seam into a low-value high-churn crate.
That means the best next batch should likely target either:
- a carefully designed trait seam, or
- another provider implementation support split with similarly clean boundaries.
Current TUI-boundary stance:
- Done: `jcode-tui-workspace` for workspace-map model + widget rendering.
- Not done yet: broader `jcode-tui` extraction for markdown, mermaid, info widgets, and the shared renderer.
- Reason: the remaining high-value TUI files are larger but still more tightly coupled to `App`, config, images, side-panel state, and rendering orchestration, so they need staged extraction rather than a rushed top-level split.
Use:

```sh
scripts/dev_cargo.sh check --quiet
scripts/dev_cargo.sh build --release -p jcode --bin jcode --quiet
scripts/dev_cargo.sh build --profile selfdev -p jcode --bin jcode --quiet
scripts/dev_cargo.sh --print-setup
```

The wrapper:

- uses `sccache` automatically when available
- prefers `lld` locally on Linux x86_64
- uses the fast `selfdev` Cargo profile for self-dev build/reload workflows
- avoids hard-forcing a linker mode that may be broken on a given machine
- can print the currently selected cache/linker setup with `--print-setup`
Override linker mode explicitly when needed:

```sh
JCODE_FAST_LINKER=lld scripts/dev_cargo.sh build --release -p jcode --bin jcode
JCODE_FAST_LINKER=mold scripts/dev_cargo.sh build --release -p jcode --bin jcode
JCODE_FAST_LINKER=system scripts/dev_cargo.sh build --release -p jcode --bin jcode
```

For compile timing, prefer repeatable touched-file measurements over no-op hot-cache reruns:

```sh
scripts/bench_compile.sh check --runs 3 --touch src/server.rs
scripts/bench_compile.sh check --runs 3 --touch src/tool/read.rs
scripts/bench_compile.sh release-jcode --runs 3
scripts/bench_compile.sh selfdev-jcode --runs 3
scripts/bench_compile.sh build -- --package jcode --bin test_api
scripts/bench_selfdev_checkpoints.sh --touch src/server.rs --runs 3
```

`bench_compile.sh` now supports:

- `--runs <n>` for repeated timings with min/median/avg/max summaries
- `--touch <path>` to simulate a local edit before each timed run
- `--json` for scriptable output
- `-- <extra cargo args>` to narrow the measured target/package/bin/features

`bench_selfdev_checkpoints.sh` builds on that foundation to produce a single standard self-dev checkpoint bundle for cold/warm check + build comparisons.
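A min/median/avg/max summary like the one `--runs` reports can be reproduced with a small awk pass over raw per-run durations. This is an illustrative helper only; `bench_compile.sh`'s real output format may differ.

```shell
#!/bin/sh
# Summarize one duration-in-seconds per input line into a
# min/median/avg/max line. Illustrative, not the script's implementation.
summarize() {
    LC_ALL=C sort -n | LC_ALL=C awk '
        { v[NR] = $1; sum += $1 }
        END {
            # Median: middle value, or mean of the two middle values.
            med = (NR % 2) ? v[(NR + 1) / 2] : (v[NR / 2] + v[NR / 2 + 1]) / 2
            printf "min=%.3f median=%.3f avg=%.3f max=%.3f\n", v[1], med, sum / NR, v[NR]
        }'
}

# Example: printf '9.3\n8.7\n10.1\n' | summarize
```

Sorting first means min and max are simply the first and last stored values, and the median falls out of the array by index.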
After each structural phase we should re-measure and ask:
- Did warm `check` time improve materially?
- Did warm `build` / reload-oriented build time improve materially?
- Did we reduce rebuild scope for common self-dev edits?
If not, we should avoid continuing high-churn refactors on compile-time grounds alone.