
Commit 3ba04a3

johndmulhausen, ngrayluna, and dbrian57 authored
docs: use parentheses for callable names in prose (DOCS-1362) (#2327)
## Summary

Aligns English documentation with the convention that **functions and methods are written with trailing `()`** when mentioned in prose (not inside code samples), for example `wandb.init()` and `run.log()`.

## Changes

- Edits across **models**, **platform**, **release-notes**, **weave** (English only; `ja/`, `ko/`, and `support/` were not included).
- **AGENTS.md**: adds a style bullet and links to the [Global Functions overview](https://docs.wandb.ai/models/ref/python/functions).
- Small follow-ups: `wandb.controller()` link text, `wandb.restore()` in a few historical release-note bullets, and backtick cleanup in ref prose where `wandb.init()` appeared without code formatting.

## Issue

Resolves DOCS-1362

Made with [Cursor](https://cursor.com)

Co-authored-by: Noah Luna <15202580+ngrayluna@users.noreply.github.com>
Co-authored-by: Dan Brian <dbrian@coreweave.com>
1 parent a899035 commit 3ba04a3

47 files changed

Lines changed: 145 additions & 144 deletions


AGENTS.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -186,6 +186,7 @@ Use sentence case for all headings and page titles. Capitalize only the first wo
 - Directory paths: `runbooks/`
 - Commands in sentences: `git push`
 - Code elements: `wandb.init()`
+- **Callable names in prose**: When you refer to a Python function or method in running text (not inside a code sample), use parentheses with backticks, for example `wandb.init()` or `run.log()`. Module-level functions in the Python SDK are listed in the [Global Functions overview](/models/ref/python/functions).

 ### Code examples
```

models/artifacts/create-a-new-artifact-version.mdx

Lines changed: 2 additions & 2 deletions

````diff
@@ -35,7 +35,7 @@ Based on your use case, select one of the tabs below to create a new artifact ve
 <Tab title="Inside a run">
 Create an artifact version within a W&B run:

-1. Create a run with `wandb.init`.
+1. Create a run with `wandb.init()`.
 2. Create a new artifact or retrieve an existing one with `wandb.Artifact`.
 3. Add files to the artifact with `.add_file`.
 4. Log the artifact to the run with `.log_artifact`.
@@ -108,7 +108,7 @@ with wandb.init() as run:

 #### Run 3

-Must run after Run 1 and Run 2 complete. The Run that calls `finish_artifact` can include files in the artifact, but does not need to.
+Must run after Run 1 and Run 2 complete. The Run that calls `wandb.Run.finish_artifact()` can include files in the artifact, but does not need to.

 ```python
 with wandb.init() as run:
````

models/artifacts/storage.mdx

Lines changed: 1 addition & 1 deletion

```diff
@@ -11,7 +11,7 @@ During training, W&B locally saves logs, artifacts, and configuration files in t

 | File | Default location | To change default location set: |
 | ---- | ---------------- | ------------------------------- |
-| logs | `./wandb` | `dir` in `wandb.init` or set the `WANDB_DIR` environment variable |
+| logs | `./wandb` | `dir` in `wandb.init()` or set the `WANDB_DIR` environment variable |
 | artifacts | `~/.cache/wandb` | the `WANDB_CACHE_DIR` environment variable |
 | configs | `~/.config/wandb` | the `WANDB_CONFIG_DIR` environment variable |
 | staging artifacts for upload | `~/.cache/wandb-data/` | the `WANDB_DATA_DIR` environment variable |
```
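The environment variables in the table above can be set before any `wandb` command or script runs. A hedged sketch assuming a POSIX shell; the target paths are examples only.

```shell
# Redirect each W&B storage location (paths are illustrative)
export WANDB_DIR=/tmp/wandb-logs
export WANDB_CACHE_DIR=/tmp/wandb-cache
export WANDB_CONFIG_DIR=/tmp/wandb-config
export WANDB_DATA_DIR=/tmp/wandb-staging
```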

models/integrations/accelerate.mdx

Lines changed: 3 additions & 3 deletions

```diff
@@ -26,7 +26,7 @@ accelerator.init_trackers(

 ...

-# Log to wandb by calling `accelerator.log`, `step` is optional
+# Log to wandb by calling accelerator.log(); step is optional
 accelerator.log({"train_loss": 1.12, "valid_loss": 0.8}, step=global_step)

@@ -40,8 +40,8 @@ Explaining more, you need to:
 - a project name via `project_name`
 - any parameters you want to pass to [`wandb.init()`](/models/ref/python/functions/init) via a nested dict to `init_kwargs`
 - any other experiment config information you want to log to your wandb run, via `config`
-3. Use the `.log` method to log to Weigths & Biases; the `step` argument is optional
-4. Call `.end_training` when finished training
+3. Use the `wandb.Run.log()` method to log to Weigths & Biases; the `step` argument is optional
+4. Call `.end_training()` when finished training

 ## Access the W&B tracker
```

models/integrations/add-wandb-to-any-library.mdx

Lines changed: 4 additions & 4 deletions

````diff
@@ -144,11 +144,11 @@ with wandb.init(project="<project_name>", entity="<entity>") as run:
 W&B recommends that you use a context manager to ensure that your run is properly closed, even if an error occurs. If you do not use a context manager, you must call `run.finish()` to close the run and log all the data to W&B.

 <Tip>
-**When to call `wandb.init`**
+**When to call `wandb.init()`**

 Call `wandb.init()` as early as possible. W&B captures stdout, stderr, and error messages, which makes debugging easier.

-Wrap your entire training loop in a `wandb.init` context manager to ensure that all relevant information is captured in the run. This includes any error messages, which can be crucial for debugging.
+Wrap your entire training loop in a `wandb.init()` context manager to ensure that all relevant information is captured in the run. This includes any error messages, which can be crucial for debugging.
 </Tip>

 ### Set `wandb` as an optional dependency
@@ -170,7 +170,7 @@ python train.py ... --use-wandb
 </Tab>
 </Tabs>

-* Or, set `wandb` to be `disabled` in `wandb.init`:
+* Or, set `wandb` to be `disabled` in `wandb.init()`:

 <Tabs>
 <Tab title="Python">
@@ -292,7 +292,7 @@ See [`wandb.Run.log()`](/models/ref/python/experiments/run#method-run-log) for m

 If you perform multiple calls to `wandb.Run.log()` for the same training step, the wandb SDK increments an internal step counter for each call to `wandb.Run.log()`. This counter may not align with the training step in your training loop.

-To avoid this situation, define your x-axis step explicitly with `run.define_metric`, one time, immediately after you call `wandb.init`:
+To avoid this situation, define your x-axis step explicitly with `wandb.Run.define_metric()`, one time, immediately after you call `wandb.init()`:

 ```python
 with wandb.init(...) as run:
````

models/integrations/farama-gymnasium.mdx

Lines changed: 1 addition & 1 deletion

```diff
@@ -3,7 +3,7 @@ description: "Integrate W&B with Farama Gymnasium to track reinforcement learnin
 title: Farama Gymnasium
 ---

-If you're using [Farama Gymnasium](https://gymnasium.farama.org/#) we will automatically log videos of your environment generated by `gymnasium.wrappers.Monitor`. Just set the `monitor_gym` keyword argument to [`wandb.init`](/models/ref/python/functions/init) to `True`.
+If you're using [Farama Gymnasium](https://gymnasium.farama.org/#) we will automatically log videos of your environment generated by `gymnasium.wrappers.Monitor`. Just set the `monitor_gym` keyword argument to [`wandb.init()`](/models/ref/python/functions/init) to `True`.

 Our gymnasium integration is very light. We simply [look at the name of the video file](https://github.com/wandb/wandb/blob/c5fe3d56b155655980611d32ef09df35cd336872/wandb/integration/gym/__init__.py#LL69C67-L69C67) being logged from `gymnasium` and name it after that or fall back to `"videos"` if we don't find a match. If you want more control, you can always just manually [log a video](/models/track/log/media/).
```

models/integrations/fastai.mdx

Lines changed: 1 addition & 1 deletion

```diff
@@ -179,7 +179,7 @@ notebook_launcher(train, num_processes=2)

 ### Log only on the main process

-In the examples above, `wandb` launches one run per process. At the end of the training, you will end up with two runs. This can sometimes be confusing, and you may want to log only on the main process. To do so, you will have to detect in which process you are manually and avoid creating runs (calling `wandb.init` in all other processes)
+In the examples above, `wandb` launches one run per process. At the end of the training, you will end up with two runs. This can sometimes be confusing, and you may want to log only on the main process. To do so, you will have to detect in which process you are manually and avoid creating runs (calling `wandb.init()` in all other processes)

 <Tabs>
 <Tab title="Script">
```

models/integrations/huggingface_transformers.mdx

Lines changed: 2 additions & 2 deletions

```diff
@@ -453,9 +453,9 @@ WANDB_SILENT=true
 </Tabs>

-### How do I customize `wandb.init`?
+### How do I customize `wandb.init()`?

-The `WandbCallback` that `Trainer` uses will call `wandb.init` under the hood when `Trainer` is initialized. You can alternatively set up your runs manually by calling `wandb.init` before the`Trainer` is initialized. This gives you full control over your W&B run configuration.
+The `WandbCallback` that `Trainer` uses will call `wandb.init()` under the hood when `Trainer` is initialized. You can alternatively set up your runs manually by calling `wandb.init()` before the`Trainer` is initialized. This gives you full control over your W&B run configuration.

 An example of what you might want to pass to `init` is below. For `wandb.init()` details, see the [`wandb.init()` reference](/models/ref/python/functions/init).
```

models/integrations/kubeflow-pipelines-kfp.mdx

Lines changed: 2 additions & 2 deletions

```diff
@@ -147,9 +147,9 @@ Here's a mapping of Kubeflow Pipelines concepts to W&B

 ## Fine-grain logging

-If you want finer control of logging, you can sprinkle in `wandb.log` and `wandb.log_artifact` calls in the component.
+If you want finer control of logging, you can sprinkle in `wandb.log()` and `wandb.log_artifact()` calls in the component.

-### With explicit `wandb.log_artifacts` calls
+### With explicit `wandb.log_artifact()` calls

 In this example below, we are training a model. The `@wandb_log` decorator will automatically track the relevant inputs and outputs. If you want to log the training process, you can explicitly add that logging like so:
```

models/integrations/lightning.mdx

Lines changed: 3 additions & 3 deletions

````diff
@@ -25,7 +25,7 @@ trainer = Trainer(logger=wandb_logger)
 ```

 <Note>
-**Using wandb.log():** The `WandbLogger` logs to W&B using the Trainer's `global_step`. If you make additional calls to `wandb.log` directly in your code, **do not** use the `step` argument in `wandb.log()`.
+**Using wandb.log():** The `WandbLogger` logs to W&B using the Trainer's `global_step`. If you make additional calls to `wandb.log()` directly in your code, **do not** use the `step` argument in `wandb.log()`.

 Instead, log the Trainer's `global_step` like your other metrics:

@@ -295,7 +295,7 @@ for epoch in range(num_epochs):

 Using wandb's [`define_metric`](/models/ref/python/experiments/run#define_metric) function you can define whether you'd like your W&B summary metric to display the min, max, mean or best value for that metric. If `define`_`metric` _ isn't used, then the last value logged with appear in your summary metrics. See the `define_metric` [reference docs here](/models/ref/python/experiments/run#define_metric) and the [guide here](/models/track/log/customize-logging-axes/) for more.

-To tell W&B to keep track of the max validation accuracy in the W&B summary metric, call `wandb.define_metric` only once, at the beginning of training:
+To tell W&B to keep track of the max validation accuracy in the W&B summary metric, call `wandb.define_metric()` only once, at the beginning of training:

 <Tabs>
 <Tab title="PyTorch Logger">
@@ -396,7 +396,7 @@ Here you can organize your best models by task, manage model lifecycle, facilita

 The `WandbLogger` has `log_image`, `log_text` and `log_table` methods for logging media.

-You can also directly call `wandb.log` or `trainer.logger.experiment.log` to log other media types such as Audio, Molecules, Point Clouds, 3D Objects and more.
+You can also directly call `wandb.log()` or `trainer.logger.experiment.log()` to log other media types such as Audio, Molecules, Point Clouds, 3D Objects and more.

 <Tabs>
 <Tab title="Log Images">
````
