docs: use parentheses for callable names in prose (DOCS-1362) (#2327)
## Summary
Aligns English documentation with the convention that **functions and
methods are written with trailing `()`** when mentioned in prose (not
inside code samples), for example `wandb.init()` and `run.log()`.
## Changes
- Edits across **models**, **platform**, **release-notes**, **weave**
(English only; `ja/`, `ko/`, and `support/` were not included).
- **AGENTS.md**: adds a style bullet and links to the [Global Functions
overview](https://docs.wandb.ai/models/ref/python/functions).
- Small follow-ups: `wandb.controller()` link text, `wandb.restore()` in
a few historical release-note bullets, and backtick cleanup in ref prose
where `wandb.init()` appeared without code formatting.
## Issue
Resolves DOCS-1362
Made with [Cursor](https://cursor.com)
---------
Co-authored-by: Noah Luna <15202580+ngrayluna@users.noreply.github.com>
Co-authored-by: Dan Brian <dbrian@coreweave.com>
### `AGENTS.md` (1 addition, 0 deletions)

```diff
@@ -186,6 +186,7 @@ Use sentence case for all headings and page titles. Capitalize only the first wo
 - Directory paths: `runbooks/`
 - Commands in sentences: `git push`
 - Code elements: `wandb.init()`
+- **Callable names in prose**: When you refer to a Python function or method in running text (not inside a code sample), use parentheses with backticks, for example `wandb.init()` or `run.log()`. Module-level functions in the Python SDK are listed in the [Global Functions overview](/models/ref/python/functions).
```
### `models/integrations/add-wandb-to-any-library.mdx` (4 additions, 4 deletions)

```diff
@@ -144,11 +144,11 @@ with wandb.init(project="<project_name>", entity="<entity>") as run:
 W&B recommends that you use a context manager to ensure that your run is properly closed, even if an error occurs. If you do not use a context manager, you must call `run.finish()` to close the run and log all the data to W&B.
 
 <Tip>
-**When to call `wandb.init`**
+**When to call `wandb.init()`**
 
 Call `wandb.init()` as early as possible. W&B captures stdout, stderr, and error messages, which makes debugging easier.
 
-Wrap your entire training loop in a `wandb.init` context manager to ensure that all relevant information is captured in the run. This includes any error messages, which can be crucial for debugging.
+Wrap your entire training loop in a `wandb.init()` context manager to ensure that all relevant information is captured in the run. This includes any error messages, which can be crucial for debugging.
@@ -173,4 +173,4 @@
-* Or, set `wandb` to be `disabled` in `wandb.init`:
+* Or, set `wandb` to be `disabled` in `wandb.init()`:
 
 <Tabs>
 <Tab title="Python">
@@ -292,7 +292,7 @@ See [`wandb.Run.log()`](/models/ref/python/experiments/run#method-run-log) for m
 If you perform multiple calls to `wandb.Run.log()` for the same training step, the wandb SDK increments an internal step counter for each call to `wandb.Run.log()`. This counter may not align with the training step in your training loop.
 
-To avoid this situation, define your x-axis step explicitly with `run.define_metric`, one time, immediately after you call `wandb.init`:
+To avoid this situation, define your x-axis step explicitly with `wandb.Run.define_metric()`, one time, immediately after you call `wandb.init()`:
```
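The step-counter drift described above can be illustrated without the SDK. Below is a pure-Python sketch of the behavior (the class and names are invented for illustration, not part of wandb): the internal counter advances on every `log()` call, while the training loop's step advances once per iteration.

```python
# Minimal model of the drift described above. This mimics the idea of
# W&B's auto-incrementing step counter; it is NOT the wandb SDK.

class StepCounterModel:
    """Illustration-only stand-in for a run's internal step counter."""

    def __init__(self):
        self.internal_step = 0
        self.history = []  # (internal_step, metrics) pairs

    def log(self, metrics):
        # Every call records at the current internal step, then advances it.
        self.history.append((self.internal_step, dict(metrics)))
        self.internal_step += 1

run = StepCounterModel()
for training_step in range(3):
    run.log({"loss": 0.1})  # first call for this training step
    run.log({"lr": 0.01})   # second call: internal step advances again

# After 3 training steps with 2 log() calls each, the internal counter
# sits at 6, not 3 -- the misalignment the docs warn about.
```

With the real SDK, the fix is the one the diff describes: call `run.define_metric()` once after `wandb.init()` to bind your metrics to an explicit step metric, then log that step value yourself.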
### `models/integrations/farama-gymnasium.mdx` (1 addition, 1 deletion)

```diff
@@ -3,7 +3,7 @@ description: "Integrate W&B with Farama Gymnasium to track reinforcement learnin
 title: Farama Gymnasium
 ---
 
-If you're using [Farama Gymnasium](https://gymnasium.farama.org/#) we will automatically log videos of your environment generated by `gymnasium.wrappers.Monitor`. Just set the `monitor_gym` keyword argument to [`wandb.init`](/models/ref/python/functions/init) to `True`.
+If you're using [Farama Gymnasium](https://gymnasium.farama.org/#) we will automatically log videos of your environment generated by `gymnasium.wrappers.Monitor`. Just set the `monitor_gym` keyword argument to [`wandb.init()`](/models/ref/python/functions/init) to `True`.
 
 Our gymnasium integration is very light. We simply [look at the name of the video file](https://github.com/wandb/wandb/blob/c5fe3d56b155655980611d32ef09df35cd336872/wandb/integration/gym/__init__.py#LL69C67-L69C67) being logged from `gymnasium` and name it after that or fall back to `"videos"` if we don't find a match. If you want more control, you can always just manually [log a video](/models/track/log/media/).
```
```diff
@@ -182 +182 @@
-In the examples above, `wandb` launches one run per process. At the end of the training, you will end up with two runs. This can sometimes be confusing, and you may want to log only on the main process. To do so, you will have to detect in which process you are manually and avoid creating runs (calling `wandb.init` in all other processes)
+In the examples above, `wandb` launches one run per process. At the end of the training, you will end up with two runs. This can sometimes be confusing, and you may want to log only on the main process. To do so, you will have to detect in which process you are manually and avoid creating runs (calling `wandb.init()` in all other processes)
```
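The main-process check mentioned above can be sketched as follows. This is a hedged example, not code from the docs: most distributed launchers (`torchrun`, for instance) export a `RANK` environment variable, but the exact variable depends on your launcher.

```python
import os

def is_main_process():
    """Treat an unset RANK, or RANK == 0, as the main process.

    Assumption: the launcher exports RANK (torchrun does); adjust the
    variable name for your own launcher.
    """
    return int(os.environ.get("RANK", "0")) == 0

# Only the main process would create a run; other ranks skip wandb.init().
# With wandb installed, you might write:
# if is_main_process():
#     import wandb
#     run = wandb.init(project="my-project")
```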
### `models/integrations/huggingface_transformers.mdx` (2 additions, 2 deletions)

```diff
@@ -453,9 +453,9 @@ WANDB_SILENT=true
 </Tabs>
 
-### How do I customize `wandb.init`?
+### How do I customize `wandb.init()`?
 
-The `WandbCallback` that `Trainer` uses will call `wandb.init` under the hood when `Trainer` is initialized. You can alternatively set up your runs manually by calling `wandb.init` before the `Trainer` is initialized. This gives you full control over your W&B run configuration.
+The `WandbCallback` that `Trainer` uses will call `wandb.init()` under the hood when `Trainer` is initialized. You can alternatively set up your runs manually by calling `wandb.init()` before the `Trainer` is initialized. This gives you full control over your W&B run configuration.
 
 An example of what you might want to pass to `init` is below. For `wandb.init()` details, see the [`wandb.init()` reference](/models/ref/python/functions/init).
```
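To make the "manual init before `Trainer`" pattern concrete, here is a hedged sketch of the kind of keyword arguments you might pass. The project, group, and config values are placeholders, not anything prescribed by the docs; the commented lines show where the real calls would go with `wandb` and `transformers` installed.

```python
# Hypothetical wandb.init() arguments for a manually configured run.
# Every value here is a placeholder -- substitute your own.
init_kwargs = dict(
    project="hf-experiments",   # placeholder project name
    group="bert-finetune",      # group related runs together
    job_type="train",
    config={"epochs": 3, "learning_rate": 5e-5},
)

# With wandb and transformers installed, you would then run, before
# constructing the Trainer:
# import wandb
# run = wandb.init(**init_kwargs)
# trainer = Trainer(..., report_to="wandb")
```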
### `models/integrations/kubeflow-pipelines-kfp.mdx` (2 additions, 2 deletions)

```diff
@@ -147,9 +147,9 @@ Here's a mapping of Kubeflow Pipelines concepts to W&B
 ## Fine-grain logging
 
-If you want finer control of logging, you can sprinkle in `wandb.log` and `wandb.log_artifact` calls in the component.
+If you want finer control of logging, you can sprinkle in `wandb.log()` and `wandb.log_artifact()` calls in the component.
 
-### With explicit `wandb.log_artifacts` calls
+### With explicit `wandb.log_artifact()` calls
 
 In this example below, we are training a model. The `@wandb_log` decorator will automatically track the relevant inputs and outputs. If you want to log the training process, you can explicitly add that logging like so:
```
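The "sprinkle in explicit calls" pattern can be sketched like this. Only the metric bookkeeping below is real, runnable Python; the `wandb.log()` and `wandb.log_artifact()` calls are shown as comments, and the loss schedule is a stand-in for actual training.

```python
# Hedged sketch of explicit logging inside a component. The halving
# loss is a stand-in for real training; wandb calls are commented.

def train(num_epochs=3):
    history = []
    loss = 1.0
    for epoch in range(num_epochs):
        loss *= 0.5  # stand-in for one real training epoch
        metrics = {"epoch": epoch, "loss": loss}
        history.append(metrics)
        # With wandb installed, inside the component you would call:
        # wandb.log(metrics)
    # After training you might save the model file and attach it:
    # artifact = wandb.Artifact("model", type="model")
    # artifact.add_file("model.pt")
    # wandb.log_artifact(artifact)
    return history

history = train()
```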
```diff
@@ -28,4 +28,4 @@
-**Using wandb.log():** The `WandbLogger` logs to W&B using the Trainer's `global_step`. If you make additional calls to `wandb.log` directly in your code, **do not** use the `step` argument in `wandb.log()`.
+**Using wandb.log():** The `WandbLogger` logs to W&B using the Trainer's `global_step`. If you make additional calls to `wandb.log()` directly in your code, **do not** use the `step` argument in `wandb.log()`.
 
 Instead, log the Trainer's `global_step` like your other metrics:
@@ -295,7 +295,7 @@ for epoch in range(num_epochs):
 Using wandb's [`define_metric`](/models/ref/python/experiments/run#define_metric) function you can define whether you'd like your W&B summary metric to display the min, max, mean or best value for that metric. If `define_metric` isn't used, then the last value logged will appear in your summary metrics. See the `define_metric` [reference docs here](/models/ref/python/experiments/run#define_metric) and the [guide here](/models/track/log/customize-logging-axes/) for more.
 
-To tell W&B to keep track of the max validation accuracy in the W&B summary metric, call `wandb.define_metric` only once, at the beginning of training:
+To tell W&B to keep track of the max validation accuracy in the W&B summary metric, call `wandb.define_metric()` only once, at the beginning of training:
 
 <Tabs>
 <Tab title="PyTorch Logger">
@@ -396,7 +396,7 @@ Here you can organize your best models by task, manage model lifecycle, facilita
 The `WandbLogger` has `log_image`, `log_text` and `log_table` methods for logging media.
 
-You can also directly call `wandb.log` or `trainer.logger.experiment.log` to log other media types such as Audio, Molecules, Point Clouds, 3D Objects and more.
+You can also directly call `wandb.log()` or `trainer.logger.experiment.log()` to log other media types such as Audio, Molecules, Point Clouds, 3D Objects and more.
```
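The `summary="max"` behavior described in the `define_metric` hunk can be modeled in a few lines of plain Python. This is an illustration of the idea only (the class below is invented, not the wandb SDK): when a metric is registered with a `max` rule, the run summary keeps the best value seen rather than the last one logged.

```python
# Pure-Python model of summary rules; NOT the wandb SDK.

class SummaryModel:
    def __init__(self):
        self.rules = {}    # metric name -> "max" | "min" | "last"
        self.summary = {}

    def define_metric(self, name, summary="last"):
        self.rules[name] = summary

    def log(self, metrics):
        for name, value in metrics.items():
            rule = self.rules.get(name, "last")
            if rule == "max" and name in self.summary:
                self.summary[name] = max(self.summary[name], value)
            elif rule == "min" and name in self.summary:
                self.summary[name] = min(self.summary[name], value)
            else:  # "last", or first value seen
                self.summary[name] = value

run = SummaryModel()
run.define_metric("val_accuracy", summary="max")
for acc in [0.71, 0.85, 0.79]:
    run.log({"val_accuracy": acc})

# The summary now holds the max (0.85), not the last value (0.79).
```

With the real SDK, this corresponds to calling `run.define_metric("val_accuracy", summary="max")` once after `wandb.init()`, then logging `val_accuracy` normally.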