
Commit 405d692

coco docs: Remove trailing whitespaces
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
1 parent 87ae31f commit 405d692

1 file changed

Lines changed: 43 additions & 43 deletions

confidential-containers/overview.rst
@@ -31,7 +31,7 @@ Overview
========

NVIDIA GPUs power the training and deployment of Frontier Models: world-class Large Language Models (LLMs) that define the state of the art in AI reasoning and capability.

As organizations adopt these models in regulated industries such as financial services, healthcare, and the public sector, protecting model intellectual property and sensitive user data becomes essential. Additionally, the model deployment landscape is evolving to include public clouds, enterprise on-premises, and edge. A zero-trust posture on cloud-native platforms such as Kubernetes is required to secure assets (model IP and enterprise private data) from untrusted infrastructure with privileged user access.

Securing data at rest and in transit is standard practice. Protecting data in use remains a critical gap. Confidential Computing (CC) addresses this gap by providing isolation, encryption, and integrity verification of proprietary application code and sensitive data during processing. CC uses hardware-based Trusted Execution Environments (TEEs), such as AMD SEV-SNP and Intel TDX, together with NVIDIA Confidential Computing capabilities to create trusted enclaves.
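
The trust decision behind a TEE rests on remote attestation: the enclave proves its launch measurement to a verifier before any secret is released. The following is a conceptual sketch only, not the SEV-SNP or TDX wire protocol; real reports are signed with hardware-rooted keys and verified against vendor certificate chains, and the names `Verifier` and `make_report` are illustrative.

```python
# Conceptual attestation handshake: a verifier releases trust only
# when the reported measurement matches expectations and the report
# is bound to a fresh nonce. Illustrative only -- real TEEs use
# asymmetric, hardware-rooted signing, not a shared HMAC key.
import hashlib
import hmac
import os

HW_KEY = os.urandom(32)  # stand-in for a hardware-rooted signing key

def make_report(measurement: bytes, nonce: bytes) -> bytes:
    """TEE side: bind the launch measurement to the verifier's nonce."""
    return hmac.new(HW_KEY, measurement + nonce, hashlib.sha256).digest()

class Verifier:
    """Relying party: accepts only the expected guest measurement."""
    def __init__(self, expected_measurement: bytes):
        self.expected = expected_measurement
        self.nonce = os.urandom(16)  # freshness defeats report replay

    def check(self, report: bytes) -> bool:
        good = hmac.new(HW_KEY, self.expected + self.nonce,
                        hashlib.sha256).digest()
        return hmac.compare_digest(report, good)

guest = hashlib.sha256(b"guest kernel + initrd + policy").digest()
v = Verifier(expected_measurement=guest)
assert v.check(make_report(guest, v.nonce))          # trusted guest
tampered = hashlib.sha256(b"modified guest").digest()
assert not v.check(make_report(tampered, v.nonce))   # rejected
```

In the Confidential Containers stack described below, this role is played by the Trustee/KBS components rather than code like this.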

@@ -66,7 +66,7 @@ Use Cases

The target for Confidential Containers is to enable model providers (closed and open source) and enterprises to leverage the advancements of generative AI, agnostic to the deployment model (cloud, enterprise, or edge). Some of the key use cases that CC and Confidential Containers enable are:

* **Zero-Trust AI & IP Protection:** You can deploy proprietary models (such as LLMs) on third-party or private infrastructure. The model weights remain encrypted and are decrypted only inside the hardware-protected enclave, protecting model IP from the host.
* **Data Clean Rooms:** Process sensitive enterprise data (such as financial analytics or healthcare records) securely. Neither the infrastructure provider nor the model builder can see the raw data.

.. image:: graphics/CoCo-Sample-Workflow.png
@@ -81,7 +81,7 @@ Software Components for Confidential Containers

The following is a brief overview of the software components for Confidential Containers.

**Kata Containers**

Acts as the secure isolation layer by running standard Kubernetes Pods inside lightweight, hardware-isolated Utility VMs (UVMs) rather than sharing the untrusted host kernel. Kata Containers is integrated with the Kubernetes `Agent Sandbox <https://github.com/kubernetes-sigs/agent-sandbox>`_ project to deliver sandboxing capabilities.
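
In practice, a workload opts into Kata isolation through a Kubernetes RuntimeClass. A minimal sketch, assuming kata-deploy has installed an SNP-capable runtime handler; the handler name ``kata-qemu-snp``, the pod name, and the workload image are illustrative and depend on your installation (verify with ``kubectl get runtimeclass``):

```yaml
# Illustrative only: the handler name comes from your kata-deploy
# installation.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-qemu-snp               # assumed class/handler name
handler: kata-qemu-snp
---
apiVersion: v1
kind: Pod
metadata:
  name: confidential-demo           # illustrative pod name
spec:
  runtimeClassName: kata-qemu-snp   # run this Pod inside a Kata UVM
  containers:
  - name: app
    image: nginx                    # placeholder workload image
```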

@@ -127,9 +127,9 @@ A minimal, chiseled and hardened init system that securely bootstraps the guest
Software Stack and Component Versions
--------------------------------------

The following is the component stack that supports the open Reference Architecture (RA), along with the proposed versions of the software components.

.. flat-table::
   :header-rows: 1

   * - Category
@@ -142,39 +142,39 @@ The following is the component stack to support the open Reference Architecture
       | Blackwell RTX Pro 6000
   * - CPU Platform
     - | AMD Genoa/ Milan
       | Intel ER/ GR
   * - :rspan:`7` **Host SW Components**
     - Host OS
     - 25.10
   * - Host Kernel
     - 6.17+
   * - Guest OS
     - Distroless
   * - Guest kernel
     - 6.18.5
   * - OVMF
     - edk2-stable202511
   * - QEMU
     - 10.1 \+ Patches
   * - Containerd
     - 2.2.2 \+
   * - Kubernetes
     - 1.32 \+
   * - :rspan:`3` **Confidential Containers Core Components**
     - NFD
     - v0.6.0
   * - | NVIDIA/gpu-operator
       | - NVIDIA VFIO Manager
       | - NVIDIA Sandbox device plugin
       | - NVIDIA Confidential Computing Manager for Kubernetes
       | - NVIDIA Kata Manager for Kubernetes
     - v25.10.0 and higher
   * - | CoCo release (EA)
       | - Kata 3.25 (w/ kata-deploy helm)
       | - Trustee/Guest components 0.17.0
       | - KBS protocol 0.4.0
     - v0.18.0
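
To compare a host against this stack, the installed versions can be queried directly. A sketch using standard CLI entry points; the ``|| echo`` fallbacks keep it safe on hosts where a component is not yet installed:

```shell
# Report installed host-side component versions for comparison with
# the proposed stack (kernel 6.17+, containerd 2.2.2+, Kubernetes
# 1.32+, QEMU 10.1 + patches).
uname -r
containerd --version 2>/dev/null || echo "containerd: not installed"
kubectl version --client 2>/dev/null || echo "kubectl: not installed"
qemu-system-x86_64 --version 2>/dev/null || echo "qemu: not installed"
```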

Cluster Topology Considerations
-------------------------------
@@ -227,19 +227,19 @@ Refer to the *Confidential Computing Deployment Guide* at the `Confidential Comp

The following topics in the deployment guide apply to a cloud-native environment:

* Hardware selection and initial hardware configuration, such as BIOS settings.
* Host operating system selection, initial configuration, and validation.

When following the cloud-native sections in the deployment guide linked above, use Ubuntu 25.10 as the host OS with its default kernel version and configuration.

The remaining configuration topics in the deployment guide do not apply to a cloud-native environment. NVIDIA GPU Operator performs the actions that are described in these topics.
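
A quick host sanity check before continuing, as a sketch; the commands are standard Linux utilities, and the expected values reflect the stack described on this page:

```shell
# Verify the host matches the recommended stack before proceeding.
# Expected: Ubuntu 25.10 with its default kernel (6.17 or newer).
. /etc/os-release
echo "Host OS: ${NAME} ${VERSION_ID}"
echo "Kernel:  $(uname -r)"
```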
Limitations and Restrictions for CoCo EA
----------------------------------------

* Only the AMD platform using SEV-SNP is supported for Confidential Containers Early Access.
* GPUs are available to containers as a single GPU in passthrough mode only. Multi-GPU passthrough and vGPU are not supported.
* Support is limited to initial installation and configuration only. Upgrading or reconfiguring existing clusters for confidential computing is not supported.
* Support for confidential computing environments is limited to the implementation described on this page.
* NVIDIA supports the GPU Operator and confidential computing with the containerd runtime only.
* NFD doesn't automatically label all Confidential Container-capable nodes. In some cases, you must manually label nodes to deploy the NVIDIA Confidential Computing Manager for Kubernetes operand onto them, as described in the deployment guide.
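
The manual labeling is a one-line cluster operation. A sketch only: ``<node-name>`` is a placeholder, and the label key/value must be taken from the deployment guide; the key shown here is hypothetical.

```shell
# Label a CC-capable node so the NVIDIA Confidential Computing
# Manager operand is scheduled onto it. Replace <node-name> and use
# the exact label key/value documented in the deployment guide --
# the one below is a hypothetical placeholder.
kubectl label node <node-name> nvidia.com/cc.capable=true
```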
