[llvm] [llvm] Proofread Docker.rst (PR #160448)

Kazu Hirata via llvm-commits llvm-commits at lists.llvm.org
Tue Sep 23 22:59:29 PDT 2025


https://github.com/kazutakahirata created https://github.com/llvm/llvm-project/pull/160448

None

>From dc9cae5420e720b420c65904ecb4ae2f640643d1 Mon Sep 17 00:00:00 2001
From: Kazu Hirata <kazu at google.com>
Date: Tue, 23 Sep 2025 08:46:19 -0700
Subject: [PATCH] [llvm] Proofread Docker.rst

---
 llvm/docs/Docker.rst | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/llvm/docs/Docker.rst b/llvm/docs/Docker.rst
index 5d976eddb3130..1832b59022323 100644
--- a/llvm/docs/Docker.rst
+++ b/llvm/docs/Docker.rst
@@ -27,8 +27,8 @@ to get a very basic explanation of it.
 `Docker <https://www.docker.com/>`_ is a popular solution for running programs in
 an isolated and reproducible environment, especially to maintain releases for
 software deployed to large distributed fleets.
-It uses linux kernel namespaces and cgroups to provide a lightweight isolation
-inside currently running linux kernel.
+It uses Linux kernel namespaces and cgroups to provide lightweight isolation
+inside the currently running Linux kernel.
 A single active instance of dockerized environment is called a *docker
 container*.
 A snapshot of a docker container filesystem is called a *docker image*.
@@ -127,17 +127,17 @@ Which image should I choose?
 We currently provide two images: Debian12-based and nvidia-cuda-based. They
 differ in the base image that they use, i.e. they have a different set of
 preinstalled binaries. Debian8 is very minimal, nvidia-cuda is larger, but has
-preinstalled CUDA libraries and allows to access a GPU, installed on your
+preinstalled CUDA libraries and allows access to a GPU installed on your
 machine.
 
-If you need a minimal linux distribution with only clang and libstdc++ included,
+If you need a minimal Linux distribution with only clang and libstdc++ included,
 you should try Debian12-based image.
 
 If you want to use CUDA libraries and have access to a GPU on your machine,
 you should choose nvidia-cuda-based image and use `nvidia-docker
 <https://github.com/NVIDIA/nvidia-docker>`_ to run your docker containers. Note
 that you don't need nvidia-docker to build the images, but you need it in order
-to have an access to GPU from a docker container that is running the built
+to have access to a GPU from a docker container that is running the built
 image.
 
 If you have a different use-case, you could create your own image based on
@@ -176,4 +176,4 @@ The first image is only used during build and does not have a descriptive name,
 i.e. it is only accessible via the hash value after the build is finished.
 The second image is our resulting image. It contains only the built binaries
 and not any build dependencies. It is also accessible via a descriptive name
-(specified by -d and -t flags).
+(specified by the ``-d`` and ``-t`` flags).
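
For context, a rough sketch of how the built nvidia-cuda-based image might be
run, per the doc's pointer to nvidia-docker. The image name "llvm-cuda" below
is only a hypothetical tag standing in for whatever was passed via ``-t``:

  # Legacy wrapper referenced in the doc:
  nvidia-docker run -ti llvm-cuda /bin/bash
  # Equivalent on Docker 19.03+ without the wrapper:
  docker run --gpus all -ti llvm-cuda /bin/bash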


