
Commit 153a2a7

star1327p and rossbar authored
DOC: Correct typos in numpy tutorials (#269)
* DOC: traini -> train
* DOC: Correct typos in mooreslaw-tutorial.md
* DOC: Correct punctuation usage in Sentiment Analysis tutorial
* DOC: weights values -> weights
* Update content/mooreslaw-tutorial.md

Co-authored-by: Ross Barnowski <rossbar@caltech.edu>
1 parent 2209cbd commit 153a2a7

4 files changed, +27 −25 lines

content/mooreslaw-tutorial.md

Lines changed: 5 additions & 5 deletions

@@ -106,7 +106,7 @@ $B_M=-675.4$
 
 Since the function represents Moore's law, define it as a Python
 function using
-[`lambda`](https://docs.python.org/3/library/ast.html?highlight=lambda#ast.Lambda)
+[`lambda`](https://docs.python.org/3/library/ast.html?highlight=lambda#ast.Lambda):
 
 ```{code-cell}
 A_M = np.log(2) / 2
@@ -156,7 +156,7 @@ The extra options below will put the data in the desired format:
 
 * `delimiter = ','`: specify delimeter as a comma ',' (this is the default behavior)
 * `usecols = [1,2]`: import the second and third columns from the csv
-* `skiprows = 1`: do not use the first row, because its a header row
+* `skiprows = 1`: do not use the first row, because it's a header row
 
 ```{code-cell}
 data = np.loadtxt("transistor_data.csv", delimiter=",", usecols=[1, 2], skiprows=1)
@@ -282,7 +282,7 @@ In the next plot, use the
 [`fivethirtyeight`](https://matplotlib.org/3.1.1/gallery/style_sheets/fivethirtyeight.html)
 style sheet.
 The style sheet replicates
-https://fivethirtyeight.com elements. Change the matplotlib style with
+<https://fivethirtyeight.com> elements. Change the matplotlib style with
 [`plt.style.use`](https://matplotlib.org/3.3.2/api/style_api.html#matplotlib.style.use).
 
 ```{code-cell}
@@ -334,7 +334,7 @@ option,
 to increase the transparency of the data. The more opaque the points
 appear, the more reported values lie on that measurement. The green $+$
 is the average reported transistor count for 2017. Plot your predictions
-for $\pm\frac{1}{2}~years.
+for $\pm\frac{1}{2}$ years.
 
 ```{code-cell}
 transistor_count2017 = transistor_count[year == 2017]
@@ -386,7 +386,7 @@ array using `np.loadtxt`, to save your model use two approaches
 ### Zipping the arrays into a file
 Using `np.savez`, you can save thousands of arrays and give them names. The
 function `np.load` will load the arrays back into the workspace as a
-dictionary. You'll save a five arrays so the next user will have the year,
+dictionary. You'll save five arrays so the next user will have the year,
 transistor count, predicted transistor count, Gordon Moore's
 predicted count, and fitting constants. Add one more variable that other users can use to
 understand the model, `notes`.
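For context on the last hunk above: the save-and-reload step it documents is the named-array round trip through `np.savez` and `np.load`. The sketch below is illustrative only, not the tutorial's actual code; the file name and placeholder values are assumptions made for this example.

```python
import numpy as np

# Placeholder data standing in for the tutorial's year, transistor count,
# predictions, and fitting constants (illustrative values only).
year = np.array([2015, 2016, 2017])
transistor_count = np.array([4.0e9, 7.2e9, 1.9e10])
notes = "Transistor counts per microprocessor with a Moore's law fit"

# np.savez writes several named arrays into a single .npz archive.
np.savez("mooreslaw_regression.npz",
         notes=notes,
         year=year,
         transistor_count=transistor_count)

# np.load returns a dictionary-like object keyed by those same names.
results = np.load("mooreslaw_regression.npz")
print(results["year"], results["notes"])
```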

content/tutorial-deep-learning-on-mnist.md

Lines changed: 1 addition & 1 deletion

@@ -384,7 +384,7 @@ layer.)](_static/tutorial-deep-learning-on-mnist.png)
 
 In the beginning of model training, your network randomly initializes the weights and feeds the input data forward through the hidden and output layers. This process is the forward pass or forward propagation.
 
-Then, the network propagates the "signal" from the loss function back through the hidden layer and adjusts the weights values with the help of the learning rate parameter (more on that later).
+Then, the network propagates the "signal" from the loss function back through the hidden layer and adjusts the weights with the help of the learning rate parameter (more on that later).
 
 > **Note:** In more technical terms, you:
 >
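As a side note on the passage being edited: the weight adjustment it describes is a plain gradient-descent step. Here is a minimal sketch, assuming the gradient has already been computed by the backward pass; the shapes, learning rate, and variable names are illustrative, not the tutorial's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly initialized weights for one layer (illustrative shape).
weights = rng.normal(scale=0.1, size=(784, 128))
learning_rate = 0.005  # example value, not necessarily the tutorial's setting

# Assume `gradient` holds dLoss/dweights as produced by backpropagation.
gradient = rng.normal(size=weights.shape)

# "Adjust the weights with the help of the learning rate":
# step against the gradient, scaled by the learning rate.
weights -= learning_rate * gradient
```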

content/tutorial-deep-reinforcement-learning-with-pong-from-pixels.md

Lines changed: 1 addition & 1 deletion

@@ -552,7 +552,7 @@ while episode_number < max_episodes:
 
 A few notes:
 
-- If you have previously run an experiment and want to repeat it, your `Monitor` instance may still be running, which may throw an error the next time you try to traini the agent. Therefore, you should first shut down `Monitor` by calling `env.close()` by uncommenting and running the cell below:
+- If you have previously run an experiment and want to repeat it, your `Monitor` instance may still be running, which may throw an error the next time you try to train the agent. Therefore, you should first shut down `Monitor` by calling `env.close()` by uncommenting and running the cell below:
 
 ```python
 # env.close()
