Reorganize parallel programming lectures and improve content flow (#429)
* Reorganize parallel programming lectures and improve content flow
Major restructuring of parallelization-related content across lectures to
improve pedagogical flow and consolidate related material.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
* Complete JAX intro lecture: add pure functions content and fix errors
- Add comprehensive "Random numbers and pure functions" section
- Demonstrate NumPy's impure random number generation vs JAX's pure approach (sketched below)
- Fix spelling errors: discusson→discussion, explict→explicit, parallelizaton→parallelization, hardward→hardware, sleve→sleeve, targetting→targeting
- Fix grammar: "uses use"→"uses", "short that"→"shorter than", "function will"→"functions will", "Prevents"→"Prevent"
- Fix missing jax. prefix in random number examples
- Improve clarity and consistency throughout
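A minimal sketch of the contrast that section describes, assuming standard NumPy and JAX APIs (illustrative code, not the lecture's exact cells):

```python
import numpy as np
import jax

# NumPy is impure: each call reads and mutates hidden global RNG state,
# so repeated calls give different values.
print(np.random.uniform())
print(np.random.uniform())

# JAX is pure: randomness is driven by an explicit key, so the same key
# always produces the same value...
key = jax.random.PRNGKey(0)
print(jax.random.uniform(key))
print(jax.random.uniform(key))        # identical to the previous line

# ...and fresh randomness requires explicitly splitting the key.
key, subkey = jax.random.split(key)
print(jax.random.uniform(subkey))
```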
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
* Reorganize JAX intro lecture for improved pedagogical flow
Major restructuring:
- Move Functional Programming section earlier (after NumPy Replacement)
- Integrate pure functions discussion into Random Numbers section
- Move "Compiling non-pure functions" into JIT section
- Add smooth transitions between sections
This creates a logical progression: basics → philosophy → features
Readers now understand WHY before seeing HOW, making JAX's design
choices (like explicit random state) more intuitive.
Also fix syntax errors in timer code blocks (missing colons).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
* Improve lecture flow and reorganize content
This commit includes pedagogical improvements across three lectures:
**numba.md:**
- Improve sentence flow with better transitions
- Change Wikipedia multithreading link to internal reference
- Add "(multithreading)=" label to Multithreaded Loops section
- Remove "Numba will be a key part of our lectures..." sentence
- Add transition phrase "Beyond speed gains from compilation"
- Clarify NumPy arrays "which have well-defined types"
- Change "For example" to "Notably" for better flow
- Add "Conversely" transition for prange vs range comparison
**numpy.md:**
- Add "### Basics" subheading for better organization
- Emphasize "flat" array concept with bold formatting
- Improve shape attribute explanation with inline comments
- Remove np.asarray vs np.array comparison examples
- Remove np.genfromtxt reference, keep only np.loadtxt
- Remove redundant note about zero-based indices
- Improve searchsorted() description formatting
- Remove redundant NumPy function examples (np.sum, np.mean)
- Simplify matrix multiplication section (remove old Python version notes)
- Simplify @ operator examples, remove redundant demonstrations
- Remove manual for-loop equivalent of broadcasting
- Remove higher-dimensional broadcasting code examples
- Remove higher-dimensional ValueError example
- Add "### Mutability" subheading and improve organization
- Change "Vectorized Functions" to "Universal Functions" heading
- Emphasize terminology with bold: **vectorized functions**, **ufuncs**, **universal functions**
- Add note about JAX's np.vectorize (see the sketch after this list)
- Remove "Speed Comparisons" section (moved to numpy_vs_numba_vs_jax.md)
- Remove "Implicit Multithreading in NumPy" section (moved to numpy_vs_numba_vs_jax.md)
**numpy_vs_numba_vs_jax.md:**
- Change title from "Parallelization" to "NumPy vs Numba vs JAX"
- Add jax to pip install command
- Add missing imports: random, mpl_toolkits.mplot3d, matplotlib.cm
- Add "### Speed Comparisons" section (moved from numpy.md)
- Add "### Vectorization vs Loops" section (moved from numpy.md)
- Add "### Universal Functions" section (moved from numpy.md)
- Add "### Implicit Multithreading in NumPy" section (moved from numpy.md)
- Change "some examples" to "an example" in multithreading description
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
* Fix grammar and reorganize need_for_speed.md content
- Fix incomplete sentence: add missing word "compiler" (line 229)
- Fix header level inconsistency: change Multi-GPU Servers to #####
- Reorganize Overview section with clearer structure
- Simplify Python's Scientific Ecosystem section
- Restructure "Pure Python is slow" section for better flow
- Add concrete vectorization speed comparison example (sketched below)
- Improve parallelization section organization
- Clarify GPU/TPU accelerator discussion
- Remove redundant content and improve transitions throughout
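The vectorization comparison added there is along these lines (a sketch, not the lecture's exact cell; note the explicit `import random`, whose omission is what the next commit fixes):

```python
import time
import random
import numpy as np

n = 1_000_000

# Pure Python loop: interpreted, one element at a time
t0 = time.perf_counter()
total = 0.0
for _ in range(n):
    u = random.uniform(0, 1)
    total += u ** 2
print(f"pure Python loop: {time.perf_counter() - t0:.3f}s")

# Vectorized NumPy: the same work pushed into compiled C code
t0 = time.perf_counter()
u = np.random.uniform(0, 1, n)
total = np.sum(u ** 2)
print(f"vectorized NumPy: {time.perf_counter() - t0:.3f}s")
```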
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
* Fix missing random module import in need_for_speed.md
Add missing `import random` statement to fix NameError when running
the vectorization example code that uses random.uniform().
Tested by converting to Python with jupytext and running successfully.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
* Fix header hierarchy inconsistencies in need_for_speed.md
Changed level 5 headers (#####) to level 4 headers (####) to fix
invalid header hierarchy that was causing build failures.
Fixed headers:
- "GPUs and TPUs"
- "Why TPUs/GPUs Matter"
- "Single GPU Systems"
- "Multi-GPU Servers"
These were incorrectly using ##### (level 5) directly under ### (level 3)
headers, skipping level 4. Now properly using #### headers.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
* Complete numpy_vs_numba_vs_jax lecture with timing comparisons
Added comprehensive comparisons between NumPy, Numba, and JAX for both
vectorized and sequential operations:
- Added Numba simple-loop and parallel versions of the vectorized example
- Demonstrated nested prange parallelization and its limitations
- Added detailed discussion of parallelization overhead and contention issues
- Implemented sequential operation (quadratic map) in both Numba and JAX
- Used JAX lax.scan with @partial(jax.jit, static_argnums) for cleaner code (see the sketch after this list)
- Added timing code with separate runs to show compile vs cached performance
- Added discussion of the results without quoting specific timings (machine-independent)
- Added explanation of reduction problem challenges with shared variable updates
- Fixed spelling error: "implict" → "implicit"
- Added missing punctuation
All code examples tested and verified to run successfully.
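A compact sketch of that `lax.scan` pattern for the quadratic map (illustrative code following the same idea, not necessarily the lecture's version): `static_argnums` marks the iteration count as a compile-time constant, and the second call reuses the cached kernel.

```python
from functools import partial
import jax
import jax.numpy as jnp
from jax import lax

@partial(jax.jit, static_argnums=(1,))   # n fixes the scan length, so it must be static
def quadratic_map(x0, n):
    def step(x, _):
        x_next = 4.0 * x * (1.0 - x)     # x_{t+1} = 4 x_t (1 - x_t)
        return x_next, x_next            # (carry, per-step output)
    _, trajectory = lax.scan(step, x0, None, length=n)
    return trajectory

traj = quadratic_map(jnp.float32(0.2), 10_000)   # first call: compile + run
traj = quadratic_map(jnp.float32(0.2), 10_000)   # second call: cached kernel
print(traj[-1])
```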
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
* misc
* Improve formatting and clarity across parallel computing lectures
- Standardize header capitalization in need_for_speed.md
- Update code cell types to ipython3 in numba.md for consistency
- Remove redundant parallelization warning section in numba.md
- Enhance explanatory text and code clarity in numpy_vs_numba_vs_jax.md
- Fix formatting and add missing validation checks
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <[email protected]>
* misc
---------
Co-authored-by: Claude <[email protected]>