GPU Systems 11 - Shared Memory Bank Conflicts
Why shared memory is not automatically fast and how bank conflicts appear
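As a minimal sketch of how a conflict shows up and how padding removes it (assuming 32 banks of 4-byte words and a 32x32 thread block; the kernel name and tile size are illustrative): the classic tiled transpose reads a column of the shared tile, so without padding every lane of a warp hits the same bank.

```cuda
// Illustrative only: padded shared-memory tile for a transpose.
// Launch with dim3 block(32, 32) and one block per 32x32 tile.
#define TILE 32

__global__ void transpose_padded(const float* __restrict__ in,
                                 float* __restrict__ out, int n) {
    // The +1 column shifts each row by one bank, so the column-wise
    // reads below no longer land in a single bank (use [TILE][TILE]
    // instead to observe 32-way conflicts in a profiler).
    __shared__ float tile[TILE][TILE + 1];

    int x = blockIdx.x * TILE + threadIdx.x;
    int y = blockIdx.y * TILE + threadIdx.y;
    if (x < n && y < n)
        tile[threadIdx.y][threadIdx.x] = in[y * n + x];   // coalesced load
    __syncthreads();

    // Transposed coordinates: a warp now reads down a column of `tile`.
    int tx = blockIdx.y * TILE + threadIdx.x;
    int ty = blockIdx.x * TILE + threadIdx.y;
    if (tx < n && ty < n)
        out[ty * n + tx] = tile[threadIdx.x][threadIdx.y]; // coalesced store
}
```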
Why warp-level primitives matter for reductions and lighter-weight cooperation
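A sketch of the lighter-weight path: __shfl_down_sync lets the 32 lanes of a warp exchange register values directly, so an intra-warp sum needs no shared memory and no __syncthreads (the helper name is illustrative).

```cuda
// Sum a value across the 32 lanes of a warp using register shuffles.
__device__ float warp_reduce_sum(float val) {
    // Each step folds the upper half of the lanes onto the lower half;
    // after five steps lane 0 holds the sum of all 32 lanes.
    for (int offset = 16; offset > 0; offset >>= 1)
        val += __shfl_down_sync(0xffffffff, val, offset);
    return val;
}
```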
Using reduction kernels to connect shared memory, warp primitives, and synchronization
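A sketch of a block-level sum that ties the pieces together: warp shuffles do the intra-warp work, shared memory gathers one partial per warp, and __syncthreads orders the two phases. Kernel and helper names are illustrative, and the output is assumed to be zero-initialized.

```cuda
__device__ float warp_reduce_sum(float val) {            // same helper as above
    for (int offset = 16; offset > 0; offset >>= 1)
        val += __shfl_down_sync(0xffffffff, val, offset);
    return val;
}

__global__ void block_sum(const float* __restrict__ in,
                          float* __restrict__ out, int n) {
    float val = 0.0f;
    // Grid-stride loop so any grid size covers the input.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += blockDim.x * gridDim.x)
        val += in[i];

    // Phase 1: reduce within each warp using shuffles only.
    val = warp_reduce_sum(val);

    // Phase 2: lane 0 of each warp publishes its partial to shared memory.
    __shared__ float warp_sums[32];                       // enough for blockDim.x <= 1024
    int lane = threadIdx.x % 32;
    int warp = threadIdx.x / 32;
    if (lane == 0) warp_sums[warp] = val;
    __syncthreads();

    // The first warp reduces the per-warp partials; one atomic per block finishes.
    if (warp == 0) {
        int num_warps = (blockDim.x + 31) / 32;
        val = (lane < num_warps) ? warp_sums[lane] : 0.0f;
        val = warp_reduce_sum(val);
        if (lane == 0) atomicAdd(out, val);               // *out must start at 0
    }
}
```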
How softmax combines reductions, memory traffic, and numerical stability in one kernel
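A sketch of a row-wise softmax that shows all three ingredients, assuming one warp per row (launched as <<<rows, 32>>>; names and the one-warp-per-row choice are illustrative): a max reduction for stability, a sum of shifted exponentials, and a normalization pass, with each pass re-reading the row from memory.

```cuda
#include <cfloat>

// One warp (32 threads) per row; launch as softmax_rows<<<rows, 32>>>(...).
__global__ void softmax_rows(const float* __restrict__ in,
                             float* __restrict__ out, int cols) {
    const float* row_in  = in  + (size_t)blockIdx.x * cols;
    float*       row_out = out + (size_t)blockIdx.x * cols;

    // 1) Row maximum, so exp() sees non-positive arguments and cannot overflow.
    float m = -FLT_MAX;
    for (int i = threadIdx.x; i < cols; i += 32)
        m = fmaxf(m, row_in[i]);
    for (int offset = 16; offset > 0; offset >>= 1)
        m = fmaxf(m, __shfl_xor_sync(0xffffffff, m, offset));  // butterfly: all lanes get max

    // 2) Sum of shifted exponentials.
    float sum = 0.0f;
    for (int i = threadIdx.x; i < cols; i += 32)
        sum += __expf(row_in[i] - m);
    for (int offset = 16; offset > 0; offset >>= 1)
        sum += __shfl_xor_sync(0xffffffff, sum, offset);

    // 3) Normalize. The row is read three times and written once, so
    //    memory traffic, not the exponentials, usually dominates.
    float inv = 1.0f / sum;
    for (int i = threadIdx.x; i < cols; i += 32)
        row_out[i] = __expf(row_in[i] - m) * inv;
}
```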
Why normalization kernels are often memory-bound and structurally important
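As a rough, illustrative estimate of why (all numbers assumed for the example): a LayerNorm over a 4096 x 4096 fp32 activation touches about 4096 * 4096 * 4 bytes ≈ 67 MB per pass; with one read for the statistics, one read to normalize, and one write, that is roughly 200 MB of traffic against only a few dozen flops per element. At an assumed 2 TB/s of DRAM bandwidth the traffic alone takes on the order of 100 µs, far longer than the arithmetic would take on the same GPU, so the kernel's runtime is set by how little memory it moves rather than by its math.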
How wider memory operations and alignment affect bandwidth utilization
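A sketch of widening loads to float4, so each thread issues one 16-byte transaction instead of four 4-byte ones; it assumes the buffers are 16-byte aligned (cudaMalloc allocations are) and that the element count is a multiple of 4, with a scalar tail loop handling the remainder in general.

```cuda
// Copy using 16-byte vector loads/stores; requires 16-byte-aligned pointers.
__global__ void copy_vec4(const float4* __restrict__ in,
                          float4* __restrict__ out, int n4) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n4)
        out[i] = in[i];   // one 128-bit load/store per thread instead of four 32-bit ones
}

// Host side (illustrative): reinterpret the float buffers and pass n4 = n / 4.
// copy_vec4<<<(n / 4 + 255) / 256, 256>>>(
//     reinterpret_cast<const float4*>(d_in),
//     reinterpret_cast<float4*>(d_out), n / 4);
```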
Why using more registers per thread can speed up individual threads yet reduce total throughput by lowering occupancy
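A sketch of how that trade is commonly capped: __launch_bounds__ tells the compiler the maximum block size and a desired number of resident blocks per SM, so it limits registers per thread (possibly spilling to local memory) to keep occupancy up. The values and kernel body here are placeholders, and whether the cap actually helps has to be measured.

```cuda
// Ask the compiler to keep register use low enough that at least two
// 256-thread blocks can be resident per SM; it may spill to local memory.
__global__ void __launch_bounds__(256, 2)
heavy_kernel(const float* __restrict__ in, float* __restrict__ out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float acc = 0.0f;
    #pragma unroll                      // placeholder work standing in for a
    for (int k = 0; k < 16; ++k)        // register-hungry inner loop
        acc += in[i] * (k + 1);
    out[i] = acc;
}
// The same cap can also be applied per compilation unit with nvcc's -maxrregcount=N flag.
```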
How asynchronous copy and double buffering help overlap memory movement with computation
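A sketch of double buffering on the cp.async path (assuming CUDA 11+, an Ampere-or-newer GPU, and blockDim.x == TILE; names and the trivial "compute" are illustrative): while the current shared-memory tile is being consumed, the copy of the next tile is already in flight, so copy latency overlaps with computation.

```cuda
#include <cuda_pipeline.h>

constexpr int TILE = 256;   // elements per tile; launch with blockDim.x == TILE

__global__ void stream_tiles(const float* __restrict__ in,
                             float* __restrict__ out, int tiles_per_block) {
    __shared__ float buf[2][TILE];
    const float* block_in  = in  + (size_t)blockIdx.x * tiles_per_block * TILE;
    float*       block_out = out + (size_t)blockIdx.x * tiles_per_block * TILE;

    // Prefetch tile 0 into buffer 0 before the loop starts.
    __pipeline_memcpy_async(&buf[0][threadIdx.x], &block_in[threadIdx.x], sizeof(float));
    __pipeline_commit();

    for (int t = 0; t < tiles_per_block; ++t) {
        int cur = t & 1, nxt = 1 - cur;

        // Kick off the copy of the *next* tile while we work on the current one.
        if (t + 1 < tiles_per_block)
            __pipeline_memcpy_async(&buf[nxt][threadIdx.x],
                                    &block_in[(t + 1) * TILE + threadIdx.x],
                                    sizeof(float));
        __pipeline_commit();

        // Wait only for the copy that filled the current buffer; the newer one may still run.
        __pipeline_wait_prior(1);
        __syncthreads();

        // Placeholder compute on the current tile.
        block_out[t * TILE + threadIdx.x] = buf[cur][threadIdx.x] * 2.0f;
        __syncthreads();   // everyone is done with buf[cur] before it is refilled
    }
}
```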
How PyTorch's caching allocator shapes GPU memory behavior, so that observed memory usage reflects cached and reserved blocks rather than just the tensors currently alive
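A minimal sketch of the caching-allocator idea, not PyTorch's actual implementation: freed device blocks go onto a free list keyed by size and are handed back on the next request of the same size, so cudaFree is rarely called and the memory the driver reports as in use can exceed what live tensors need.

```cuda
#include <cuda_runtime.h>
#include <cstddef>
#include <map>

// Toy caching allocator. PyTorch's allocator is far more elaborate (size
// buckets, block splitting, per-stream pools), but the observable effect is
// similar: "reserved" device memory stays high even after tensors are freed.
class CachingAllocator {
    std::multimap<size_t, void*> free_blocks_;   // size -> cached device pointer
public:
    void* alloc(size_t bytes) {
        auto it = free_blocks_.find(bytes);
        if (it != free_blocks_.end()) {          // cache hit: no driver call
            void* p = it->second;
            free_blocks_.erase(it);
            return p;
        }
        void* p = nullptr;
        cudaMalloc(&p, bytes);                   // cache miss: grow the pool
        return p;
    }
    void free(void* p, size_t bytes) {           // keep the block for later reuse
        free_blocks_.emplace(bytes, p);
    }
    void empty_cache() {                         // analogous to torch.cuda.empty_cache()
        for (auto& kv : free_blocks_) cudaFree(kv.second);
        free_blocks_.clear();
    }
};
```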