![solve any problem with indirection](https://codinginterviewrecipes.files.wordpress.com/2020/08/problem-solving.jpg)
Tagged: software_engineering, computer_science, python

Obligatory disclaimer: all opinions are mine and not of my employer.

Many users of Python deprioritize performance in favor of soft benefits like ergonomics, business value, and simplicity. Users who prioritize performance typically end up on faster compiled languages like C++ or Java. One group of users is left behind, though: the scientific computing community has lots of raw data to process, and would very much like performance. Yet they struggle to move away from Python, because of network effects, and because Python's beginner-friendliness is appealing to scientists for whom programming is not a first language.

So, how can Python users achieve some fraction of the performance that their C++ and Java friends enjoy? In practice, scientific computing users rely on the NumPy family of libraries, e.g. NumPy, SciPy, TensorFlow, PyTorch, CuPy, JAX, etc. The sheer proliferation of these libraries suggests that the NumPy model is getting something right. In this essay, I'll talk about what makes NumPy so effective, and where the next generation of Python numerical computing libraries (e.g. TensorFlow, PyTorch, JAX) seems to be headed.

## Data good, pointers bad

A pesky fact of computing is that computers can compute far faster than we can deliver data to compute on. In particular, data transfer latency is the Achilles' heel of data devices (both RAM and storage). Manufacturers disguise this weakness by emphasizing improvements in data transfer throughput, but latency continues to stagnate.
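To make this concrete, here is a minimal sketch (timings will vary by machine) contrasting a pure-Python loop, where every integer is a boxed object reached through a pointer, with a single NumPy call over a contiguous buffer of machine integers:

```python
import time
import numpy as np

n = 1_000_000
xs = list(range(n))                  # list of boxed Python ints (pointer-chasing)
arr = np.arange(n, dtype=np.int64)   # one contiguous buffer of raw int64s

# Pure-Python loop: the interpreter dereferences a pointer per element.
t0 = time.perf_counter()
total_py = sum(x * x for x in xs)
t_py = time.perf_counter() - t0

# NumPy: a single vectorized call streams through contiguous memory.
t0 = time.perf_counter()
total_np = int(np.dot(arr, arr))
t_np = time.perf_counter() - t0

assert total_py == total_np
print(f"pure Python: {t_py:.4f}s, NumPy: {t_np:.4f}s")
```

On a typical machine the NumPy version is one to two orders of magnitude faster, despite performing the same arithmetic: the win comes from data layout and from moving the loop out of the interpreter, not from a faster CPU.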