Is Python popular *because* it is slow?
I’ve been using Python with Numeric/numarray/NumPy and SWIG since the spring of 2002 for data analysis. The first application I used it for was the OOF holography software. The transition to Python was from Tcl with hand-written bindings to C/C++ code.
It was already a highly productive tool then, and it has become vastly more productive since. Python has also become very popular, perhaps the most popular programming language. Why?
Python is relatively slow
Or rather, CPython is relatively slow. There is plenty written on the topic already. Perhaps it could be made (much?) faster, but at present pure Python is, for most algorithms, far slower than C, C++, Java, C#, Haskell, Lisp and many other languages which have optimising compilers and multithreading capabilities.
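As a rough illustration (a minimal sketch, not a rigorous benchmark), summing a large array element by element in the interpreter versus letting NumPy's compiled core do it typically differs by one to two orders of magnitude:

```python
import timeit
import numpy as np

N = 10_000_000
data = np.random.rand(N)

def pure_python_sum(xs):
    """Sum element by element, with every addition going through the interpreter."""
    total = 0.0
    for x in xs:
        total += x
    return total

# Pure-Python loop: the work is done by interpreted bytecode.
t_py = timeit.timeit(lambda: pure_python_sum(data), number=1)

# NumPy: the same loop runs inside compiled C code.
t_np = timeit.timeit(lambda: data.sum(), number=1)

print(f"pure Python: {t_py:.3f} s, NumPy: {t_np:.3f} s")
```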
Slowness was (and is?) good for the popularity of Python
This seems counter-intuitive, but I think it can be justified with a bit of historical reflection.
When C++ and Java came along, much effort was spent by many people implementing essential algorithms (such as numerical libraries, sorting, etc.) in them with reasonable efficiency. It was hard, but not obviously impossible, so people tried hard.
In Python the task is hopeless – it is plainly obvious that one cannot write competitive algorithms in pure Python. So all users, including this one, focused instead on interfacing to existing compiled codes. This had two positive effects:
- There is an enforced modularity to every code base: the details of the algorithms are hidden in the compiled code and one thinks only about the interfaces to them.
- It greatly encouraged the re-use of existing software that usually worked absolutely fine but was not so convenient to experiment and interact with.
This resulted in rather more actual work getting done! And this apparent productivity of the Python environment became, I think, the driving force behind its popularity.
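To make the interfacing pattern concrete, here is a minimal sketch using ctypes (rather than the SWIG bindings mentioned above) to call a routine from an existing compiled library; the heavy lifting stays in compiled code and Python only deals with the interface:

```python
import ctypes
import ctypes.util

# Locate and load the C maths library (name and availability vary by platform;
# this is a sketch, not production code).
libm_path = ctypes.util.find_library("m") or ctypes.util.find_library("c")
libm = ctypes.CDLL(libm_path)

# Declare the interface of the existing compiled routine we want to reuse.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

# The call itself runs entirely in compiled code.
print(libm.cos(0.0))
```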
Current picture
Although CPython is relatively slow, it is possible to write large, useful applications in it that are much faster than even a carefully written equivalent in a compiled language. For example, I’ve been using PyTorch to write numerical processing (not machine learning!) algorithms with performance that would take me a very long time to match in C/CUDA.
How has that come about? Because of its intrinsic slowness, developers using Python have focused on providing the right high-level abstractions for solving common classes of problems (rather than trying to solve general problems efficiently). In the case of PyTorch, it captures the required graph of operations on large matrices from the Python layer and then executes it efficiently using carefully constructed compiled core code. The possibilities for the programmer are restricted, but that is sometimes not a negative!
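For example (a minimal sketch, not the algorithms mentioned above): a toy smoothing loop over a large 2-D field, where the Python code only describes whole-tensor operations and PyTorch dispatches each one to its compiled kernels (CUDA kernels when a GPU is present):

```python
import torch

# Use the GPU if one is available; otherwise the same code runs on the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# A toy numerical task: repeatedly smooth a large 2-D field by local averaging.
# The Python layer never touches individual elements; each whole-tensor
# operation is executed by PyTorch's compiled core.
field = torch.rand(4096, 4096, device=device)
for _ in range(10):
    field = 0.25 * (
        torch.roll(field, 1, dims=0) + torch.roll(field, -1, dims=0)
        + torch.roll(field, 1, dims=1) + torch.roll(field, -1, dims=1)
    )

print(field.mean().item())
```

The same script runs unchanged on CPU or GPU, and the restriction to whole-tensor operations is exactly the constraint that makes that possible.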