Some people are convinced that from-imports in Python are inherently faster than module imports. However, this isn’t necessarily true.
The usual example is something like:
import math

def f(x):
    return math.cos(x)
versus:
from math import cos

def f(x):
    return cos(x)
The difference between these code snippets is that in the first one, the lookup of “cos” in the namespace of “math” happens when the function “f” is executed, whereas in the second snippet the lookup happens up-front. So the second snippet can be faster if f(x) is invoked many times, but it won’t be faster if f(x) is invoked only once, and it will actually be slower if f(x) is never called (since “cos” is then looked up for nothing at import time). The confusion – that from-import is always faster – arises when people time only the invocation of f(x) and ignore the time taken by the import statement.
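To see both costs, one can time the import statement and the call separately. The following is a minimal sketch using the standard timeit module (the variable names are just illustrative, and the numbers will vary by machine and Python version; note that repeated “import math” statements mostly measure re-binding the name, since the module stays cached in sys.modules):

import timeit

# Rough cost of executing the import statements themselves (after the first
# run the math module is cached in sys.modules, so this mostly measures the
# name binding and, for the from-import, the extra attribute lookup).
t_import_module = timeit.timeit("import math", number=1_000_000)
t_import_name = timeit.timeit("from math import cos", number=1_000_000)

# Rough cost of a single call in each variant.
t_call_module = timeit.timeit("math.cos(1.0)", setup="import math", number=1_000_000)
t_call_name = timeit.timeit("cos(1.0)", setup="from math import cos", number=1_000_000)

print(t_import_module, t_import_name)
print(t_call_module, t_call_name)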
Another factor to take into account is that the extra lookup cost is only significant if the namespace of “math” is considerably larger than the local one. Let’s assume the “math” namespace contains 100 items; then f(x) in the first snippet involves looking through a dict of 100 entries, whereas in the second snippet “cos” is added to the local namespace, which then has only 2 elements (“f” and “cos”).
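For what it’s worth, the sizes involved are easy to inspect; a small sketch (actual counts depend on the Python version, and a real module’s globals() also contains a handful of dunder names):

import math

def f(x):
    return math.cos(x)

# vars(math) is the module's namespace dict; globals() is our own.
print(len(vars(math)))   # a few dozen names in current Python versions
print(len(globals()))    # "math", "f", plus the module's dunder entries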
Suppose, however, that we have 50 functions that need 50 different items from “math”; now our local namespace dict has 100 items, impacting the lookup of both “f” and “cos” on each invocation of f(x) – in addition to the 50 up-front lookups needed at import! Whereas if we import “math” as a module, our local namespace still has just 51 items. It should be clear that cluttering our import section (and hence our namespace) with a huge number of from-imports doesn’t necessarily help with performance.
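To illustrate, here is a short sketch of the cluttered variant, shortened to 25 from-imports so it stays readable (all of these names exist in the math module); a plain “import math” would add exactly one entry instead:

from math import (acos, asin, atan, atan2, ceil, cos, cosh, degrees, exp,
                  fabs, floor, fmod, frexp, hypot, ldexp, log, log10, modf,
                  radians, sin, sinh, sqrt, tan, tanh, trunc)

# 25 extra entries now live in our own namespace (dunders filtered out)...
print(len([name for name in globals() if not name.startswith("__")]))
# ...whereas "import math" would have added just one: "math" itself.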