Yes, if you use assembly language or a system programming language a step up from that (C or Rust for example), you say “I want this much memory” then you say “I want to put this value in this bit of memory” and so on.
JavaScript runtimes are generally written in a system programming language (C++ normally), but JS itself is a garbage collected language. You don’t manage the memory yourself*. When objects come into scope, the language runtime decides how to handle them, where to put them in memory. When objects go out of scope, the language runtime cleans up that memory.
When you declare `let a = 5`, the runtime will try to store that as efficiently as possible, but how and where it’s stored in volatile memory (RAM) is going to be context-dependent (where in the code is it defined?) and runtime-dependent (how does this particular runtime deal with storing values in this particular context?).
Ideally, your `let a = 5` just stores that value at a certain place in memory directly**. But there’s a load of extra stuff on top of this in JS.
It’s difficult to actually get the assembly output of (eg) the V8 JS engine (the one that powers Chrome/Chromium/Node etc), but as an illustration, if I were to print the assembler output of something like `[1,2,3,4,5].filter(v => v < 4).map(v => v * 2);`, that’s gonna be at least hundreds of KB, possibly a few MB. So “low level” optimising in JS is generally a bit silly: the JS engine is doing far too much under the hood for that to be at all sensible.
But laying stuff out logically, that does have clear effects. JS engines tend to be very fast, but this doesn’t mean they don’t have to do loads of work. Reducing the amount of work they do, that’s often not too difficult. You just need to understand what the methods you’re using are doing at a basic level (not low-level, just “what do `map`/`filter` do”).
So with `map` & `filter`, both of those go through the entire array, call a closure (your callback) on every element, create a new array with the results, and do a load of other things besides. Even creating an array requires quite a lot of work.
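To make that concrete, here’s a rough sketch of the work those two methods are doing. This is illustrative only, not the real spec-compliant implementation (it ignores sparse arrays, `thisArg`, species constructors and so on), and the names `myMap`/`myFilter` are made up for the example:

```javascript
// Rough sketch of map: new array, same size as the input, one
// callback call per element.
function myMap(arr, fn) {
  const out = new Array(arr.length);
  for (let i = 0; i < arr.length; i++) {
    out[i] = fn(arr[i], i, arr);
  }
  return out;
}

// Rough sketch of filter: new array whose size isn't known up front,
// one callback call per element, but only survivors get pushed.
function myFilter(arr, fn) {
  const out = [];
  for (let i = 0; i < arr.length; i++) {
    if (fn(arr[i], i, arr)) out.push(arr[i]);
  }
  return out;
}
```

Note both versions walk the whole input array and allocate a fresh array every time, which is why the order you chain them in matters.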
So if you put `map` first, you get an array created that is the same size as the original one. Then that array gets fed into `filter`.
But if you put `filter` first, the array it creates is likely smaller than the original one. The result of `filter` then gets fed into `map`, which in turn has to do less work. If you’ve got very big arrays, or your filter/map callbacks are doing something that takes quite a bit of work, this will visibly make a difference.
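You can see the difference by counting callback invocations. A quick illustration (the `counted` wrapper is a hypothetical helper for the example, and this counts calls rather than being a proper benchmark):

```javascript
// 1000-element array: [0, 1, ..., 999]
const arr = Array.from({ length: 1000 }, (_, i) => i);

let calls = 0;
// Wrap a callback so every invocation is counted.
const counted = fn => v => { calls += 1; return fn(v); };

// map first: both callbacks run over all 1000 elements.
calls = 0;
const mapFirst = arr.map(counted(v => v * 2)).filter(counted(v => v < 100));
const mapFirstCalls = calls; // 1000 map calls + 1000 filter calls = 2000

// filter first: map only sees the 50 survivors.
calls = 0;
const filterFirst = arr.filter(counted(v => v < 50)).map(counted(v => v * 2));
const filterFirstCalls = calls; // 1000 filter calls + 50 map calls = 1050
```

Both orderings produce the same result here, but filter-first makes roughly half as many callback calls, and the intermediate array it allocates is 50 elements instead of 1000.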
Yes. JS engines and the computers they run on are often plenty fast enough for it to make very little difference to the human eye. The programmer Joe Armstrong (creator of Erlang) had a quote about speeding up language runtimes (I forget the source so I’m paraphrasing): “if you want your language to run faster, just wait a few years”. Computers speed up. Coupled with that, the web runs on JS, so Google et al have poured money into optimising JS engines to the nth degree. So don’t worry too much about the low-level mechanics (worry a bit though! It’s a good thing to worry about and investigate, it’s really instructive!).
* with a caveat: there’s also WebAssembly (WASM), a low-level format that runs in the browser alongside JS, where you do manage the memory yourself. It isn’t actually JS (its precursor, asm.js, was a strict subset of JS); WASM is normally written in, say, C or C++ or Rust and compiled to WASM code.
** & re. storing values, there’s effectively a lookup table: instead of scanning through memory to find the value, the computer just checks what is stored at the slot associated with that variable’s memory address.