Ok, I'll bite.
First off, let's talk about memory usage and modern languages, particularly the pros and cons of a GC.
A GCed language's main goal, in most instances, is to reduce the time spent allocating and managing memory. Getting memory from the OS is expensive because the OS is effectively running its own GC algorithm to figure out which blocks of memory it can give up. As such, GCed languages tend towards taking up more memory rather than less. Not generally "gigabytes" like you claim, but hundreds of megabytes isn't unusual if the heap hasn't been bounded.
They do this because they want to spend less time garbage collecting and more time actually running the application. Garbage collection, counterintuitively, becomes more efficient when dealing with larger amounts of garbage.
Certainly Java isn't the language for watches and microwaves; they don't have the hardware or memory that would make a GCed language a good choice. But when you're talking about running on servers with hundreds of gigs of RAM, why not? Even on consumer desktops and laptops it is becoming more and more common to see large amounts (8-16GB) of memory available for relatively cheap.
The same rules apply to JavaScript and its GC. While it has different targets (low pause time rather than high throughput), the game is the same: larger heaps = less time GCing and more time doing useful work for the user. It also doesn't help that browsers are getting more and more content per page pushed onto them. That content needs to live somewhere.
Now, if we want to talk performance: GCed languages have an interesting property. Depending on the GC algorithm (but true for most), they can allocate heap memory WAY faster than non-GCed languages can, even ones using allocators like jemalloc. They can do this because memory allocation in most GCs is literally just adding a number to a pointer and checking whether it overflows the region they are allocating from. Very little time. So while they consume more memory, they make applications which churn through loads of memory much quicker. In a non-GCed language you end up having to do your own form of GC (arenas, object pools) if you want to get some of the same benefits.
Next up, let's talk about what assembly actually teaches you.
Well, it teaches you how the computer architecture works, right? Nope! Not at all. In fact, on modern CPUs the instruction set is an abstraction over what actually happens in the processor. All of the assembly instructions are themselves transformed into micro-ops which you can't control or affect.
Does it teach you how the compiler works? No! Not at all. No compiler is written in assembly, and most of the work that compilers do happens WAY before they get to the point of emitting assembly. In fact, assembly is so awkward a target that compilers avoid even thinking about the output assembly until the very final stage of compilation. Both GCC and LLVM (and I believe MSVC) have an intermediate representation which looks nothing like any real computer's instruction set (well, the Mill architecture comes close). The most you can say is that learning assembly teaches you to read the end result of a compiler. But even then, the optimizations that compilers do often make the assembly much harder to read (even though it is often much faster than what you could hand write).
It makes you capable of understanding all of computer science? Again, no. Assembly makes you capable of understanding assembly. Many concepts of computer science transcend what you can glean from assembly. Types? Not in assembly. Inheritance? Nope. Generics? Nada. Data structures? A big zilch. Higher-kinded types? Nope. You can build some of these things in assembly, but learning assembly will not make these concepts easier. In fact, it will mostly hinder the understanding of them by getting in the way. Mastery of assembly makes you capable of mastering assembly.
Assembly makes it possible to pick up higher level languages? Only if that language is C. How would assembly help you understand Haskell or Lisp, for example? The same argument has been made about C, and I think it is simply false. Knowing a lower level language doesn't make it easier to pick up a higher level one. If anything, I would say the opposite is more true. Higher level languages often include all of the concepts of lower level languages, and most of the advanced techniques in lower level languages are approximations of what is available in the higher level ones. (The exception: higher level languages often have automatic memory management and lack the concept of pointers. Those are two things you need to learn when you go down to lower level languages.)
At the end of the day, the main use case for assembly is getting access to instructions on your architecture that aren't exposed by the language you're working in. Beyond that, there just isn't much benefit to knowing assembly over C. Particularly because as you go up into higher level languages it becomes nearly impossible to know what is actually running; JITs can do impressive things, which makes reasoning about the emitted machine code a losing game.
I will say that understanding computer architecture, OS internals, and JIT design is important, because people often make bad assumptions about what is fast and what is slow when they base it solely on what they learn in CS classes. But learning assembly won't help you there: knowing the instructions doesn't tell you their costs, and it is often surprising which instructions are actually expensive and which ones are cheap.