Both of those things can be done on the GPU.
A recent update to UE4 added Niagara, a new VFX system that's geared more towards GPU particles than the old Cascade tool.
Yeah, I know that they can run on the GPU, but depending on the game and setup, it may not be practical. For instance, most games on PC are GPU-limited, which leaves the GPU little headroom for extra duties like physics (especially destruction) and particle effects.
But the biggest reason, I think, is latency on PCs with discrete graphics cards. If the GPU has to calculate physics, the results have to be synchronized back to the game thread, which runs on the CPU. All that back and forth across the PCIe bus between the CPU and GPU can end up lowering performance and introducing stuttering and lag. This was a huge problem when I played Borderlands 2 with PhysX, even on a dedicated PhysX card.
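To make that round trip concrete, here's a toy C++ sketch (not real engine code; `stepPhysicsOnGpu` and `gameFrame` are names I made up, and `std::async` just stands in for a GPU dispatch). The `get()` call is the sync point: the game thread stalls there until the results come back, the same way it stalls on a real GPU readback over PCIe.

```cpp
#include <chrono>
#include <future>
#include <thread>
#include <vector>

// Stand-in for a GPU physics dispatch: the sleep models kernel time
// plus the PCIe transfer of the results back to system memory.
std::vector<float> stepPhysicsOnGpu(std::vector<float> positions, float dt)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(3));
    for (float& p : positions) p += dt; // toy integration stand-in
    return positions;
}

void gameFrame(std::vector<float>& positions, float dt)
{
    // "Dispatch" the physics work off the game thread.
    auto gpuResult = std::async(std::launch::async,
                                stepPhysicsOnGpu, positions, dt);

    // ... other game-thread work can overlap here ...

    // Gameplay (collision callbacks, animation, AI) needs the physics
    // results, so the game thread blocks here every frame until they
    // arrive. This is the latency/stutter point the post describes.
    positions = gpuResult.get();
}
```

If the gameplay code didn't need the results back on the CPU each frame (pure eye-candy particles, say), the readback could be skipped or deferred a frame, which is why cosmetic GPU effects hurt far less than gameplay-affecting GPU physics.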
I used to be a huge proponent of GPU PhysX back in the day. For several years I even ran a dedicated PhysX card in my systems, along with SLI. But since NVidia and Havok started optimizing their CPU physics algorithms for multicore and SIMD, I've become more of an opponent of GPU game physics. I remember when Ageia's PPU first made dynamic cloth simulation in games possible, because no CPU at the time was powerful enough to do the calculations without a serious hit to performance. And when NVidia bought the tech and ported it over to their GPUs, they improved on it significantly.
But today, cloth simulation runs extremely well on modern CPUs thanks to multicore and SIMD optimizations. In fact, I've read that it's now more efficient and performant to run cloth simulation on the CPU rather than on the GPU. Also, Epic is in the process of switching their game engine over from NVidia's PhysX to their brand new Chaos physics engine, which Intel has helped develop with its Intel® Implicit SPMD Program Compiler (ISPC) technology. PhysX can utilize the GPU, as we all know, but Chaos runs only on the CPU and is highly optimized for today's multicore/multithreaded CPUs with wide vector SIMD. Judging by the demo, it looks to be very impressive:
Intel news announcement
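This isn't Chaos or PhysX code, just a minimal sketch of why cloth maps so well to CPU SIMD: store the particles as structure-of-arrays and the per-particle Verlet step becomes a dependency-free loop the compiler can vectorize, which is the same data-parallel pattern ISPC compiles to wide vector instructions. `ClothParticles` and `integrate` are names I made up for illustration.

```cpp
#include <cstddef>
#include <vector>

// Structure-of-arrays particle storage: the layout SIMD code wants,
// since each lane of a vector register gets one particle.
struct ClothParticles {
    std::vector<float> x, y, z;     // current positions
    std::vector<float> px, py, pz;  // previous positions (Verlet)
};

// One position-Verlet step over all particles:
//   x_new = 2*x - x_prev + a * dt^2
// With SoA data and no cross-iteration dependencies, this loop
// vectorizes cleanly, and the cloth grid can also be split across
// worker threads for multicore scaling.
void integrate(ClothParticles& c, float dt2, float gy)
{
    const std::size_t n = c.x.size();
    for (std::size_t i = 0; i < n; ++i) {
        float nx = 2.0f * c.x[i] - c.px[i];
        float ny = 2.0f * c.y[i] - c.py[i] + gy * dt2; // gravity term
        float nz = 2.0f * c.z[i] - c.pz[i];
        c.px[i] = c.x[i];  c.py[i] = c.y[i];  c.pz[i] = c.z[i];
        c.x[i]  = nx;      c.y[i]  = ny;      c.z[i]  = nz;
    }
}
```

With an 8-wide AVX unit, each iteration of that loop can process 8 particles at once, which goes a long way toward explaining how modern CPUs caught up on workloads that used to need a PPU.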
So with CPUs getting more and more cores and threads, plus wider vectors, it makes perfect sense to me to focus on software physics rather than burdening the GPU even more.