High-end monitors for professional use (e.g. graphic design, publishing, medical imaging) do use 30-bit color (i.e. 10 bits each for R, G and B).
The difference in performance between these and regular 24-bit monitors is very difficult to spot, though 24-bit output can show slight banding. The issue becomes more apparent with "wide gamut" monitors.
A "wide gamut" monitor can display a much richer selection of colors, but at 8 bits per channel it still only has 16.7 million colors to choose from; as a result, individual shades are further apart, and banding tends to be a lot more visible. This can be very objectionable if you are doing software (or GPU) color correction to emulate a narrower gamut, such as sRGB.
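A rough sketch of why this happens: emulating sRGB means scaling input codes down into a subrange of the panel's native code space, and at 8 bits that re-quantization collapses some adjacent shades together. The 1.4x gamut-coverage factor below is a made-up illustrative number, not a measurement of any real panel.

```python
# Illustration of banding when emulating a narrow gamut on an 8-bit
# wide-gamut panel. COVERAGE = 1.4 is a hypothetical factor by which the
# panel's gamut exceeds sRGB along one axis (assumption for illustration).
COVERAGE = 1.4

# To emulate sRGB, each 8-bit input code is scaled down into the panel's
# native code space and re-quantized to 8 bits.
inputs = range(256)
outputs = [round(v / COVERAGE) for v in inputs]

distinct = len(set(outputs))
print(f"{distinct} distinct panel codes cover the emulated sRGB range")
# Fewer distinct output codes than input codes means some adjacent input
# shades map to the same panel shade, so the visible steps get wider.
```

With 10-bit output the same scaling still leaves far more distinct codes than the 8-bit input needs, which is exactly why wide-gamut and deep color tend to go together.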
The real limiting factor is lack of software support: most OSs don't really offer any useful "30 bit" support. Most 30-bit cards basically use driver tricks. The card runs in a 30-bit mode but emulates a 24-bit mode for the OS, so when the OS sends data to the card, the hardware translates it to 30-bit mode, and vice versa. However, when applications use hardware-accelerated functions, the card actually renders in 30 bits direct to VRAM.
This type of trickery tends to be restricted to pro-level cards, and it requires specific software support in the application.
Finally, it is important to distinguish between the panel color resolution, and the display input resolution.
The lowest-end panels typically have 6-bit digital-to-analog converters. As this gives a very obviously unsatisfactory picture, many such displays use temporal dithering (also called 3D dithering or FRC) to simulate near-8-bit performance.
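The idea behind FRC can be sketched in a few lines: to show an 8-bit level a 6-bit panel cannot produce, it alternates between the two nearest 6-bit levels across successive frames so the time-average matches the target. The 4-frame pattern below is a simplified illustration of gaining 2 bits, not any vendor's actual algorithm.

```python
# Minimal sketch of temporal dithering (FRC): a 6-bit panel (levels 0-63)
# simulates an 8-bit level (0-255) by alternating between adjacent 6-bit
# levels over a 4-frame cycle. Illustrative only, not a real panel algorithm.

def frc_frames(level8):
    """Return 4 consecutive 6-bit frame values averaging level8 / 4."""
    base, frac = divmod(level8, 4)  # nearest 6-bit level below, plus remainder
    if base >= 63:                  # top of range: nothing brighter to alternate with
        return [63] * 4
    # 'frac' of the 4 frames step one level up, the rest stay at 'base'
    return [base + 1] * frac + [base] * (4 - frac)

frames = frc_frames(141)            # 8-bit 141 lies between 6-bit 35 and 36
print(frames, "average:", sum(frames) / 4)  # → [36, 35, 35, 35] average: 35.25
```

Real panels scramble the pattern spatially and temporally to hide flicker, but the averaging principle is the same.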
Some panels are 8-bit, and the display may or may not use FRC to provide near-10-bit performance. Higher-end panels may be 10- or 12-bit natively, and FRC may be used to emulate even higher color resolutions. Certainly, in the very high-end market, I've come across displays with 12-bit panels using FRC to achieve a "14-bit" response.
But what is the point of these super-high native color resolutions for a panel, if the monitor input and computer only support 8, or maybe 10, bits? The answer is calibration. You cannot calibrate more precisely than one "step" in the panel response; the finer the panel response, the more precisely you can match each input code to the desired pixel brightness.