As mentioned above, a single precision floating point value is defined by the IEEE 754 standard as a 32-bit value that can be manipulated with a specific set of operations, such as addition, multiplication, and so on.
In GPU specifications, the rate of single precision floating point operations is commonly quoted using the acronym FLOPS (floating point operations per second), or more often GFLOPS (gigaFLOPS, where 1 GFLOPS equals one billion FLOPS). It is a measure of throughput: how many operations the hardware can perform per second.
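To make the arithmetic concrete, here is a minimal sketch of how a theoretical peak figure is typically derived. The specific numbers describe a hypothetical GPU (2048 cores at 1.5 GHz, each issuing one fused multiply-add per cycle), not any real device; an FMA is conventionally counted as two floating point operations:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical GPU for illustration: 2048 cores at 1.5 GHz,
     * each issuing one fused multiply-add (FMA) per cycle. An FMA
     * is conventionally counted as two floating point operations. */
    double cores = 2048.0;
    double clock_hz = 1.5e9;
    double flops_per_cycle = 2.0;  /* one FMA = 2 FLOPs */

    double peak = cores * clock_hz * flops_per_cycle;
    printf("Theoretical peak: %.0f GFLOPS\n", peak / 1e9);  /* 6144 GFLOPS */
    return 0;
}
```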
Many GPU applications are limited by this throughput, which is why it is so often quoted in specifications: all else being equal, more FLOPS should roughly indicate better performance.
It is important not to read too much into this figure, however. FLOPS ratings are typically theoretical peaks rather than rates that most, or even any, applications should expect to achieve, and a GPU with a higher theoretical peak may still achieve less on average in practice.
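The gap between peak and achieved throughput can be observed directly. The following is a rough CUDA microbenchmark sketch, not a standard methodology; the kernel shape, iteration count, and launch configuration are arbitrary choices for illustration. It keeps every thread busy with dependent FMAs held in registers, times the kernel, and reports the achieved rate for comparison against a theoretical peak like the one computed above:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Iterations per thread; large enough that launch overhead is negligible.
#define ITERS 100000

// Compute-bound kernel: a chain of dependent FMAs on register values,
// so throughput is limited by arithmetic rather than memory traffic.
__global__ void fma_kernel(float *out) {
    float a = 1.0000001f, b = 0.9999999f;
    for (int i = 0; i < ITERS; ++i)
        a = fmaf(a, b, 0.0001f);  // one FMA = 2 FLOPs
    // Store the result so the compiler cannot eliminate the loop.
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;
}

int main(void) {
    const int blocks = 1024, threads = 256;
    float *out;
    cudaMalloc(&out, blocks * threads * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    fma_kernel<<<blocks, threads>>>(out);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);  // elapsed time in milliseconds

    // Total work: 2 FLOPs per FMA, per iteration, per thread.
    double flops = 2.0 * (double)ITERS * blocks * threads;
    printf("Achieved: %.1f GFLOPS\n", flops / (ms * 1e-3) / 1e9);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(out);
    return 0;
}
```

Even on a kernel this contrived, built to do nothing but floating point math, the reported number will generally fall short of the advertised peak; real applications, with memory accesses and control flow, fall shorter still.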
Furthermore, performance depends on many other factors (literally hundreds), including the characteristics of the application itself.