Why do you need multiple exposures? Doesn't one long exposure intrinsically contain every darker exposure? Figure out what the longest exposure should be, capture that one image, and then derive the darker exposures by integrating for less time. I can see that each pixel saturates once it accumulates enough light, but couldn't you then read out a differential every, say, 10 nanoseconds and cumulatively sum each pixel's values over time to achieve the same effect? Perhaps sensor readout isn't fast enough yet, but that seems like something that could be achieved with smaller sensors or simply by investing effort in faster readout.
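The cumulative-readout idea can be sketched numerically. This is a toy simulation, not real sensor code: the full-well depth, readout period, and flux values are illustrative assumptions, and photon arrival is modeled as Poisson noise. Each short differential read rarely saturates, and a running prefix sum over the reads lets you reconstruct a synthetic image for any exposure time up to the total integration.

```python
import numpy as np

rng = np.random.default_rng(0)

FULL_WELL = 4095        # assumed per-read saturation level (12-bit sensor)
READ_INTERVAL_S = 1e-3  # hypothetical differential readout period
N_READS = 1000          # total integration time = 1 second

# Hypothetical scene: per-pixel photon arrival rates (photons/second).
flux = rng.uniform(1e2, 1e7, size=(4, 4))

# Each differential read accumulates only READ_INTERVAL_S worth of light,
# so it clips far less often than a single long exposure would.
reads = rng.poisson(flux * READ_INTERVAL_S, size=(N_READS, 4, 4))
reads = np.clip(reads, 0, FULL_WELL)  # per-read saturation

# Prefix sums along the read axis: every synthetic exposure from
# READ_INTERVAL_S up to N_READS * READ_INTERVAL_S is now available.
cumulative = np.cumsum(reads, axis=0)

def synthetic_exposure(t_seconds: float) -> np.ndarray:
    """Reconstruct the image for any exposure <= the total integration time."""
    n = int(t_seconds / READ_INTERVAL_S)
    return cumulative[n - 1]

dark = synthetic_exposure(0.01)   # a short 10 ms exposure
bright = synthetic_exposure(1.0)  # the full 1 s exposure
```

Note that this sketch sidesteps what the question raises at the end: real sensors add read noise on every readout, so summing a thousand short reads also sums a thousand read-noise contributions, which is one practical reason this isn't free.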