I'm trying to answer his question, which is probably incorrect but still fun.
You are trying to explain why the question is incorrect, which is probably correct but nonetheless boring.
By the way, F = ma and F = G*m1*m2/r^2 are different things. If you are allowed to change the world at will...
Inertia is not the same as mass or gravity.
Mass is a property of an object.
Inertia is the ability of an object to keep moving (at whatever speed, possibly zero).
Gravity is the ability of two objects to attract each other.
If you drop an object and there is no inertia, the object...
Lots of fun.
No brakes in cars, except for parking. All stopping is by wind resistance as soon as the engine stops. You cannot release the parking brake, because cars would get blown around by the wind.
No internal combustion engines, because flywheels do not work. Probably no jet engines either? Looks like...
Vesper8,
If the small partition was before the large one, then expanding the large partition involves either renumbering all clusters or moving all the data. If either of these processes is interrupted midway, the result is not easily recoverable.
At this point, the best looking course of...
Buffered writes in RAM are not delayed for long, because of the risk of a power loss or hard crash. Buffered writes on an SSD can be delayed for as long as there is free space on the SSD.
We have a QNAP unit, and we once had a real (not simulated) hard drive failure with it. The firmware handled that less than impressively. Next time we will probably go with our own build.
Still, it would be useful to know the nature of the problem. It might be that the drive needs to be imaged first, if there is a mechanical problem involved.
Scrap it, we need something less sinister.
Say, create a regular file filled with a compressible pattern.
Write a similar pattern to that file, measure speed. This is an uncompressed sample.
Now, compress the file. It will inevitably fragment.
Write a similar pattern again, that is...
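A rough sketch of that procedure in Python. Assumptions are mine, not from the thread: file name and size are made up, `compact /c` is the stock Windows way to NTFS-compress a file, and the rewrite uses random 4 KiB writes as discussed below.

```python
import os, random, subprocess, sys, time

PATTERN = b"ABCDABCD" * 512          # 4 KiB of a highly compressible pattern

def make_compressible(path, size_mib=16):
    """Step 1: a regular file filled with a compressible pattern."""
    with open(path, "wb") as f:
        for _ in range(size_mib * 256):   # 256 x 4 KiB = 1 MiB
            f.write(PATTERN)

def timed_rewrite(path, block=4096):
    """Overwrite the whole file with a similar pattern, in random 4 KiB
    writes; return throughput in MB/s."""
    size = os.path.getsize(path)
    offsets = list(range(0, size, block))
    random.shuffle(offsets)
    start = time.perf_counter()
    with open(path, "r+b") as f:
        for off in offsets:
            f.seek(off)
            f.write(PATTERN[:block])
        f.flush()
        os.fsync(f.fileno())
    return size / (time.perf_counter() - start) / 1e6

make_compressible("sample.bin")
baseline = timed_rewrite("sample.bin")        # the uncompressed sample
if sys.platform == "win32":
    # NTFS-compress the file in place; it will fragment as it shrinks.
    subprocess.run(["compact", "/c", "sample.bin"], check=True)
compressed = timed_rewrite("sample.bin")      # the compressed sample
print(f"uncompressed: {baseline:.0f} MB/s, compressed: {compressed:.0f} MB/s")
os.remove("sample.bin")
```

The interesting number is the gap between the two throughputs once the file is compressed and fragmented.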
We need a benchmark then. That would be interesting to develop.
Initial condition - an unfragmented file compressed, say, 16:14. This is easy to create.
Test 1 - write the data with exactly the same compression ratio over the original file, say, random 4K writes. The file will be...
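One way to build that initial condition is to mix incompressible random bytes with zeros, tuned so a 64 KiB compression unit shrinks to roughly 14/16 of its size. A sketch, using zlib only as a stand-in for NTFS's own LZ-style compressor (the actual on-disk ratio will differ somewhat):

```python
import random, zlib

def chunk_with_ratio(unit=64 * 1024, target=14 / 16):
    """One compression-unit-sized chunk that compresses to roughly
    `target` of its original size: incompressible random bytes,
    zero-padded (the zeros compress away almost entirely)."""
    rnd = random.randbytes(int(unit * target))
    return rnd + b"\0" * (unit - len(rnd))

chunk = chunk_with_ratio()
ratio = len(zlib.compress(chunk)) / len(chunk)
print(f"achieved ratio ~ {ratio:.2f}")
```

Writing many such chunks sequentially gives an unfragmented file with a predictable compression ratio; reusing the same generator for the overwrites keeps the ratio constant during Test 1.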
Just checked and I happen to have two disk image files (VMware) about 2GB each with 8,000+ disjoint fragments each on this machine. Not as obscene as 10,000, but still quite good. Plus more than 10 files with 1,000+ fragments, including an Outlook PST database. All of these are compressed. The...
Nothinman,
in practice, the thing just gets damn slow if there are 10,000 fragments in a file and you happen to need a full read. This does not affect overall performance significantly, but certain operations (like a full-text search over a large email database) make you wonder "what is it...
There is one more assumption involved: the uniform distribution of failures. The calculation assumes that encountering the first read error does not change the probability of encountering the next one, and also that the probability of read errors does not change over time.
Typically, a compression unit on NTFS is 16 clusters, i.e. 64 KB. So the system has to recompress 64 KB even if you write a single byte. There is another, more important problem: compression coupled with random writes induces bad fragmentation.
The compression unit is 16 clusters. If the original...
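A toy model of that mechanism (the numbers are illustrative assumptions, not measurements): each recompression changes the unit's compressed size a little, and a unit that no longer fits in its old slot has to be relocated, so the file gains a fragment.

```python
import random

def simulate_relocations(units=1000, writes=5000, seed=1):
    """Toy model of compressed-file fragmentation. Each 64 KiB
    compression unit (16 clusters) starts at 14 clusters compressed
    (a 16:14 ratio) with units laid out back to back. A random 4 KiB
    write forces the whole unit to be recompressed; if the new
    compressed size no longer fits in place, the unit is relocated."""
    rng = random.Random(seed)
    size = [14] * units               # compressed clusters per unit
    relocated = set()
    for _ in range(writes):
        u = rng.randrange(units)
        new = rng.randint(12, 16)     # recompressed size varies a little
        if new > size[u]:             # no longer fits in its old slot
            relocated.add(u)
        size[u] = new
    return len(relocated)

print("relocated units out of 1000:", simulate_relocations())
```

Even with modest size variation, a few thousand random writes relocate most units, which is consistent with the multi-thousand-fragment files mentioned above.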