Your description is rather vague, and I'm a simple man. Can you lay out what you did more simply so I can understand?
You throw around that term resize like it's crystal clear, but I think you are conveniently leaving out the details.
You do realize that resizing something can destroy information? *ESPECIALLY* if you're resizing to an uneven multiple. So what exactly do you mean? You seem to describe three separate resizing events, right? What do you mean by resizing it, then resizing it back, then doubling it in size? Whah?
Sorry, I'll try to make that post more straightforward.
http://www.leesaxon.com/forums/Anandtech-Upsampling.jpg
If you captured this image at 600x800, you would get the image on the left. If you captured this image at 300x400 and scaled it 2:1 (or "used 4 pixels to display 1" as you put it, which is the same thing), you would get the image on the right. The rest of the jargon was just explaining how I simulated this and why it was an accurate (not extra-degraded, as you'd theorized) representation of the 2:1 scaling process.
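To make that "4 pixels to display 1" idea concrete, here's a minimal sketch (my own illustration, not the exact code I used for the image above) of 2:1 nearest-neighbor upscaling, where each source pixel simply becomes a 2x2 block on the display:

```python
import numpy as np

def upscale_2to1(img: np.ndarray) -> np.ndarray:
    """2:1 nearest-neighbor upscaling: each source pixel is
    repeated into a 2x2 block ("4 pixels to display 1")."""
    # Repeat each row, then each column, along the image axes.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

# A 2x2 grid of grayscale values becomes a 4x4 grid.
src = np.array([[10, 20],
                [30, 40]])
out = upscale_2to1(src)
print(out)
# [[10 10 20 20]
#  [10 10 20 20]
#  [30 30 40 40]
#  [30 30 40 40]]
```

Note that no new detail is created anywhere in that process: a 300x400 capture blown up this way has exactly the information of the 300x400 original, just spread across more pixels. That's why the right-hand image looks blockier.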
4K TVs must run 2:1 scaling on 24-30 frames each second. However, the algorithm I used in my example would take more than 1/30th of a second to process a single 1080p frame. Therefore, 4K TVs must use a lower-quality algorithm, and real results would actually be a little worse than my example.
Given typical television viewing distances, and that most material won't have the fine detail of my example above, results will likely be "good enough" for most people who aren't cinematographers or photographers. Nonetheless, it's worth knowing, especially since we're on a gearhead/tech forum, that "perfect scaling" isn't actually a thing.