If I remember my analog circuits properly, transferring a signal over a copper wire is definitely not 100% efficient. The longer the wire, the more the signal degrades due to the resistance of the copper. Now, most ISPs have their own internal networks running over optical connections of some sort (which have much better efficiency and throughput), but there are almost always miles of copper wire running from them to your house. There's also the connection from your ISP to the rest of the world, and this will vary a lot depending on where in the world you happen to be trying to reach, both because of the distance and because of how much extra latency is added by the medium you're transferring over. Ideally, with current technology, we would be stringing optical connections from point to point to minimize transfer latency, but optical networks are TERRIBLY expensive. So, unless we can come up with something that's practical from a cost point of view, we're not going to do much better than the copper connections we have right now. And all this only takes into account the raw latency of pushing the information over the physical connection.
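To put rough numbers on the distance point, here's a back-of-the-envelope sketch in Python. The 0.67 velocity factor is a typical ballpark for both copper pairs and fiber (signals in either medium travel at roughly two thirds the speed of light in a vacuum), not a figure from any particular spec:

```python
C = 299_792_458          # speed of light in a vacuum, m/s
VELOCITY_FACTOR = 0.67   # rough ballpark for both copper pairs and fiber

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds."""
    return distance_km * 1000 / (C * VELOCITY_FACTOR) * 1000

for km in (5, 500, 5000):   # last mile, regional hop, transcontinental
    print(f"{km:>5} km -> {propagation_delay_ms(km):6.2f} ms one-way")
```

Even a perfect transcontinental run has tens of milliseconds of one-way delay baked in by physics alone; everything below gets added on top of that.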
We also have to take into account the fact that processing needs to be done on this information at countless internet gateways, routers, and other nodes. Individually, none of these steps takes all that long, but they add up to a noticeable delay. This is where bandwidth comes into play. Ethernet works on a protocol called CSMA/CD. I won't bother explaining the details of how it works, because it's easy enough to look up, but it is a very simple algorithm whose efficiency becomes MISERABLE at over 50% utilization. What this means is that on a relatively heavily saturated network, the delays caused by the very way your Ethernet protocol works are crippling (there's a toy model of this below). Add to this the way TCP/IP works (every packet you send has to be acknowledged by the receiver), and you have a lot of places where you can get screwed on latency. This is why your download speeds get killed when you're using up most of your upload: because you can't send acknowledgements back to the sender very quickly, they spend a lot of time waiting to hear from you before sending you more packets (the second sketch below puts rough numbers on this).

All the things I mentioned here can be improved on. The nodes can be made faster and more efficient (algorithm-wise), the Ethernet protocol can be made MUCH more intelligent, and TCP/IP can be modified to do less waiting and more accepting of subsequent packets. Now, with that said, making these changes would be a HUGE project and terribly difficult to switch over to, due to compatibility and money, but I'm just listing options at this point.
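Here's that toy contention model. To be clear, this is NOT real CSMA/CD (which also senses the carrier before transmitting and uses binary exponential backoff after a collision); it just captures the core problem any shared-medium scheme has: the more stations try to talk at once, the more slots get wasted on collisions. The 20 stations and the probabilities are made up purely for illustration:

```python
import random

def goodput(stations: int, p: float, slots: int = 100_000) -> float:
    """Fraction of slots that carry exactly one (successful) frame."""
    useful = 0
    for _ in range(slots):
        # Each station independently transmits in this slot with probability p.
        transmitters = sum(random.random() < p for _ in range(stations))
        useful += (transmitters == 1)   # >1 transmitter = collision, slot wasted
    return useful / slots

for p in (0.05, 0.2, 0.5, 0.8):
    print(f"offered load p={p:.2f}: goodput ~{goodput(20, p):.3f}")
```

Run it and you'll see goodput peak at a modest offered load and collapse toward zero as every station gets greedy, which is the same qualitative behavior as a saturated Ethernet segment.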
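And the second sketch, for the acknowledgement point. In reality TCP acknowledges data in windows rather than strictly packet by packet, but the bound works out the same: the sender can only have one window of unacknowledged data in flight, so throughput is capped at window/RTT. When your upstream is saturated, your ACKs sit in the upload queue, the effective RTT balloons, and your download rate drops with it. The 64 KB window is the classic default size, picked here just for illustration:

```python
WINDOW_BYTES = 64 * 1024   # classic 64 KB TCP receive window (illustrative)

def max_throughput_mbps(rtt_ms: float) -> float:
    """Upper bound on throughput: one full window per round trip."""
    return WINDOW_BYTES * 8 / (rtt_ms / 1000) / 1e6

# A quiet link vs. RTTs inflated by a saturated upload queue.
for rtt_ms in (20, 100, 500):
    print(f"RTT {rtt_ms:>3} ms -> at most {max_throughput_mbps(rtt_ms):5.1f} Mbps")
```

Same connection, same wire: just letting the RTT grow from 20 ms to 500 ms cuts the ceiling from roughly 26 Mbps to about 1 Mbps, which is exactly the "upload kills download" effect.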
So, if you take into account just the things I have mentioned (and I assure you there are plenty more that I haven't thought of off the top of my head), there are a LOT of places where we can improve to minimize latency over the internet. Whether any of it happens anytime soon is a different story... ;-)