A DPU, how does that sound?

May 11, 2008
20,055
1,290
126
I wrote this on dailytech, how does it sound ?


I hope they also improved the endurance of the flash memory. Because wear leveling and other techniques to prevent the flash memory cel from degrading fast will not be enough. I know modern operating systems already caches and schedules writes to the HDD to improve performance but still, i doubt that flash technology is improved that much. I see somebody creating a new market...

A controller with 1GB of DDram memory for write caches and flash as read cache together with a traditional HDD for bulk storage would be much better. Design the controller that it can write the data in the DDram to the HDD on it's own if the OS desires it. A sort of display list but for HDD controllers. Maybe it is time that the HDD controller get's a little smarter and more versatile. A HDD controller with it's own DRAM interface and it's own flash memory interface and the traditional but stripped down for pure speed version of a HDD interface. It would be like moving the hdd controller on the HDD back to the motherboard again. Keeping only the necessary logic / ram and analog circuits on the HDD. Away with al the overhead.

Example : The lynnfield has a pci express controller on board. A second one could be used only for the data controller connected to the HDD ,flash and dram. Or use A hyper transport connection like AMD has. Strip it down enough to gain a low latency but sufficiënt in speed for the upcoming years.

We would have a system build up of a CPU, GPU and a DPU (data processing unit) . The DPU and main memory will in the future be fused together and be the same .
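The hybrid-controller idea above can be sketched as a toy model: a small DRAM write-back cache in front of a flash read cache, with the HDD as bulk storage. This is a minimal illustrative sketch; the class name, cache sizes, and dict-as-disk are all assumptions, not any real controller's interface.

```python
# Toy model of the proposed hybrid controller: DRAM write cache,
# flash read cache, HDD (a plain dict here) for bulk storage.
# All names and sizes are illustrative.

class HybridController:
    def __init__(self, dram_slots=4):
        self.dram = {}               # write cache (DDR on the controller)
        self.flash = {}              # read cache (NAND)
        self.hdd = {}                # bulk storage (the platters)
        self.dram_slots = dram_slots

    def write(self, block, data):
        # Writes land in DRAM first; the controller flushes to the
        # HDD "on its own" once the write cache fills up.
        self.dram[block] = data
        if len(self.dram) >= self.dram_slots:
            self.flush()

    def flush(self):
        self.hdd.update(self.dram)
        self.dram.clear()

    def read(self, block):
        # Serve from the fastest tier that has the block; promote
        # HDD hits into the flash read cache for next time.
        if block in self.dram:
            return self.dram[block]
        if block in self.flash:
            return self.flash[block]
        data = self.hdd[block]
        self.flash[block] = data
        return data
```

A real controller would also flush on OS command and evict from flash; the point is only the tiering being proposed.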


I'm gathering you didn't mean for this to be a "critique my DT post thread", but one could be mistaken.

I'll ask that posters comment on the idea of a DPU instead of the quality of the OP's post as we don't want this thread getting too exciting

n7
Memory/Storage Mod
 

Soundmanred

Lifer
Oct 26, 2006
10,784
6
81
You should have asked before you posted it.
Seriously.
Terrible sentence formation.
Beginning sentences with "Because" and "Or".
Capitalizing DRAM and AMD but not DDRam, PCI, Lynnfield, etc.
What's with the umlaut in "sufficient"?
Use "coming years" instead of "upcoming years".
It comes across as far from professional or knowledgeable.
 
May 11, 2008
20,055
1,290
126
Originally posted by: Soundmanred
You should have asked before you posted it.
Seriously.
Terrible sentence formation.
Beginning sentences with "Because" and "Or".
Capitalizing DRAM and AMD but not DDRam, PCI, Lynnfield, etc.
What's with the umlaut in "sufficient"?
Use "coming years" instead of "upcoming years".
It comes across as far from professional or knowledgeable.





:disgust:
 

Soundmanred

Lifer
Oct 26, 2006
10,784
6
81
My bad, thought you wanted input on your writing!
You might take my advice anyway; it will make you look more knowledgeable.

As for the original intent, I'm not even sure what you're trying to say, so I can't really comment.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Short answer: you have no idea what you are talking about...
Long answer, which I will only do for the first few sentences...

I hope they also improved the endurance of the flash memory.
we are at the point where we are sacrificing superfluous endurance for speed.

Because wear leveling and other techniques to prevent the flash memory cel from degrading fast will not be enough.
Aside from calling a cell a "cel", and the broken sentence structure, you are simply, utterly wrong. I calculated that most modern MLC drives should last hundreds of years under normal usage. What makes you think they will not?
Let me give you an analogy. Let's say I buy a new Honda Accord; it is considered a long-lasting car. I take it to the racing track and hire a team of drivers whose task is to keep it racing 24/7, flooring it the entire time. How long do you think it will last? SSDs are meant to last 5+ years under THOSE conditions... which means they last closer to 500 years under REAL usage conditions for a home buyer. Granted, some companies really NEED that kind of workload, but a home user cannot possibly produce it.
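The endurance claim follows from simple arithmetic. A back-of-the-envelope sketch, where every figure (capacity, P/E cycles, daily writes) is an illustrative assumption, not any particular drive's spec:

```python
# Back-of-the-envelope SSD endurance estimate. The figures below
# (80 GB drive, 10,000 P/E cycles, 10 GB of host writes per day)
# are illustrative assumptions.

capacity_gb = 80          # an MLC drive of the era
pe_cycles = 10_000        # program/erase cycles per cell
writes_per_day_gb = 10    # a heavy home-user workload

# With ideal wear leveling, total writes before wear-out:
total_writes_gb = capacity_gb * pe_cycles            # 800,000 GB
lifetime_days = total_writes_gb / writes_per_day_gb  # 80,000 days
lifetime_years = lifetime_days / 365
print(round(lifetime_years))  # prints 219
```

Real drives fall short of the ideal (write amplification, uneven wear), but even a 10x penalty leaves decades of life at desktop workloads.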

I know modern operating systems already caches and schedules writes to the HDD to improve performance but still, i doubt that flash technology is improved that much.
Are you trying to say something here? The sentence is too broken. As far as I can tell, you assume that SSDs are "not that fast". You are wrong: I am using an Intel G2 and that thing is VERY, VERY fast and greatly improves OS performance. Every professional review ever written agrees with me.

I see somebody creating a new market...
Oh yes, they will just pull a brand-new technology out of their nether regions. Or do you mean the one you list in the next paragraph, the spindle drive with a 1GB cache? If so, you should know that you structured it wrong, and the content is even more wrong. However, I have run out of precious time to correct each and every one of your many errors, as you seem to be making at least four of them per sentence.
 

yh125d

Diamond Member
Dec 23, 2006
6,907
0
76
Originally posted by: Soundmanred
You should have asked before you posted it.
Seriously.
Terrible sentence formation.
Beginning sentences with "Because" and "Or".
Capitalizing DRAM and AMD but not DDRam, PCI, Lynnfield, etc.
What's with the umlaut in "sufficient"?
Use "coming years" instead of "upcoming years".
It comes across as far from professional or knowledgeable.

Seriously.
Terrible sentence formation.
Sentences with no object
Using etc.
It comes across as snobbish.


 
May 11, 2008
20,055
1,290
126
Originally posted by: yh125d
Originally posted by: Soundmanred
You should have asked before you posted it.
Seriously.
Terrible sentence formation.
Beginning sentences with "Because" and "Or".
Capitalizing DRAM and AMD but not DDRam, PCI, Lynnfield, etc.
What's with the umlaut in "sufficient"?
Use "coming years" instead of "upcoming years".
It comes across as far from professional or knowledgeable.

Seriously.
Terrible sentence formation.
Sentences with no object
Using etc.
It comes across as snobbish.





:thumbsup:
 
May 11, 2008
20,055
1,290
126
Originally posted by: William Gaatjes
I wrote this on dailytech, how does it sound ?


I hope they also improved the endurance of the flash memory. Because wear leveling and other techniques to prevent the flash memory cel from degrading fast will not be enough. I know modern operating systems already caches and schedules writes to the HDD to improve performance but still, i doubt that flash technology is improved that much. I see somebody creating a new market...

A controller with 1GB of DDram memory for write caches and flash as read cache together with a traditional HDD for bulk storage would be much better. Design the controller that it can write the data in the DDram to the HDD on it's own if the OS desires it. A sort of display list but for HDD controllers. Maybe it is time that the HDD controller get's a little smarter and more versatile. A HDD controller with it's own DRAM interface and it's own flash memory interface and the traditional but stripped down for pure speed version of a HDD interface. It would be like moving the hdd controller on the HDD back to the motherboard again. Keeping only the necessary logic / ram and analog circuits on the HDD. Away with al the overhead.

Example : The lynnfield has a pci express controller on board. A second one could be used only for the data controller connected to the HDD ,flash and dram. Or use A hyper transport connection like AMD has. Strip it down enough to gain a low latency but sufficiënt in speed for the upcoming years.

We would have a system build up of a CPU, GPU and a DPU (data processing unit) . The DPU and main memory will in the future be fused together and be the same .


I'm gathering you didn't mean for this to be a "critique my DT post thread", but one could be mistaken.

I'll ask that posters comment on the idea of a DPU instead of the quality of the OP's post as we don't want this thread getting too exciting

n7
Memory/Storage Mod



Very true; after reading my post I agree it can be interpreted wrongly...

Thank you for your edit...

I am much obliged...



 
May 11, 2008
20,055
1,290
126




we are at the point where we are sacrificing superfluous endurance for speed.
Although I understand what you are writing, I must strongly point out that I do not agree with that vision. But then again, I assume you are from the Xbox 360 generation.


aside from calling cell a cel, and broken sentence structure, you are simply utterly wrong. I calculated most modern MLC drives should last hundreds of years at normal usage. What makes you think they will not?
Let me give you an analogy. Lets say I buy a new honda accord, it is considered a long lasting car. I take it to the racing track, I hire of team of drivers, their task is to keep it racing 24/7, and flooring it for the entire time. How long do you think it will last? SSDs are meant to last 5+ years under THOSE conditions... that makes them last closer to 500 years under REAL usage conditions for a home buyer. Granted, some companies really NEED those kind of conditions, but a home user cannot possibly do it.

I want reliable material. Otherwise I will not buy it.


are you trying to say something here? the sentence is too damn broken.. as far as I can tell you assume that SSDs are "not that fast". You are wrong, I am using an intel G2 and that thing is VERY VERY FAST and greatly improves OS performance. Also every review ever written ever by professionals agrees with me.

You forgot to start your sentence with a capital letter.

Of course SSDs are fast compared to single-device platter HDDs, but not compared to RAM. The only bottleneck in a PC is bulk storage, because it is slow.

And on your car analogy...
A 100 hp car with the same weight and transmission as a 50 hp car may be faster, but that does not mean much compared to a 300 hp car of the same weight and transmission, assuming everything else is ideal and the same.
Or, in other words: the speed-ups you mention are relative.
I can accomplish the same with a well-tuned Windows installation and carefully selected programs.

I think there is still a lot of improvement to be found in the way an OS groups writes to external storage, and the same goes for reads. I think the answer lies in firm, tight cooperation between the OS and external storage, as I described above. I was thinking of a memory controller and an HDD controller fused together as one functional unit. What you would see then is something that is omnipresent in the embedded world, only we want terabytes, not megabytes, and gigabyte transfers, not megabyte transfers.
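The write grouping being described can be sketched as a toy coalescer: sort pending writes by block address and merge adjacent or overlapping ranges, so the disk services one seek per run instead of one per write. A minimal sketch, assuming writes arrive as (start_block, length) pairs:

```python
# Toy write coalescer: merges sorted, adjacent/overlapping block
# ranges into runs. Purely illustrative of OS/controller grouping.

def coalesce(writes):
    """writes: list of (start_block, length) tuples; returns merged runs."""
    runs = []
    for start, length in sorted(writes):
        if runs and start <= runs[-1][0] + runs[-1][1]:
            # Touches or overlaps the previous run: extend it.
            prev_start, prev_len = runs[-1]
            runs[-1] = (prev_start, max(prev_len, start + length - prev_start))
        else:
            runs.append((start, length))
    return runs

print(coalesce([(100, 4), (0, 8), (8, 4), (104, 2)]))
# prints [(0, 12), (100, 6)]
```

Four scattered writes become two contiguous runs, which is the seek-saving behavior NCQ and elevator schedulers aim for.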





See you around lord almighty


 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Although I understand what you are writing, I must strongly point out that I do not agree with that vision. But then again, I assume you are from the Xbox 360 generation.
What does that even mean? Anyway, I don't own an Xbox and never will... I am a PC gamer, not someone gaming on a crippled, DRM-laden pseudo-PC... and I don't own a TV, by choice.

I want reliable material. Otherwise I will not buy it.
Yeah, hundreds of years of operation is totally unacceptable!
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
Yeah dude, everyone in the SAN industry has already done what you're talking about.

It's not uncommon to have 512GB to 2TB of RAM for (READ AND WRITE) cache on a SAN. DDR2, no less.


Are you just calling a DPU a RAID controller?
 

Denithor

Diamond Member
Apr 11, 2004
6,300
23
81
Originally posted by: William Gaatjes
Of course SSDs are fast compared to single-device platter HDDs, but not compared to RAM. The only bottleneck in a PC is bulk storage, because it is slow.

But RAM has a nasty tendency not to retain your data when it is powered down. Otherwise we might have a lot more sockets on motherboards and much higher-capacity sticks.

I can accomplish the same with a well-tuned Windows installation and carefully selected programs.

Um, no? Boot time is much, much faster with an SSD, something you simply cannot replicate with a mechanical drive no matter how much you strip down the boot programs, etc.

I think you need to go read the articles before posting anything else.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Originally posted by: Emulex
Yeah dude, everyone in the SAN industry has already done what you're talking about.

It's not uncommon to have 512GB to 2TB of RAM for (READ AND WRITE) cache on a SAN. DDR2, no less.


Are you just calling a DPU a RAID controller?


Yeah, this guy has me a bit confused. I am doing exactly what you are doing, attempting to "map" his vernacular onto the world of discrete RAID cards, where we have multi-GB RAM caches to buffer IO to the underlying media (be it spindle-based or flash-based)... and in doing so I am not seeing anything new under the sun here, just a guy who is unaware of the existence of an entire industry beyond the "Best Buy" consumer market when it comes to IO hardware.

Not that I mind contemplating the merits of new ideas, but this thread kinda starts off on the wrong foot IMO, in that it is a tad too strongly in the self-aggrandizing department, with this tinge of "look at me, me is smart, tell me I is smart please" vibe. I'm sure I just misunderstood the OP is all, but getting to the science of the topic, I just see a RAID controller with onboard cache at the heart of it.

I have benches I will add to this thread later (when my main rig is back up) to add data to support my (and Emulex's) point regarding cache on raid controllers and the performance impact that gives to spindle-drives.
 
May 11, 2008
20,055
1,290
126
Originally posted by: Emulex
Yeah dude, everyone in the SAN industry has already done what you're talking about.

It's not uncommon to have 512GB to 2TB of RAM for (READ AND WRITE) cache on a SAN. DDR2, no less.


Are you just calling a DPU a RAID controller?

Not really. I would rather see it directly tied to the processor, not behind all these buses. My hope is that we could connect the bulk storage directly to the CPU.
With the read speeds of larger DRAM caches approaching the speed of main memory, we could do what we want: instantaneous access to data. If I had to call it something, it would just be a memory management unit.

I am too early...


 
May 11, 2008
20,055
1,290
126
Originally posted by: Idontcare



Not that I mind contemplating the merits of new ideas, but this thread kinda starts off on the wrong foot IMO, in that it is a tad too strongly in the self-aggrandizing department, with this tinge of "look at me, me is smart, tell me I is smart please" vibe. I'm sure I just misunderstood the OP is all, but getting to the science of the topic, I just see a RAID controller with onboard cache at the heart of it.

You sure did...

I like generating ideas. I am fortunate enough to do this on a daily basis because of my hobbies and my work. I might not always be original (a bit hard with 6 billion people around, some of them a lot smarter than I am), but I like to discuss ideas. What can I say; I am a serious guy with a positive attitude.

However, I notice that:

1. Because of the language barrier, I sometimes get interpreted wrongly.
2. I mess with people's sacred extensions of their manhood, and this makes those people highly aggressive.

I get a lot of negative responses to my posts, while I myself was just hoping for an open and clear discussion. I do have to say, you are not one of those people.

I just think tighter integration is possible if the OS writers and hardware designers worked a bit more closely together. I know I am asking a lot; it is amazing that today's standards are even standards. Anyway, I do wish the designers sometimes showed less of a marketing attitude and more of a pure-performance one.

The way I understand it, the OS already caches writes, as does the RAID controller, and so does the HDD controller, with native command queuing for streaming writes. I am sure the same situation also exists for reads, with the exception of NCQ.
I see a lot of steps that could be handled by a single device under direct control of the OS. If you want to call this a RAID controller, be my guest. I just see an MMU with a direct link to bulk storage. It's just old techniques: everything is memory-mapped.
UNIX: everything is a file, but in memory. Not original at all.
In hardware it would just be a stripped-down, low-latency, high-bandwidth version of HyperTransport or QPI or any other CPU-to-CPU link.


I have benches I will add to this thread later (when my main rig is back up) to add data to support my (and Emulex's) point regarding cache on raid controllers and the performance impact that gives to spindle-drives.

Please do; I am interested in the exact numbers, but I already know the obvious.

 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Originally posted by: William Gaatjes
I just think tighter integration is possible if the OS writers and hardware designers worked a bit more closely together. I know I am asking a lot; it is amazing that today's standards are even standards. Anyway, I do wish the designers sometimes showed less of a marketing attitude and more of a pure-performance one.

Put this as the intro to your OP and you'd have spawned a whole different attitude in the thread in my opinion, but hindsight is that way.

It's the subtleties of how you opened the OP, it comes across as "I had this stroke of genius while writing in another blog, and it was sooo genius that I just had to come here and share with you all so you could revel in my genius".

Obviously that wasn't how you meant it, am just explaining how it kind of came across in hopes that it goes some distance to help explain perhaps why the majority of responses in the thread were aimed at "squeezing the ego".

At any rate, communication barriers exist even within one's own dialect, so don't get overly concerned with others misinterpreting you, unless you are really interested in figuring out how to engineer your posts so as to better avoid such misinterpretations in the future.

Gaatjes, that's Dutch, right?

On your topic: now that you've explained it in enough different ways, I think I am finally starting to comprehend the novelty of it all. It really strikes at the fact that prefetchers have received an uneven amount of focus and implementation.

CPUs and the CPU side of the prefetching business have enjoyed a healthy and robust development environment. But on the data-storage side of things, not so much. What we need are better prefetchers for pulling data out of the hard drive and into RAM ahead of the user actually attempting to use the data that was prefetched.

Yeah, I know this is what SuperFetch and Turbo Memory were supposed to accomplish, but since we are having this conversation, clearly those efforts did not complete the task.

RAID cards with their caches and controllers do a fair amount more prefetching and buffering, but they still don't materially improve reads, because the prefetchers just aren't all that great.

The first step, as you hit on already, is the available bandwidth. Prefetchers in CPUs didn't get too aggressive until dual-channel 6GB/s memory topologies were common. We need hard-drive bandwidths that are 2-3x what the HDD itself needs for reading/writing, so that an aggressive prefetcher doesn't starve the HDD itself of bandwidth (just as prefetchers are disabled on 4S systems for the same reasons/issues).
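The storage-side read-ahead being described can be sketched as a toy model: on a sequential access pattern, speculatively pull the next few blocks into a RAM cache before they are requested. The class name, the depth, and the dict standing in for the disk are all illustrative assumptions:

```python
# Toy storage read-ahead prefetcher: detects sequential access and
# fetches the next `depth` blocks into a "RAM" cache ahead of use.
# Illustrative only; real prefetchers track multiple streams.

class Prefetcher:
    def __init__(self, disk, depth=4):
        self.disk = disk          # block -> data, stands in for the HDD
        self.cache = {}           # the RAM cache
        self.last = None          # last block the user asked for
        self.depth = depth        # read-ahead window
        self.demand_misses = 0    # reads that had to wait on the disk

    def read(self, block):
        if block not in self.cache:
            self.demand_misses += 1
            self.cache[block] = self.disk[block]
        # Sequential pattern detected: spend spare disk bandwidth
        # pulling the next blocks in before they are requested.
        if self.last is not None and block == self.last + 1:
            for b in range(block + 1, block + 1 + self.depth):
                if b in self.disk and b not in self.cache:
                    self.cache[b] = self.disk[b]
        self.last = block
        return self.cache[block]
```

Reading blocks 0 through 7 sequentially incurs only two demand misses; every later block is already in RAM when asked for, which is exactly the latency-hiding the extra bandwidth pays for.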

But why hasn't this been done? Money. Look at the ASP of even the most expensive 2TB desktop drive. There is no money in the HDD business anymore, not like a decade ago, when at least your top-of-the-line desktop HDD could fetch $800 and healthy margins.

So even if a consumer would step up and buy a much better streamlined OS-and-HDD package, who is going to pay for the development of such a system in advance of those first units hitting the shelves?

I own an Areca 1280ML card with 2GB of cache; IIRC it cost me either $1500 or $1800. Point is, the sum total purchase price of all the drives I have hooked up to that 1280ML to date doesn't even come close to equaling the amount I spent on the RAID card, and I'd do it again if I had the opportunity, because sticking spindle drives behind a 2GB cache really, really helps mask the horrid write bandwidth and random IO capability of HDDs.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Originally posted by: William Gaatjes
I have benches I will add to this thread later (when my main rig is back up) to add data to support my (and Emulex's) point regarding cache on raid controllers and the performance impact that gives to spindle-drives.

Please do, i am interested in the exact numbers but i already know the obvious.

OK, here are the CrystalDiskMark scores for a relatively modern 1TB 7200rpm Hitachi drive hooked up to an ICH9.

Here are the CrystalDiskMark scores for the same drive hooked up to an Areca 1280ML with an Intel 800MHz IOP341 I/O processor and 2GB of on-card controller cache.

(In this case the RAID controller card is not being used for RAID functionality at all, just as a pre-processor with a huge cache sitting between the system and a single slow-as-glaciers-melting spindle drive.)

Note how high those 4K random reads/writes are, basically the rate-limiting speed of the controller card's logic and cache, compared to what folks typically see from the latest Intel G2 drives.

Now, there isn't any magic sauce here, no gold at the end of the rainbow; this is what you get if you attach a $75 1TB spindle drive to an $1800 RAID controller card... sure you are going to see nice results. But that's not the point. The point is that those $300 SSDs vs. the raw performance of a $75 1TB drive attached to an ICH9 is just a world of performance difference for the money.

Now, sure, the performance of the RAID card is not as good as just going ramdisk. If I understand your argument, you are saying that until we see mass storage operating at the efficiency and speed of a ramdisk, there is room and opportunity for improvement, and I totally agree with that sentiment.
 

alyarb

Platinum Member
Jan 25, 2009
2,444
0
76
This person is not interested in being corrected on this matter. It's like trying to argue with Nemesis (no offense to him; sometimes he is right, sometimes he is wrong, but either way it is difficult to work with him because of how he forms his thoughts in English). He shows a total lack of understanding of what the given components in a computer do. His concept of the "data processing unit" is perfect proof of this. What would the function of the DPU be? To process data? We have that; it's called a CPU. Hard drives are storage devices. We already have storage devices built with high-end CMOS processes; they're called SSDs.

And for the last time: they are fast.


Will they be faster if we brought an SSD onto the die of a CPU? Latency-wise, yes, but not in aggregate bandwidth. However, something tells me this is a whole separate can of worms.
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Originally posted by: alyarb


Will they be faster if we brought an SSD onto the die of a CPU? Latency-wise, yes, but not in aggregate bandwidth. However, something tells me this is a whole separate can of worms.

SSD-like storage is already integrated about as closely as it can get, for now, in the embedded market. We use system-on-a-chip designs where the CPU is one layer and right above/below it is the NAND flash, then above that the DRAM, etc., stacking them about as close as you can get with current technology. It isn't really done for speed as much as for space and convenience, though it is extremely fast. It is much easier to build a board when the storage, CPU, video, etc. are all in one chip.

The largest system-on-chip storage I know of is 8GB. It is going to be a long time till it catches up with external SSDs.

 

alyarb

Platinum Member
Jan 25, 2009
2,444
0
76
Yeah, but you do realize that those embedded systems are small single chips, and you only get the bandwidth of that one chip. How much bandwidth can you get on a serial bus directly to one chip? 60, 70 MB/s? SSDs get their bandwidth from multiple chips, not just one. Of course the embedded latency is superbly low, but the bandwidth is not stupendously high.

Do you have any transfer-rate data on that 8GB SoC NAND?


Not a direct analogy, but the fastest CPU caches manage less than 100 GB/s, and the more familiar caches in our x86 CPUs are more in the neighborhood of ~20GB/s. Meanwhile, video-card memory buses are flirting with 200 GB/s because the bandwidth is aggregated across many modules. Latency is great for small pinches of data, as with caches and RAM, but when you are trying to pump a river of data, what you want is bandwidth, and that is the strategy with GDDR buses, SSDs, RAID...
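The aggregation arithmetic here is simple to make concrete: per-device bandwidth times the number of devices read in parallel. The per-chip and per-pin figures below are rough, period-typical assumptions, not any product's specs:

```python
# Bandwidth aggregation: many slow devices in parallel beat one
# device on its own. All figures are illustrative assumptions.

# One NAND die on its own bus vs. a multi-channel SSD controller:
nand_chip_mb_s = 40        # a single NAND chip's transfer rate
channels = 10              # channels on the SSD controller
ssd_mb_s = nand_chip_mb_s * channels
print(ssd_mb_s)            # prints 400

# A wide GDDR bus aggregates many modules the same way:
gddr_bus_bits = 256        # total bus width across all modules
per_pin_gbit_s = 5         # data rate per pin, GDDR5-era
gddr_gb_s = gddr_bus_bits * per_pin_gbit_s / 8
print(gddr_gb_s)           # prints 160.0
```

Same principle in both cases: the single-chip SoC gets only one term of the product, while SSDs and GDDR multiply by the channel count.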
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Originally posted by: William Gaatjes
A HDD controller with it's own DRAM interface and it's own flash memory interface and the traditional but stripped down for pure speed version of a HDD interface. It would be like moving the hdd controller on the HDD back to the motherboard again. Keeping only the necessary logic / ram and analog circuits on the HDD. Away with al the overhead.

The only thing that could be moved off the drive would be the local cache, and that wouldn't make enough of a difference to make sense. The HDD controller was only off the HDD back in the MFM/RLL days, and it was done that way then because it was impossible to put the required hardware on the drive: it took board space.

Example : The lynnfield has a pci express controller on board. A second one could be used only for the data controller connected to the HDD ,flash and dram. Or use A hyper transport connection like AMD has. Strip it down enough to gain a low latency but sufficiënt in speed for the upcoming years.

Something has to process the instructions from the OS to the various chips on the board. Direct access to on-board chips is not available on the x86 platform. In the embedded world it is done all the time, but that is a totally different architecture. If you use CPU time to get data from your proposed DPU, then it would have to take less time than the CPU doing it itself. Right now, the CPU and the data it needs from storage are not a bottleneck for any application except file copying, and for that there are already controller cards.

Applications that need large amounts of data spend more time waiting for the CPU to process the data than they do reading it from storage.

We would have a system build up of a CPU, GPU and a DPU (data processing unit) . The DPU and main memory will in the future be fused together and be the same .

Maybe once system-on-a-chip designs become more advanced it will be possible to put large amounts of storage on the chip itself, negating most of these issues. We are a long way from that, though.

 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Originally posted by: alyarb
Yeah, but you do realize that those embedded systems are small single chips, and you only get the bandwidth of that one chip. How much bandwidth can you get on a serial bus directly to one chip? 60, 70 MB/s? SSDs get their bandwidth from multiple chips, not just one. Of course the embedded latency is superbly low, but the bandwidth is not stupendously high.

Do you have any transfer-rate data on that 8GB SoC NAND?


Not a direct analogy, but the fastest CPU caches manage less than 100 GB/s, and the more familiar caches in our x86 CPUs are more in the neighborhood of ~20GB/s. Meanwhile, video-card memory buses are flirting with 200 GB/s because the bandwidth is aggregated across many modules. Latency is great for small pinches of data, as with caches and RAM, but when you are trying to pump a river of data, what you want is bandwidth, and that is the strategy with GDDR buses, SSDs, RAID...


That is the reason I said it would be a long time before we see anything like it in the x86 world. The embedded chips don't need anything on the outside to access the RAM. The only support chips needed are for things like USB or sound, and sometimes even those are included. Bandwidth is more than high enough for everything the chip needs, because it is a totally different way of doing things.

Specs on the NAND chip:
Interface latency to the NAND is 0.8 µs max, 0.5 µs typical.
Time to seek an address and begin transferring data is 4 µs max, 1.8 µs typical.
The transfer rate I couldn't say, as it is labeled "sampling", which means it just isn't finalized enough for them to officially quote a rate.
 
May 11, 2008
20,055
1,290
126
Originally posted by: Idontcare


Put this as the intro to your OP and you'd have spawned a whole different attitude in the thread in my opinion, but hindsight is that way.

It's the subtleties of how you opened the OP, it comes across as "I had this stroke of genius while writing in another blog, and it was sooo genius that I just had to come here and share with you all so you could revel in my genius".

Obviously that wasn't how you meant it, am just explaining how it kind of came across in hopes that it goes some distance to help explain perhaps why the majority of responses in the thread were aimed at "squeezing the ego".

At any rate, communication barriers exist even within one's own dialect, so don't get overly concerned with others misinterpreting you, unless you are really interested in figuring out how to engineer your posts so as to better avoid such misinterpretations in the future.

Well, I can come on a little strong, that is true. But in my defense, it is all from a good heart.




Gaatjes, that's Dutch, right?


Sort of; it is a bit funny, though...


I think it is important to be tidy and fresh. Apart from the logical reasons for being clean and fresh, I also found it obvious that it improves the chance of other people getting a positive first impression of me. I myself lack this feature, but I found out that it means a lot to most people.

I was hanging out with friends on Friday nights when I was young, and it was always believed I was a predator of ladies. This I am not, and I never claimed that I was, but that aside: since I was always the tech nerd as well, I was always the one fixing everybody's software and hardware problems on the PC. Since the operating system was Windows, I too was occasionally guilty of yelling a curse at the address of Bill Gates. Since Bill and I have similar first names and I was an assumed lady-killer, my nickname became "william gaatjes" in an email from my friends. Gaatjes is Dutch, yes; translated, it means "little holes". I don't think I have to explain that.
That's where my forum name comes from.




On your topic: now that you've explained it in enough different ways, I think I am finally starting to comprehend the novelty of it all. It really strikes at the fact that prefetchers have received an uneven amount of focus and implementation.

CPUs and the CPU side of the prefetching business have enjoyed a healthy and robust development environment. But on the data-storage side of things, not so much. What we need are better prefetchers for pulling data out of the hard drive and into RAM ahead of the user actually attempting to use the data that was prefetched.

Yeah, I know this is what SuperFetch and Turbo Memory were supposed to accomplish, but since we are having this conversation, clearly those efforts did not complete the task.

RAID cards with their caches and controllers do a fair amount more prefetching and buffering, but they still don't materially improve reads, because the prefetchers just aren't all that great.

The first step, as you hit on already, is the available bandwidth. Prefetchers in CPUs didn't get too aggressive until dual-channel 6GB/s memory topologies were common. We need hard-drive bandwidths that are 2-3x what the HDD itself needs for reading/writing, so that an aggressive prefetcher doesn't starve the HDD itself of bandwidth (just as prefetchers are disabled on 4S systems for the same reasons/issues).

But why hasn't this been done? Money. Look at the ASP of even the most expensive 2TB desktop drive. There is no money in the HDD business anymore, not like a decade ago, when at least your top-of-the-line desktop HDD could fetch $800 and healthy margins.

So even if a consumer would step up and buy a much better streamlined OS-and-HDD package, who is going to pay for the development of such a system in advance of those first units hitting the shelves?

I own an Areca 1280ML card with 2GB of cache; IIRC it cost me either $1500 or $1800. Point is, the sum total purchase price of all the drives I have hooked up to that 1280ML to date doesn't even come close to equaling the amount I spent on the RAID card, and I'd do it again if I had the opportunity, because sticking spindle drives behind a 2GB cache really, really helps mask the horrid write bandwidth and random IO capability of HDDs.

Very true; I agree with you on the prefetcher part, and also on the cost of the implementation. I see it like this: the OS always knows when data is needed, so the OS is the candidate to deliver the information about what is useful to prefetch. Now that CPUs are getting so fast and have lots of cores, I say: do not wait until those cores are used. I ask the designers of CPUs, HDDs, chipsets and operating systems:

Let's use a few million transistors to set up a device that can handle this. It would really improve things. Let the OS handle it on the software side, and use that special core to figure out what to prefetch. Since there are spare cores as well, we can use some of their calculation power too. The number of cores will only keep growing, and a specially designed core for a certain task will always be faster, on the same process, than a general-purpose core.


 