- Jan 12, 2009
- 38
- 0
- 0
The Physical Hard Drive (Patent Pending)
The Physical Hard Drive patent covers three main concepts with many subconcepts. As the name implies, it is something you can hold in your hand and touch. It can be translated to binary, a key requirement of any storage technology. It should be considered hard-coding: the source cannot be rewritten without significant technological advances.
The PHD can be made any way a manufacturer or customer desires. It will probably resemble a current hard drive, however.
Any of the three claims can be required, optional, or disallowed in a potential final product. The system has properties compression experts should recognize, and in its own way it physically compresses data to a higher density than existing technology does.
Encryption is a possible side benefit of the system. Since so much data is encodable in so little space, it is possible to alter the ways it is written and read to achieve encryption that is nearly impossible to decipher. The difficulty rises exponentially with the amount of data encoded in a portion.
The three main concepts are 'color', 'shape', and 'scaling'. Color will be the easiest to understand and scaling the hardest.
Color 3D printers have come a long way in a few short years. They can work at 25 microns for full color, and this resolution is expected to shrink further in the coming years. Monocolor structures have been printed at 10 microns per piece, and this is expected to shrink further as well.
The wavelengths of the human-visible color spectrum, as well as other light-based spectra, are sub-micron. This means that printing down to 1 micron in size will not prevent the color from being emitted.
The human eye alone can detect about 10 million colors. A properly made imaging system should be able to easily surpass that. It may be that a customer wants 25 million colors or that a manufacturer will only do 1,026 colors. This is an engineering and cost-effectiveness question and is not my concern. I am merely the person who researched it and then patented it. PATENT PENDING!
Ten million colors is about 23 bits per cell. For the same per-square-inch density, a magnetic-pit drive will get roughly 20 bits for the 23 bits we could color-encode. That is a 15% increase in density from the human-eye color range alone. Since magnetic-pit density can only go so far, while color counts can keep growing, this will eventually be about 30 bits to 1 bit in favor of color.
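As a quick check of the arithmetic, here is a minimal Python sketch that converts a count of distinguishable colors into bits per cell (Python is used only for illustration; it is not part of the PHD itself):

```python
import math

def bits_per_cell(num_colors: int) -> float:
    """Bits of information one cell can hold if it can show num_colors distinct colors."""
    return math.log2(num_colors)

print(bits_per_cell(10_000_000))  # ~23.3 bits -- the "10 million colors is 23 bits" figure
print(bits_per_cell(25_000_000))  # ~24.6 bits for a more capable imaging system
print(bits_per_cell(1026))        # ~10.0 bits for a modest 1,026-color product
```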
Of course, other new and novel memory systems are in development, including "Racetrack" memory from IBM and "Memristor" memory from HP. Both will eventually equal or beat color on its own.
However, there are two more aspects to cover.
Shape is a factor for compression and encryption in our Physical Hard Drive.
An easy way to describe shapes is via graph paper. Imagine erasing the barrier between two cells in a 10x10 box of graph paper. We now have 98 small cells and 1 large cell. Our large cell can be aimed left-right or up-down, and there are 90 places it can go in either alignment. This means that if two colors are used (one for 1 values and one for 0 values), we now have 98 + 6 bits encodable (6 rounded down from the 90 possible placements), based on where that shape sits.
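Here is a small Python sketch of that count (illustrative only), counting where a 1x2 merged cell can sit in a 10x10 grid and converting placements into whole bits:

```python
import math

def merged_cell_placements(rows: int, cols: int) -> int:
    """Number of positions a 1x2 merged cell can occupy in a rows x cols grid."""
    horizontal = rows * (cols - 1)  # 10 * 9 = 90 left-right placements
    vertical = (rows - 1) * cols    # 9 * 10 = 90 up-down placements
    return horizontal + vertical

print(merged_cell_placements(10, 10))  # 180 placements total, 90 per alignment
print(math.floor(math.log2(90)))       # 6 whole bits from 90 placements, as used above
print(math.floor(math.log2(180)))      # 7 whole bits if both alignments are distinguished
```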
The thing is, we can grow shapes or add shapes via the same means. And if we can identify the shape regardless of its colors, we gain even more total encodable data in a same-sized field.
The shapes can be made via air gapping, changing the surface of the color from smooth to certain textures, elevation differences, outlining, or some other means. Again, this is an engineering issue, not critical to the science.
Shapes can also be made 3D if a method is used that can cleanly identify the whole shape.
Shapes can dramatically increase the storage capacity of a PHD. Combined with colors, the relative gain is smaller (in our 10x10 scenario, instead of 100 bits we would have 2,300 bits, and one shape would give 2,277 + 6, an unequal ratio). Eventually, as we tried more and varied shapes, we could possibly add 5% more to the total data encoded. There are possible ways to increase this, but I will make sure the current post is understood before continuing on shapes.
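The combined figure can be rechecked the same way (again just an illustrative sketch of the arithmetic):

```python
cells = 10 * 10      # cells in the 10x10 field
bits_per_cell = 23   # ~23 bits per cell from 10 million colors

print(cells * bits_per_cell)        # 2300 bits with color alone
print((cells - 1) * bits_per_cell)  # 2277 color bits once two cells are merged into one shape
# ...plus the ~6 bits encoded by where that merged cell sits, i.e. 2277 + 6.
```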
Scaling is the third main concept. It has two main subconcepts and a lot of minor concepts. The two can be described as dimensions and "skipping". The two work exceptionally well together.
Dimensions is simple if you try. Take a 10x10x10 box of bits. That is 1,000 bits total. Take the top 10 off and apply them to the side; now we have a 9x12x10 box with some of the final column absent. Instead we could have taken just 1 bit and applied it to the side (in up to 100 different locations).
Take 5 bits, only 5 bits, and make every possible pattern... there are more than 256 possible layouts we can make. This scales truly fast as the bit count grows.
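That claim depends on how the patterns are counted; one plausible reading (an assumption on my part) is that the 5 bits can each be appended at any of the ~100 side locations mentioned above, in which case the placements alone number in the tens of millions:

```python
import math

def positional_bits(positions: int, k: int) -> float:
    """Bits carried purely by *where* k indistinguishable extra bits are placed
    among `positions` candidate locations (one reading of the 'layouts' claim)."""
    return math.log2(math.comb(positions, k))

print(math.comb(100, 5))        # 75,287,520 distinct placements -- far more than 256
print(positional_bits(100, 5))  # ~26.2 bits of information from placement alone
```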
Skipping has two possible functions. First, it can physically remove a piece, and second, it can insert an empty spot in a PHD section.
This has been a hard concept for some, so I will take a moment to describe it with an analogy. Take a sheet of graph paper and cut some sections out. If it was 10x10, we now have fewer than 100 squares. The other way is to cut all 100 squares out separately and then put them together again, but sometimes leave space between them. In the second example our dimensions grow but the cell count does not.
The math is simple for adding empty cells: to a certain extent we now have "trinary" instead of binary. This means we gain about 50% more data saved across the whole dimensions (one should limit it so no "empty cells" go into or beyond the final row or column, since that is the jurisdiction of the first aspect of scaling, dimensions).
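For reference, the exact ceiling for a three-state cell follows from log2(3); the 50% figure above is a conservative round-down of it:

```python
import math

bits_per_trinary_cell = math.log2(3)      # ~1.585 bits when a cell can be 0, 1, or empty
print(bits_per_trinary_cell)
print((bits_per_trinary_cell - 1) * 100)  # ~58.5% more than a plain binary cell
```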
Reducing our counts also gives us new potential outcomes in trinary, but keeps our current dimensions.
Both work in similar manners, but when combined with established "usual dimensions" they can truly lead to two vastly different outlooks.
Due to scaling we have the potential for unusual modifications to our boundaries. This means we should use a system of subfiles instead of one full-sized file. Subfiles, however, also allow us some play in adding information (this is a subaspect of scaling). These boundaries can be used to reduce or increase size to take advantage of number plays, actual scaling issues, and situations where a smaller size can be represented with a far smaller set of bits.
We can assume borders, represent them numerically, represent them physically, use an unused color to identify them (one spot would work in many cases, two in most, I presume), or otherwise provide a means to identify borders. This is both a math and an engineering question.
The method works with existing tech in some regards. A terabyte-sized file with a single removed bit (pit) on a physical drive can mean about 40 bits of info; 2 bits removed is beyond the computational abilities of my computer. This is of course hard-coded data, and yes, a means to identify hard-coded data would be needed (software, probably), but it is very likely modern drives can be made with portions of the OS hardcoded to the hard drive.
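A back-of-the-envelope check of that figure, assuming a decimal terabyte (8 trillion bit positions):

```python
import math

TERABYTE_BITS = 8 * 10**12  # assumed: 1 TB = 10**12 bytes = 8 * 10**12 bit positions

# The position of one removed bit (pit) selects one of TERABYTE_BITS possibilities.
print(math.log2(TERABYTE_BITS))                # ~42.9 bits, the same order as the 40-bit figure above
# In general, k removed bits carry log2(C(n, k)) bits of positional information.
print(math.log2(math.comb(TERABYTE_BITS, 2)))  # ~84.7 bits for two removed bits
```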
A record album uses grooves to indicate changes in the music; if you skip portions (like between songs) you gain the effect of scaling. Scaling can work with CD-ROMs as well. Racetrack memory... remove a string, or direct a series of strings sideways instead of up/down. Memristor memory gains by removal or filling and in theory could use colors as well.
For compression experts: this hard-encoding method allows us to break the software barrier of entropy with physical representation, but it still keeps the pigeonhole rule. We simply make the pigeonholes themselves take different patterns and call that a way to encode data.
For encryption experts: this PHD pushes a single-bit change to affect as many as about 60-70 bits. Later I can show more gains for encryption, which should make you understand that this system can be quite uncrackable.
Now before others say it... I know the system will be extremely slow to read and impossible to rewrite. I understand that software will be needed to read the drives. This creation may only be feasible for large firms that store excess information for years without viewing it.
This includes banks, hospitals, governments, prisons, financial firms, security companies, courts, and more.
Selling Points:
The big selling point is going to be pricing. With current tech, storage space might cost big entities $50 per terabyte. My system might ultimately drop to $2 to $5 per terabyte after running about $20 per terabyte at first. (These are estimates based on plastic prices, weights of current drives, writer costs distributed over many PHD drives, reader costs over many PHD drives, and the predicted outer materials required for the drives.)
That said, the smaller physical size of the drives will require less space to store. When storing data in a vault, space is at a premium. By reducing space we save companies money.
Additionally, there are companies, such as banks, that desire strong encryption on all data. My PHD allows for a tremendous amount of built-in encryption (explained later).
The ability to hardcode data onto existing drives with no size increase to the existing hardware is a potential winning point. While technology has generally been moving away from the idea of hardcoding data, a hardcoded backup of specific OS code would be very attractive.
Shelf life would be extremely long. Plastics, composites, metals, and other plausible materials can last centuries as components of the PHD. If no magnets, spin functions, or other electronic equipment is used inside the PHD, then it can (except under some read techniques) never wear down. Think of historians 5,000 years from now using a PHD drive as their source!
In time the read speed will dramatically increase. Moore's Law (the number of transistors doubling every two years) means that GPUs, and to a lesser predicted extent CPUs, will eventually be able to process the data at speeds humans are accustomed to now. RAM will also have a significant place in the processing and is also going to improve over time. Of course, a dedicated read chip might be the fastest solution, but it is not necessary for the PHD to work.
The PHD can work with "off the shelf" technologies and some software encoding. Where most new technologies require extensive research and development costs, the PHD could be launched in less than a year.
__________
I expect an initial version to get 150-200% of the data density of existing storage methods per square inch at a much reduced cost. Eventually this should exceed 500% as 3D printing improves.
______________________
Forgive typos and grammar; written on a smartphone with a thumb in a brace...