Physical Hard Drive - PHD - a new technology!

TheEinstein

Member
Jan 12, 2009
38
0
0
The Physical Hard Drive (Patent Pending)

The Physical Hard Drive Patent covers three main concepts with many subconcepts. As the name implies, it is something you can hold in your hand and touch. It can be translated to binary, a key requirement of any storage technology. It should be considered hard-coding: the source cannot be rewritten without significant technological advances.

The PHD can be made any way a manufacturer or customer desires. It will probably be similar to a current hard drive, however.

Any of the three claims can be required, optional, or disallowed in a potential final product. It has properties compression experts should recognize and in its own way -physically- compresses data to a higher density than existing technology does.

Encryption is a possible side benefit of the system. Since so much data is encodable in so little space, it is possible to alter the ways it is written and read to achieve near-impossible-to-decipher encryption. The difficulty rises exponentially with the amount of data encoded in a portion.

The three main concepts are 'color', 'shape', and 'scaling'. Color will be easiest to understand and scaling the hardest.


Color 3D printers have come a long way in a few short years. They are able to work at 25 microns for full color, and this is expected to shrink further in the coming years. Structures in monocolor have been printed at 10 microns per piece, and this is expected to shrink further as well.

The human color spectrum, as well as other light-based color spectra, has sub-micron wavelengths. This means that printing down to 1 micron in size will not prevent the color from being emitted.

The human eye alone can detect 10 million colors. A properly made imaging system should be able to easily surpass that. It may be that a customer wants 25 million colors or that the manufacturer will only do 1026 colors. This is an engineering and cost-effectiveness question and is not mine to decide. I am merely the person who researched it, and then patented it. PATENT PENDING!

Ten million colors is about 23 bits. For the same per-square-inch density, a magnetic-pit drive will get roughly 20 bits for the 23 bits we could color-encode. This is a 15% increase in density with color from the human eye alone. Since magnetic-pit density can only go so far, where colors can greatly grow, this will eventually be about 30 bits to 1 bit in favor of color.
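
A quick check of the arithmetic (a Python sketch; the color counts are the figures above, and the count is idealized, ignoring read accuracy):

import math

# bits encodable per color spot if n distinct colors can be told apart
for n in (10_000_000, 25_000_000):
    print(n, math.log2(n))   # ~23.25 and ~24.58 bits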

Of course, other new and novel memory systems are in development, including "Race Track" memory by IBM and "Memristor" memory by HP. Both will eventually equal or beat colors in and of themselves.

However, there are two more aspects to cover.


Shapes are a factor for compression and encryption in our Physical Hard Drive.

An easy method to describe shapes is via graph paper. Imagine erasing the barrier between two cells in a 10x10 box of graph paper. We now have 98 small cells and one large cell. Our large cell can be aligned left-right or up-down. There are 90 places it can go in either alignment. This means that if two colors are used (one for 1 values and one for 0 values), we now have 98 + 6 bits (rounded down from 90 outcomes) encodable based on where that shape is.
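
To verify the counting (a Python sketch; it follows the figure above of 90 placements per alignment):

import math

horizontal = 9 * 10                     # a 1x2 merged cell has 9 positions per row, 10 rows
vertical = 10 * 9                       # symmetric count for the up-down alignment
print(horizontal, vertical)             # 90 90
print(math.floor(math.log2(90)))        # 6 -> the "+6 bits", rounded down from 90 outcomes
print(98 + math.floor(math.log2(90)))   # 104 bits total in the 10x10 example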

The thing is, we can grow shapes or add shapes via the same means. And if we can identify the shape regardless of its colors, we also gain more total encodable data in a same-sized field.

The shapes can be made via air gapping, changing the surface of the color from smooth to certain textures, elevation differences, outlining it somehow, or some other means. Again, an engineering issue, not critical for the science.

Shapes can also be 3D if a method is used that can cleanly identify the whole shape.

Shapes can dramatically increase the storage capacity of a PHD. Combined with colors the gains are smaller (in our 10x10 scenario, instead of 100 bits we would have 2300 bits, and one shape would give 2277 + 6 bits, an unequal trade). Eventually, as we tried more and varied shapes, we could possibly add 5% more to the total data encoded. There are possible ways to increase this, but I will make sure the current post is understood before continuing on shapes.
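
Checking that tradeoff with the same numbers (a sketch; it assumes 23 bits per colored cell and the 90 placements from the earlier example):

import math

bits_per_cell = 23                      # ~10 million distinguishable colors
no_shape = 100 * bits_per_cell          # 2300 bits for the plain 10x10 grid
with_shape = 99 * bits_per_cell + math.floor(math.log2(90))
print(no_shape, with_shape)             # 2300 2283 -> the "2277 + 6", an unequal trade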


Scaling is the third main concept. It has two main subconcepts and a lot of minor ones. The two can be described as dimensions and "skipping". The two work exceptionally well together.

Dimensions is simple if you try. Take a 10x10x10 box of bits. This is 1000 bits total. Take the top 10 off and apply them to the side. Now we have a 9x12x10 box with some of the final column absent. Instead we could have taken just 1 bit and applied it to the side (in up to 100 different locations).

Take 5 bits, only 5 bits, and make every possible pattern... there are more than 256 possible layouts we can do. This scales truly fast at larger bit counts.

Skipping has two possible functions. First, it can physically remove a piece, and second, it can insert an empty spot in a PHD section.

This has been a hard concept for some, so I will take a moment to describe it with an analogy. Take a sheet of graph paper and cut some sections out. If it was 10x10, we now have fewer than 100 squares. The other way is to cut all 100 squares apart separately and then put them together again, but sometimes leave space between them. In the second example our dimensions grow but the cell count does not.

The math is simple for adding empty cells; to a certain extent we now have "trinary" instead of binary. This means we gain 50% more data saved for the whole dimensions (one should limit it so no "empty cells" go into or beyond the final row or column, since that is the jurisdiction of the first aspect of scaling, dimensions).
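
For what it's worth, the idealized arithmetic gives a bit more than 50% (a Python sketch; it ignores whether a reader can reliably distinguish the three states):

import math

bits_per_trinary_cell = math.log2(3)
print(bits_per_trinary_cell)              # ~1.585 bits per cell
print(100 * (bits_per_trinary_cell - 1))  # ~58.5% gain over binary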

Now, reducing our counts also gives us new potential outcomes, in trinary, but keeps our current dimensions.

Both work in similar manners but when combined with established "usual dimensions" this can truly lead to two vastly different outlooks.

Due to scaling we have the potential to have unusual modifications to our boundaries. This means we should use a system of subfiles instead of a full-sized one. Subfiles, however, also allow us some play in adding information (this is a subaspect of scaling). These boundaries can be used to reduce or increase size to take advantage of number plays, actual scaling issues, and situations where a smaller size can be represented with a far smaller set of bits.

We can assume borders, represent them numerically, represent them physically, use an unused color to identify borders (one spot would work in many cases, two in most, I presume), or otherwise provide a means to identify borders. This is both a math and an engineering question.


The method works with existing tech in some regards. Why, a terabyte-sized file with a single removed bit (pit) in a physical drive can mean 40 bits of info. 2 bits removed is beyond the computational abilities of my computer. This is of course hard-coded data, and yes, a means to identify hard-coded data would be needed (software, probably), but it is very likely modern drives can be made to have portions of the OS hardcoded to the hard drive.
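
A quick check of that figure (a Python sketch; it assumes a terabyte means 8e12 bit positions, any one of which may be removed, which is one reading of the claim):

import math

positions = 8 * 10**12
print(math.log2(positions))                     # ~42.9 bits from one removal (the "40 bits")
two_removed = positions * (positions - 1) // 2  # unordered pairs; computes instantly
print(math.log2(two_removed))                   # ~84.7 bits from two removals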

A record album uses grooves to indicate changes in the music; if you skipped portions (like between songs) you would gain the effect of scaling. Scaling can work with CD-ROMs as well. Race Track memory... remove a string or direct a series of strings sideways instead of up/down. Memristor gains by removal or filling, and in theory could use colors as well.



For compression experts - this hard-encoding method allows us to break the software barrier of entropy with physical representation, but it still keeps the rule of the pigeon holes. We can now just make the pigeon holes have different patterns and call this a way to encode data.


For encryption experts - this PHD pushes a single-bit change to affect as many as about 60-70 bits. Later I can show more gains for encryption which should make you understand that this system can be quite uncrackable.


Now before others say it... I know the system will be extremely slow to read and impossible to rewrite. I understand that software will be needed to read the drives. This creation may only be feasible for large firms that store excess information for years without viewing it.

This includes banks, hospitals, governments, prisons, financial firms, security companies, courts, and more.


Selling Points:
The big selling point is going to be pricing. With current tech, storage space might cost big entities $50 per terabyte. My system might ultimately drop to $2 to $5 per terabyte, after running about $20 a terabyte at first. (These are estimations based on plastic prices, weights of current drives, estimated writer costs distributed over many PHD drives, reader costs over many PHD drives, and the predicted required outer materials of the drives.)

That said, the smaller physical size of the drives will require less space to store. Storing data in a vault means space is at a premium. By reducing space we save companies money.

Additionally, there are companies, such as banks, which desire strong encryption on all data. My PHD allows for a tremendous amount of built-in encryption (explained later).

The ability to hardcode data onto existing drives at no size increase to the existing hardware will be a potential winning point. While technology has been spinning away from the idea of hardcoding data... in general... a backup of specific OS code in hardcode would be very attractive.

Shelf life would be extremely long. Plastics, composites, metals, and other plausible materials can last centuries as components of the PHD. If no magnets, spin functions, or other electronic equipment is used inside the PHD, then it can (except for some read techniques) never wear down. Think of historians some 5,000 years from now using a PHD drive as their source!

In time the read speed will dramatically increase. Moore's Law (where the number of transistors doubles every two years) means that GPUs, and to a lesser predicted extent CPUs, will eventually be able to process the data at speeds humans are accustomed to now. RAM will also have a significant place in the processing and is also going to improve over time. Of course, a dedicated read chip might be the fastest solution, but it is not necessary for the PHD to work.

The PHD can work with "off the shelf" technologies and some software encoding. Where most new technologies require extensive Research & Development costs, the PHD can be launched in less than a year.


__________

I expect an initial version to get 150-200% of the data density of existing storage methods per square inch, at a much reduced cost. Eventually this should exceed 500% as 3D printing improves.


______________________
Forgive typos and grammar; I'm on a smartphone with a thumb in a brace...
 

SecurityTheatre

Senior member
Aug 14, 2011
672
0
0
There are a lot of wild assumptions here.

1) Color information cannot be reliably read back to 30 bits of precision. I would argue, if you want accuracy, you can only read to about 8-10 bits of precision.

2) If you want "shapes", you give up linear density. You can do maximum linear density OR shapes, but you can't do both.

3) There are at least a dozen MASSIVE "claims" with no substantiation. With each one of these, I say "huh?" and/or "bullshit, prove it".

It has properties compression experts should recognize and in its own way -physically- compresses data to a higher density than existing technology does.
huh?

For the same per-square-inch density, a magnetic-pit drive will get roughly 20 bits for the 23 bits we could color-encode.
Prove it. Show us the math. I don't think you can get the kind of density you see with a modern PMR magnetic drive or solid state flash arrays. A modern drive platter stores well above 250Gbit per square inch. Print that with an inkjet. I dare you.

This means that if two colors are used (one for 1 values and one for 0 values), we now have 98 + 6 bits (rounded down from 90 outcomes) encodable based on where that shape is.
Why waste density by putting borders between data cells? How does that make any sense? You cut your density in half to make shapes that can double your storage. How are you encoding this? Is it a hash table concept? What does the hash table mean?

The shapes can be made via air gapping, changing the surface of the color from smooth to certain textures, elevation differences, outlining it somehow, or some other means. Again, an engineering issue, not critical for the science.
You are trying to maximize density, leaving a "space" for an "outline" tells me that you're thinking of this as a big sheet of graph paper, not micro-scale encoding of information using 30um dots as you said in a previous paragraph.

Eventually, as we tried more and varied shapes, we could possibly add 5% more to the total data encoded.
Please present the mathematical derivation of this 5% number. Made up numbers make your idea seem absurd.

Dimensions is simple if you try. Take a 10x10x10 box of bits. This is 1000 bits total. Take the top 10 off and apply them to the side. Now we have a 9x12x10 box with some of the final column absent. Instead we could have taken just 1 bit and applied it to the side (in up to 100 different locations).

This is absurd. Presuming that every space is intended to write information (maximum density), then simply moving bits around (and leaving the empty space for them to move into) drastically reduces bit density without gaining back the loss through your "dimensions" idea.

Second, 10x10x10 implies this is 3D (as in, a cube, not a square). A 10x10 box printed on plastic is 100 bits total. If you leave a "gutter" around the outside of this grid, you are actually using a 12x12 grid and only filling in the middle 10x10. Presumably, you can increase the data stored to 144 bits by changing "shapes" in these 12 columns.

But if you simply use those 12 columns for bitwise data, you also get 144 bits, without the complexity of encoding hashes for identifying all these "shapes". The idea isn't novel, either. There is a bit of research on using "shapes" for compression. I don't have any handy, but I think the mathematics indicates that decent compression on the data itself leaves "shapes" of marginal value for data compression.

Skipping has two possible functions. First, it can physically remove a piece, and second, it can insert an empty spot in a PHD section.

Why does removing bitwise information increase density somehow? Doesn't follow.

The math is simple for adding empty cells; to a certain extent we now have "trinary" instead of binary.

Are you saying that if you have two gaps, you simply leave one blank and you have 3 possible levels of information encoded in those two spaces? "Trinary"? But if you simply used both spaces for binary data, you would encode FOUR possible levels of information. So why do it?



Both work in similar manners but when combined with established "usual dimensions" this can truly lead to two vastly different outlooks.

I can't parse this sentence. It doesn't seem to have any information in it.

Due to scaling we have the potential to have unusual modifications to our boundaries.

No it doesn't. You are thinking of this as a big giant bit of graph paper.

I want you to think of it as discrete pixels (because it will be at the appropriate scale to make it have useful density). You have something like 30um spaces. You either put a dot there or you don't. You don't get to arbitrarily modify those gaps; they're the smallest unit that you have to work with.

The method works with existing tech in some regards.
Really? Please show me the tech.

Why, a terabyte-sized file with a single removed bit (pit) in a physical drive can mean 40 bits of info. 2 bits removed is beyond the computational abilities of my computer.
What? Did you overflow Windows calculator? Your computer can calculate the number of atoms on Earth, don't claim that you overflowed it in order to get out of following your claims. And what are you talking about here, anyway?

A record album uses grooves to indicate changes in the music; if you skipped portions (like between songs) you would gain the effect of scaling.
What? Skipping a portion of a record does not increase storage density.



For compression experts - this hard-encoding method allows us to break the software barrier of entropy with physical representation, but it still keeps the rule of the pigeon holes.
The "rule of the pigeon holes"? What is that? Did you make that up? When I google this phrase, the ONLY RESULT ON THE INTERNET is this post.

We can now just make the pigeon holes have different patterns and call this a way to encode data.
You can call it anything you like, but that doesn't make it work.


For encryption experts - this PHD pushes a single-bit change to affect as many as about 60-70 bits.
This has nothing to do with encryption, but does underscore how error prone your method of trying to encode data into colors and shapes would be.

Later I can show more gains for encryption which should make you understand that this system can be quite uncrackable.

I seriously doubt this claim. There is a lot of complex discrete math around encryption. Claiming something "uncrackable" requires extraordinary proof and a solid understanding of the mathematics and the statistical approach behind cryptography. I'm not convinced you have either.


Now before others say it... I know the system will be extremely slow to read and impossible to rewrite.
No actually, these are really minor problems in your scheme. I think they are the least of your issues.

That said, the smaller physical size of the drives will require less space to store. Storing data in a vault means space is at a premium. By reducing space we save companies money.

Current storage beats 250Gbit per square inch. Are you claiming to beat this density? I would like to see your math, in detail.

I would also like to see your printer. It seems unique.

My PHD allows for a tremendous amount of built-in encryption (explained later).

I disagree. Sure, you can obfuscate data, but the ENCODING of the data does not play a significant role in encryption.

Plastics, composites, metals, and other plausible materials can last centuries as components of the PHD.

Please tell me about the ink technology you are using to ensure accurate color reproduction across all devices (with more than 3-4 bits of precision) AND last for 5000 years without any change to the color. I'm not sure this is possible. As far as I'm aware, the best quality, neutral-pH archival ink can last 50 years without fading, but only if UV-protected and handled carefully.

For something to last hundreds of years, it generally requires extraordinary climate controls and UV/light protection and extremely careful handling. Especially, when you're arguing that one ten millionth of a shade of color change alters the meaning of the data. Wow, that's some special ink you have developed. (You have the ink for this, right?)

The PHD can work with "off the shelf" technologies and some software encoding. Where most new technologies require extensive Research & Development costs, the PHD can be launched in less than a year.

I REALLY want to see your "off the shelf" scanner that can reliably read better than 250Gbit per square inch.

Can you give us a link to it?

Oh yeah, and your 500-year ink.

Also, the scanner that can determine 30 bits of color depth (or even 8 bits) reliably.

Also, the printer that can be calibrated to within 1-bit of precision on a 24-bit scale.

Also, one that encodes color information without using pairs of CMYK dots (which only form deep colors because of the human eye's limited retinal resolution).



In fact, would you please (assuming a 1000x1000 grid of dots with no borders, dividers, or variable spacing) demonstrate that your idea of "shapes" and "spacing" and such things actually stores more information than simple bitwise binary using the entire space.

Thanks.


FYI, I only react so strongly to this because

1) You have claimed to have filed a patent on it. I do not react well to frivolous patents and I think they are a bane of our society.

2) You make claims with a high degree of certainty in your language, but do not back them up with data or math.

3) Your username is arrogant as hell
 
Last edited:

sm625

Diamond Member
May 6, 2011
8,172
137
106
There is no way you could economically produce that many color variations. It would cost too much to encode/write the data. Imagine how much it would cost for a hamburger at Delucci's if Delucci offered 687 different kinds of pickles and 906 different kinds of onions, 374 types of bacon, 7095 types of cheeses, etc. It would cost at least $500 for a burger.

Reading it would be cheaper, but as was said it would still be difficult to accurately read the colors. The readings would be affected by ambient lighting conditions, reflections, etc.
 
Last edited:

SecurityTheatre

Senior member
Aug 14, 2011
672
0
0
There is no way you could economically produce that many color variations. It would cost too much to encode/write the data. Imagine how much it would cost for a hamburger at Delucci's if Delucci offered 687 different kinds of pickles and 906 different kinds of onions, 374 types of bacon, 7095 types of cheeses, etc. It would cost at least $500 for a burger.

Reading it would be cheaper, but as was said it would still be difficult to accurately read the colors. The readings would be affected by ambient lighting conditions, reflections, etc.

To be fair, a modern scanner (or a presumed "read head" in this technology) would use its own light source and could avoid reflection problems.

But the bit depth of color recording is an accurate criticism. I would concede maybe 4-6 bits of depth found in color, but not more, and not for a long period, as it would fade due to oxidation unless kept in a vacuum, which pretty much rules out most printing technology and makes it complex and impractical.
 

SecurityTheatre

Senior member
Aug 14, 2011
672
0
0
Wait, why are you bringing up 3D printers? I thought this was about printing on a flat plastic substrate?
 
Last edited:

SecurityTheatre

Senior member
Aug 14, 2011
672
0
0
I did some further research on this, and I've moved beyond skeptical to completely certain that this is a silly adventure in someone's dream, with no grounding in reality.

A reasonable maximum resolution of 1.44 million dots per square inch, from the dye-sublimation printer systems currently attainable in more than 4 colors, doesn't afford sufficient density to beat even 10-year-old hard drives. Using color schemes may increase the storage density, but an inkjet won't suffice for this because it uses a halftone color pattern, so relying on lower-resolution dye sublimation is the only option I can think of. This won't work on most plastic substrates, but I'm sure one could be found that might work.

In order for this to be remotely feasible, the printer technology must be outlined. Otherwise, I believe it to be impossible.

Second, encoding schemes using "shapes" and "patterns" and "spacing", etc are not plausible, unless they represent a generalized revolution to the compression of data (beyond just via the use of colored ink).

Imagine this... Your paper/plastic substrate is a grid of potential points consisting of 1.44 million dots per square inch. If you can take these 1.44 million dots and arrange them in a manner that achieves a better compression ratio than simple binary encoding, then the same technique could be applied to a hard drive (which contains 250 billion dots per square inch). Even assuming there is technology to provide accurate and long-lasting shades of color, I doubt the capability of distinguishing them accurately beyond about 4 bits per color channel (16 shades) with any sort of portability or accuracy. This provides approximately 12 bits of depth, offering 17.3 million bits per square inch. We're already way beyond the capability of most modern scanners, but hey, let's just make a wild assumption that this is possible.

Now, once you put in the really robust error checking you would need in this scheme, we have about 13.8 million bits per square inch (assuming a 1-to-4 ECC scheme).
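
The arithmetic, for anyone following along (a Python sketch; the dot pitch, bit depth, and 1-to-4 ECC figures are the assumptions stated above):

dots_per_sq_inch = 1200 * 1200           # ~1.44M dots at 1200x1200 dpi
bits_per_dot = 3 * 4                     # 3 color channels, 4 bits each
raw = dots_per_sq_inch * bits_per_dot    # ~17.3M bits per square inch
data = raw * 4 // 5                      # 1 ECC bit per 4 data bits -> ~13.8M
print(raw, data)
print(250e9 / data)                      # a 250Gbit/sq-inch platter is ~18,000x denser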

It's beyond me how you claim to scale this by a factor of 150 from here.

Even ASSUMING you can read 8-bits per channel of color, that's about double.

And if you're claiming some sort of novel encryption scheme using "shapes", you are going to have to demonstrate the mathematical foundation for it. If you do not, your patent is going to be rejected for being too general. After all, if you don't demonstrate your compression scheme in concrete ways, your encoding is EXACTLY the same as the "Rainbow Storage" that was proposed (and debunked) in 2006.

But, if you really do have some novel encryption scheme, I'm very interested in reading the paper about how it works. You should patent it (seriously), because it would be a big deal in the industry.

But you'll need at least state diagrams of how compression is determined. In order to sell it to a company, you'll need a detailed study of compression density and statistical models of how various types of data input affect compression effectiveness.

I might refer you to a REAL compression technique here:

http://www.google.com/patents/US4464650

Good example of a reasonably well done patent.
 
Last edited:

TheEinstein

Member
Jan 12, 2009
38
0
0
I didn't read all of it, but this sounds an awful lot like: http://en.wikipedia.org/wiki/Rainbow_Storage


Incredible find!

I did search after search and never found that. I still cannot find the patent on it.

Fortunately there are large differences: he made his to be temporary, I made mine to be long-term; he made his require symbols (shapes, but of specific mandatory types) where mine does not (and my shapes are allowed to look random in nature); and his color coding is very small compared to my desires.

Still, I have tried reaching out to him; maybe he and I combined can make this a working masterpiece!
 

TheEinstein

Member
Jan 12, 2009
38
0
0
There are a lot of wild assumptions here.

1) Color information cannot be reliably read back to 30 bits of precision. I would argue, if you want accuracy, you can only read to about 8-10 bits of precision.

We have advanced a long way from that

http://www.google.com/shopping/supp...=high+resolution+spectrometer&oq=spectrometer


2) If you want "shapes", you give up linear density. You can do maximum linear density OR shapes, but you can't do both.

Shapes can be made SMALLER than the color - e.g., cones, squares, circles, and more. Or you can elevate/descend for mass shapes. Shapes are a wide spectrum of options.



3) There are at least a dozen MASSIVE "claims" with no substantiation. With each one of these, I say "huh?" and/or "bullshit, prove it".

huh?

Prove it. Show us the math. I don't think you can get the kind of density you see with a modern giant-magnetoresistive magnetic drive or solid state flash arrays. A modern drive platter stores about 250Gbit per square inch. Print that with an inkjet. I dare you.

Why do you want to make a Rainbow instead of a PHD? 3D printing is not inkjet. Your ignorance is showing.

Why waste density by putting borders between data cells? How does that make any sense? You cut your density in half to make shapes that can double your storage. How are you encoding this? Is it a hash table concept? What does the hash table mean?

A border is an option, not a necessity. It is plausible that on occasion you can create a file which is 'smaller' than your median size due to my techniques. If this happens, a border will help trim size off. It is not a necessity but is discussed to show thoroughness.

You are trying to maximize density, leaving a "space" for an "outline" tells me that you're thinking of this as a big sheet of graph paper, not micro-scale encoding of information using 30um dots as you said in a previous paragraph.

The graph sheet reference is for the slower to understand; clearly I was not thinking to actually use graph paper! You are not getting some concepts. Read it four times, and instead of being hostile, ask nice questions.

Please present the mathematical derivation of this 5% number. Made up numbers make your idea seem absurd.

Do you realize how many hand-calculated shapes that would be? No you do not, or you would not be asking that. I am a math theorist, not a programmer; I cannot create a software modeler, but I can go off some of the basic-level testing I have done.


This is absurd. Presuming that every space is intended to write information (maximum density), then simply moving bits around (and leaving the empty space for them to move into) drastically reduces bit density without gaining back the loss through your "dimensions" idea.

If the drive is hard-encoded, then removing means different possible outcomes. If I make a 100-brick wall, and an artist wants 1 removed, there are 100 ways to do that. 2 removed becomes far more. This is also a way to encode data. I am terribly sorry, but this seems to be a concept you cannot understand, and it is a way to encode data.

Second, 10x10x10 implies this is 3D (as in, a cube, not a square). A 10x10 box printed on plastic is 100 bits total. If you leave a "gutter" around the outside of this grid, you are actually using a 12x12 grid and only filling in the middle 10x10. Presumably, you can increase the data stored to 144 bits by changing "shapes" in these 12 columns.

There are some possible ways to gain by looking at it from a 3D viewpoint. I do that a lot. I also secured the Patent to cover that and other views.

You also presume against "PHYSICAL" dimensions. If I have a room that is 10 feet wide and 10 feet long, the shelves will store things up to 5 feet high, and the drives are variable in actual size...

Think on it. And then apologize.

But if you simply use those 12 columns for bitwise data, you also get 144 bits, without the complexity of encoding hashes for identifying all these "shapes". The idea isn't novel, either. There is a bit of research on using "shapes" for compression. I don't have any handy, but I think the mathematics indicates that decent compression on the data itself leaves "shapes" of marginal value for data compression.

The scope of the patents I found using shapes is to IDENTIFY existing shapes, not to encode with shapes. The only exception is the Rainbow Storage device http://it.toolbox.com/wiki/index.php/Rainbow_Storage





Why does removing bitwise information increase density somehow? Doesn't follow.
Yes you do not follow a lot.



Are you saying that if you have two gaps, you simply leave one blank and you have 3 possible levels of information encoded in those two spaces? "Trinary"? But if you simply used both spaces for binary data, you would encode FOUR possible levels of information. So why do it?

I have 2 sets of spaces for 2 bits. In space 1 I use both bits; in the other space I use 1 bit. This is not 8 possible outcomes (3 bits); instead it is 81 possible outcomes (6.3398~ bits). The relative theory here is simple if you can understand the math. A blank spot is a possible 3rd outcome, and any of the 4 can be blank; it does not matter if all of them are.
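
Reading the example as four positions that each hold 0, 1, or blank (that reading is an assumption; the wording above is loose), the arithmetic checks out:

import math

states = 3 ** 4
print(states)              # 81
print(math.log2(states))   # ~6.3399 bits, the "6.3398~" above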


As for the rest: you keep attacking, you clearly lack the binary-based math skills to understand, and I will wait for a better reply.


And my nickname is because I love math, math is awesome, and Einstein loved math.
 

TheEinstein

Member
Jan 12, 2009
38
0
0
Ok SecurityTheatre, you are a show... that's for sure.

Your constant attacks show you have no clue how this works, nor a care to know. You wish to just act like you have knowledge, when clearly you are severely lacking.

http://gizmodo.com/the-worlds-first-color-3d-printer-is-even-lovelier-tha-486199404
http://www.botobjects.com/specification
http://3dprinters.ws/

25-micron full color using 5 base colors. The first link is the printer, with specifications for it next. Then a list of the top ten 3D printers.

Color Spectrum stuff
http://en.wikipedia.org/wiki/Color
http://en.wikipedia.org/wiki/Visible_spectrum
 

SecurityTheatre

Senior member
Aug 14, 2011
672
0
0
Ok SecurityTheatre, you are a show... that's for sure.

Your constant attacks show you have no clue how this works, nor a care to know. You wish to just act like you have knowledge, when clearly you are severely lacking.

http://gizmodo.com/the-worlds-first-color-3d-printer-is-even-lovelier-tha-486199404
http://www.botobjects.com/specification
http://3dprinters.ws/

25-micron full color using 5 base colors. The first link is the printer, with specifications for it next. Then a list of the top ten 3D printers.

Color Spectrum stuff
http://en.wikipedia.org/wiki/Color
http://en.wikipedia.org/wiki/Visible_spectrum

Thanks for the encyclopedia entry on color... I had no idea! :whistle:

The BotObjects printer lays down a halftone pattern of dots. The fact that it can represent "full color" works the same way it does on an inkjet: by halftone, relying on the low resolution of our eyes. At a micro-scale, it's just a bunch of single-color dots. Look at the operation of the device. It lays down one color, then the print head moves off, purges, cleans, and starts with another color. Repeat for 5 primary colors. It's an astoundingly slow process. Still, this gives you 6 potential permutations (blank + 5 colors), which offers a bit over 2 bits of depth.
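
The depth arithmetic, for the record (a quick sketch):

import math
print(math.log2(6))   # ~2.58 bits per site from 6 states (blank + 5 colors)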
 
Last edited:

SecurityTheatre

Senior member
Aug 14, 2011
672
0
0
Listen, I'm sorry I was nasty in my first response, and I'm curious what you have to say.

Let me be clear, I have a background in engineering, math and computer system design, and your claims are pretty far-fetched.

But I want to say this.

The concept of "shapes" and "spacing" as a form of compression flies in the face of basic information theory.

Presuming that you have sufficient resolution to resolve a "shape", you also have the resolution to use that entire matrix of elements as binary storage elements.

By filling the entire matrix, the binary storage elements should contain as much or more information than the "shapes" composed of those binary elements.

If the drive is hard-encoded, then removing means different possible outcomes. If I make a 100-brick wall, and an artist wants 1 removed, there are 100 ways to do that. 2 removed becomes far more. This is also a way to encode data. I am terribly sorry, but this seems to be a concept you cannot understand, and it is a way to encode data.

This is called binary. It's not compression. I do like your 100-brick analogy; let's run with it.

With 100 "bricks" (substitute "printed dots" or "RAM cells" or whatever), you have 2^100 possible representations. This is known as 100 bits of information.

You are correct that there are 100 ways to remove 1 bit; there are 4,950 ways to remove 2 bits and 161,700 ways to remove 3 bits. There are 2^100 ways to arrange this array of 100 bits. This still isn't compression. It's binary representation of 100 bits... which results in 2^100 permutations. That is to say, there are 1,267,650,600,228,229,401,496,703,205,376 possible permutations of this 100-brick wall.

This is really elementary level information theory.

There exists exactly 2^100 ways to organize those bits.

There exists ONLY 2^100 ways to organize those bits.

Arranging them in a series of shapes doesn't change this.

There are exactly 100 bits of information.

Explain to me how "removing bits" or "making gaps" or "adding borders" changes this (presuming our "brick" is the smallest element possible to use)? Stick to the brick analogy. I understand your claim to increase bit depth by the use of color, but let's just stick to the bricks for now, for simplicity.
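
Here is that counting made concrete (a Python sketch; math.comb counts unordered removals):

from math import comb

n = 100
print(comb(n, 1), comb(n, 2), comb(n, 3))  # 100 4950 161700 ways to remove 1, 2, 3 bricks
print(sum(comb(n, k) for k in range(n + 1)) == 2 ** n)  # True: every removal pattern is
                                                        # already one of the 2^100 states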

Convince me you're right, because I still don't buy it (neither does my information theory textbook).
 
Last edited:

SecurityTheatre

Senior member
Aug 14, 2011
672
0
0
Incredible find!

I did search after search and never found that. I still cannot find the patent on it.

Fortunately there are large differences: he made his to be temporary, I made mine to be long-term; he made his require symbols (shapes, but of specific mandatory types) where mine does not (and my shapes are allowed to look random in nature); and his color coding is very small compared to my desires.

Still, I have tried reaching out to him; maybe he and I combined can make this a working masterpiece!

He eventually demonstrated something on the order of a few megabytes of storage per page. He made claims that in the future, using "special technologies", you could make drives or use various kinds of substrates, including textures, to increase storage density.

The claims he made (250GB per page) never materialized, and I think he gave up the research because he could only ever get a few MB per page with current technology.

The idea of printing data on paper (or another substrate), or even 3D-printing it, isn't terribly novel. The idea of using colors to represent the data isn't either. Nor is using shapes. But the idea that it can exceed solid-state and/or magnetic storage in density is extremely questionable, and the concept of using geometry for order-of-magnitude compression ratios is also dubious (convince me otherwise).

I'm a bit put off by your not having seen this. I remember it being on the front page of every technology site out there for a few days in 2006, including Slashdot, Gizmodo, and a few others. I also found it about 2 minutes after I looked at your post, by searching something like "storage using color printers"; I think it was the first result on Google.
 
Last edited:

SecurityTheatre

Senior member
Aug 14, 2011
672
0
0
3D printing of the drive entirely.

substrate... lol

Oxidization... lol

The primary cause for color fading is the oxidation of dye or ink.


And... you have a substrate, even if you're using a dyed-polymer extrusion 3D printer.

I'm not sure why this was so funny... These are relevant issues that are unaddressed, yet you make claims that seem to indicate you have a way around them.
 

TheEinstein

Member
Jan 12, 2009
38
0
0
I won't reply to all your messages, just going to give you one as a test

A magnetic hard-drive is made up of magnetic "Pits". An example of a possible drive section


. . . . . . . .
. . . . . . .
. . . . . . . .
. . . . . . .

Note there are 30 pits demonstrated here, and each pit can be either a 0 or a 1 magnetically. Aka 30 bits.

Now if you were to erase only 1 pit, and if you were sure that you could identify that a pit was missing where in fact it should be... you have 30 different ways to lay it out. For this example 'o' demonstrates an erased pit.

o . . . . . . .
. . . . . . .
. . . . . . . .
. . . . . . .



. o . . . . . .
. . . . . . .
. . . . . . . .
. . . . . . .



. . o . . . . .
. . . . . . .
. . . . . . . .
. . . . . . .



. . . o . . . .
. . . . . . .
. . . . . . . .
. . . . . . .



. . . . o . . .
. . . . . . .
. . . . . . . .
. . . . . . .



. . . . . o . .
. . . . . . .
. . . . . . . .
. . . . . . .



. . . . . . o .
. . . . . . .
. . . . . . . .
. . . . . . .



. . . . . . . o
. . . . . . .
. . . . . . . .
. . . . . . .



. . . . . . . .
o . . . . . .
. . . . . . . .
. . . . . . .



. . . . . . . .
. o . . . . .
. . . . . . . .
. . . . . . .


Since the empty spot can take up any of the 30 locations, we have 29 "pit-based bits" and 30 other possible outcomes. This means 29 encodable bits plus 4.90~ bits of hard-coding.


If you follow this, please say so and I will then answer one other issue. We will keep this up until I am assured that you are indeed keeping up; then I will respond more freely.
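
The arithmetic for this example, for anyone who wants to verify it (a Python sketch; it assumes a reader can reliably tell an erased pit from a 0 or a 1 at all 30 sites):

import math

total_states = 30 * 2**29       # choose the blank site, then fill the other 29 pits
bits = math.log2(total_states)
print(bits)                     # ~33.91, i.e. 29 + log2(30) = 29 + 4.90~
print((bits / 30 - 1) * 100)    # ~13% over plain 30-bit binary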
 

TheEinstein

Member
Jan 12, 2009
38
0
0
And yes, this is a mere 13% upgrade to the total bits. This is just one function, and not a fully fleshed-out version. I hope you can keep up.
 

SecurityTheatre

Senior member
Aug 14, 2011
672
0
0
I won't reply to all your messages, just going to give you one as a test



A magnetic hard-drive is made up of magnetic "Pits".

No it's not.

A modern PMR drive platter is made of a microscopically fine ferromagnetic granular powder that is laid down in a uniform distribution through vapor deposition and crystallization (I don't think they still use electroplating for this...). There is no means to detect whether or not there is a "gap" in this powder (the structures are too small without using an electron microscope), even if it were possible to make a gap (which there really isn't).

But leaving this aside as just an over-simplified analogy, let's get to the actual encoding...

Now if you were to erase only 1 pit, and if you were sure that you could identify that a pit was missing where in fact it should be... you have 30 different ways to lay it out. For this example 'o' demonstrates an erased pit.

snip

Since the empty spot can take up any of the 30 locations, we have 29 "pit-based bits" and 30 other possible outcomes. This means 29 encodable bits plus 4.90~ bits of hard-coding.

If you follow this, please say so and I will then answer one other issue. We will keep this up until I am assured that you are indeed keeping up; then I will respond more freely.

In this case, each bit has 3 potential values: 1, 0, and "removed". This isn't structural; it's just trinary encoding instead of binary. Now you have 3^100 permutations. Neat. But I don't understand why you're backflipping to explain it in such complex terms, using phrases like "structure" and "shapes" and "gaps".
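
As a sanity check on that trinary ceiling (a sketch; it assumes all 100 positions really can take all three values):

import math
print(100 * math.log2(3))   # ~158.5 bits from 3^100 states, vs 100 bits in binary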


Of course, this also leads to the question, how are you encoding this in a physical medium? You are proposing 256 shades of color, plus a gap... "blank" (essentially your 257th color).

Claiming this extra depth is something novel like a "structural data encoding" scheme is probably the point of confusion. It doesn't have anything to do with "structure", but rather with just adding another value to your encoding scheme. But presuming you're printing this in 3D, what happens when you have a bit surrounded on all sides by a "blank"?

Perhaps you gather that this probably can't happen, so presumably this limitation on it being a "full" trinary implementation might lead to your calling it "structural".

If, however, it's printed in 2D using ink on paper (or any other 2D medium), you can encode a dot, with "blank" as a potential state for that dot. Again, this just adds a single additional level to whatever other levels (colors, for example) are already proposed. In a 256-color scheme, this "blank" value adds a 257th, and makes alignment and other similar things all that much more difficult (potentially making it marginal in utility).

Am I missing something?
 
Last edited:

_Rick_

Diamond Member
Apr 20, 2012
3,937
69
91
One of my key questions is how this is reasonably 3D anyway, as it's not possible to read color that is not on the surface of the structure.

Although you can use physical height in a position as part of the shape determination algorithm, this is merely another value on a 2D surface, and not a freely manipulable third dimension.
 

TheEinstein

Member
Jan 12, 2009
38
0
0
Wave of the hands... "substrate matters not, engineers for it is"

/yoda

Ok, so my bad, I was using pits from CD-ROMs.

Give me a day to respond further;

however, I think I can beat conventional drives. I just need to write a really good, clear writeup for you.

Links, numbers, facts. I will leave out trivial aspects such as "substrates"... I mean seriously, substrates? Why not anchor it, or hang it, or glue it, or mock someone for needing a substrate when it is a nonsensical question left to engineers?

Big post incoming.
 

TheEinstein

Member
Jan 12, 2009
38
0
0
Ok, time for that long post.

One square inch is 645,160,000 square microns. This is 4.16231426e+17

If I use 100-square-micron-sized colors I would have 41,623,142,600,000 color spots. This would be 119,666,534,975,000 bytes.


That comes out to a ratio of 108.8360795393 terabytes per square inch.

How does a 30 to 1 ratio of data storage sound?


Yeah dem apples are sour. And the long part is the numbers.
 

SecurityTheatre

Senior member
Aug 14, 2011
672
0
0
Ok time for that long post

One square inch is 645,160,000 square microns. This is 4.16231426e+17

If I use 100-square-micron-sized colors I would have 41,623,142,600,000 color spots. This would be 119,666,534,975,000 bytes.

What?

645 million square microns per square inch (this is accurate)... But remember, you're using structures 100 times bigger than 1 sq micron. How the hell are you getting 41 trillion dots per square inch? :whistle:

If you are, indeed, printing dots that are 100 square microns in area (which is smaller than any practical process I've ever heard of, at 10 microns across), you can fit about 6.45 million in a square inch.

Your ability to read this back with precision is questionable, but let's assume that you have the technology to both print and scan it. You still have about 6.45 million dots at best, being generous, and not including ECC.

Where did you get terabytes?

..... and ....

This is not to mention that the average dye printer is making dots of 20-50 microns in DIAMETER (400-2,500 sq microns in area).

I think with practical current technology (assuming a VERY generous 20-micron diameter, 400 sq microns in area), you get about 1.6 million dots per square inch. At 50-micron diameter, you're getting 260,000 dots per square inch.

Checking our math, this jibes well with the idea that 20 microns is a bit better than 1200x1200 dpi: 1200x1200 = 1.44 million dots per square inch.

Compare to current PMR drive head research that is breaking 1 terabit per square inch.

To be clear, I'm not saying that storing data with printed representations is impossible; I'm saying that your density, provided an adequate level of error checking, is on the order of a few kilobytes (possibly a few megabytes with high-end technology) per square inch. This is still just above floppy-disk territory (and probably with a higher failure and data-loss rate).
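
And the dot-count arithmetic in one place (a sketch; the dot sizes are the assumptions above):

sq_microns_per_sq_inch = 25_400 ** 2      # 645,160,000
for side in (10, 20, 50):                 # dot size in microns across
    print(side, sq_microns_per_sq_inch // side**2)
    # 10 -> 6,451,600   20 -> 1,612,900   50 -> 258,064 dots per square inch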

Yeah dem apples are sour. And the long part is the numbers.



Arrogance makes your errors more ridiculous, and more annoying. Just FYI.
 
Last edited:

Hitman928

Diamond Member
Apr 15, 2012
5,600
8,790
136
LoL, Security, you've spent way more time on this than it deserved.

This part especially amused me:

I mean seriously, substrates? Why not anchor it, or hang it, or glue it, or mock someone for needing a substrate when it is a nonsensical question left to engineers?

On a slightly more serious note, if indeed you do have a patent submitted and can figure out a revolutionary new way of physical data storage, I hope we see the fruits of your efforts in the world soon. If not, well, then thanks for the amusing thread.
 

SecurityTheatre

Senior member
Aug 14, 2011
672
0
0
Wave of the hands... "substrate matters not, engineers for it is"

http://en.wikipedia.org/wiki/Substrate_(printing)

Being an engineer (although not a mechanical or chemical engineer, to be fair), I'm telling you that I believe many of your original claims (16 million colors of accurate reproduction at sub-micron sizes, fade-proof storage for hundreds of years, etc) are absurd.

That's all. If you're going to make a claim, you have to be able to defend it. If you cannot, or you think the technology is for someone else to figure out, then don't make the claim.

If you want to argue for a method of encoding, that's great. Write it up as a method of encoding data using a physical representation. Don't make a bunch of absurd propositions about what shape the drive will be, how large it will be, how it will work mechanically, what the shelf life might be, etc., unless you have some evidence that this is even plausible.



Links, numbers, facts. I will leave out trivial aspects such as "substrates"... I mean seriously, substrates? Why not anchor it, or hang it, or glue it, or mock someone for needing a substrate when it is a nonsensical question left to engineers?


The data storage medium (substrate) is one of the two cores of your whole patent claim, given that a huge part of your claim is about printing density, color accuracy, fade resistance, etc.

Obviously, the other is the encoding (which we are talking about separately).
 
Last edited: