Originally posted by: soccerballtux
Mind telling us your application? Sounds fun.
First some handy tools to get to know the hardware, and once I feel comfortable with it: robotics.
I plan on buying this development board to start: the SAM7-P256.
It may not be the most modern ARM part today, but it is a good start.
And it has DMA, which comes in handy for data transfers without putting a burden on the CPU.
Olimex
I can design PCBs, electronic circuits, software and some basic mechanical devices.
And I will use those skills to design sensor circuitry with USB connections.
I just want to build some good sensor/actuator controls and connect them to a big brain, which might be the EeeBox. But I might as well end up with another x86 board. I am still hoping AMD wakes up and comes out with a 5 W version of the Athlon. I think an Athlon XP Barton-style CPU built on a modern process could easily be a 2.5 to 5 W CPU while being four times as fast as an Atom. But unfortunately AMD does not see the netbook market, or the very low power/high performance x86 market. Once Intel has a huge market share there, AMD will wake up.
When it comes to robotics, I think the trick is in replicating the outside world in compressed form inside a 3D world in memory/on the hard drive. Our brains do that in essence. Everything we know around us becomes an extension of our inner representation of our body.
And since a lot of people build 3D engines, what I would like to do is build a robotic device that is able to map the world around it using only what is relevant.
With, for instance, a set of rules that are basically questions.
Like for example: "What is solid?"
The idea is to create a lot of lookup tables where each table is a question. And each question is based on a sensor or a combination of sensors. During mapping, multiple sensors give readings and a structure is built up from lookup-table entries. The combination of these lookup-table entries is then stored in the 3D representation. When you then run the 3D engine in a simulation, it should be possible to get from point A to point B in the real world.
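To make the idea concrete, here is a minimal sketch in Python. All the names, sensors and table entries are my invented examples, not a real implementation: each "question" is a lookup table over quantized sensor readings, and each voxel of the 3D map stores the answers observed there, so a simulated planner can later decide which voxels are traversable.

```python
from collections import defaultdict

def quantize(value, step):
    """Bucket a raw sensor reading so it can index a lookup table."""
    return round(value / step)

# Each "question" pairs a sensor with a lookup table:
# quantized reading -> answer. The entries here are invented;
# in the real system they would be filled in while mapping.
QUESTIONS = {
    "is_solid": ("sonar", 0.5, {0: True, 1: False, 2: False, 4: False}),
    "is_level": ("gyro", 5.0, {0: True, 1: True, 4: False, 9: False}),
}

def observe(voxel_map, voxel, readings):
    """Answer every applicable question and store the answers in the 3D map."""
    for name, (sensor, step, table) in QUESTIONS.items():
        if sensor in readings:
            key = quantize(readings[sensor], step)
            if key in table:
                voxel_map[voxel][name] = table[key]

# Build a tiny map: voxel (x, y, z) -> dict of question answers.
voxel_map = defaultdict(dict)
observe(voxel_map, (0, 0, 0), {"sonar": 0.2, "gyro": 2.0})  # close obstacle, level
observe(voxel_map, (1, 0, 0), {"sonar": 2.0})               # open space

def traversable(voxel_map, voxel):
    """A simulated planner can move through voxels not marked solid."""
    return not voxel_map[voxel].get("is_solid", False)
```

The point of the sketch is that mapping and planning share nothing but the stored answers: the planner never touches raw sensor data.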
But this all depends on basic principles.
If I ever had the chance I would like to build a highly parallel imaging camera where separate processors each have one simple task, but essentially do that task all the time. Tasks like: seek a circle, seek a rectangle, seek a triangle, seek a square, seek a horizontal plane by comparing with data from a gyroscope.
Or seek a vertical plane compared with data from a gyroscope. That is in essence what our brains do. We just program our brains as a child to not only seek circles but also seek characters like 1 or S.
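The "one simple task per processor" idea can be sketched like this; the detectors and the toy binary frame below are made up for illustration, with a thread pool standing in for the dedicated processors that would all watch the same camera feed:

```python
# Toy sketch: every detector scans the same frame for one pattern.
from concurrent.futures import ThreadPoolExecutor

# A tiny binary image; 1 = edge pixel.
FRAME = [
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
]

def seek_horizontal_line(frame):
    """Report rows containing a horizontal run of 3+ edge pixels."""
    return [y for y, row in enumerate(frame)
            if any(all(row[x + i] for i in range(3))
                   for x in range(len(row) - 2))]

def seek_vertical_line(frame):
    """Report columns containing a vertical run of 3+ edge pixels."""
    w, h = len(frame[0]), len(frame)
    return [x for x in range(w)
            if any(all(frame[y + i][x] for i in range(3))
                   for y in range(h - 2))]

DETECTORS = {"horizontal": seek_horizontal_line,
             "vertical": seek_vertical_line}

def run_all(frame):
    """Run every detector concurrently, like dedicated seek processors."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, frame)
                   for name, fn in DETECTORS.items()}
        return {name: f.result() for name, f in futures.items()}
```

In real hardware each detector would be its own processor running forever; here a thread pool just mimics that concurrency on one CPU.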
When we see something that seems solid, we touch it to make sure. But if we had sonar we could also use it to verify that what we see is a solid "thing". When you look at how children learn, or think back to how you did it when you were a child: a hot stove is not hot until you have touched it. The same goes for everything else we encounter as a child.
The tricky part is not building lookup tables or structures that map all this information. The tricky part is how to combine all that data in the inner 3D world. That is for me the big question, and I hope to be sparked with ideas once I have my advanced sensors and software capable of using and categorizing all the information from the sensors.
That is why I think GPS receivers are fun, but overkill when mapping, for instance, a room. GPS is interesting when crossing a large distance. But in essence, in that case too you translate the place you want to be into a set of coordinates and vice versa.
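That two-way translation is simple to sketch in a local room frame instead of GPS; the place names and coordinates below are made up:

```python
# Made-up room map: place name -> (x, y) in metres, robot frame.
import math

PLACES = {"charger": (0.0, 0.0), "door": (4.0, 0.5), "window": (2.0, 3.0)}

def to_coordinates(name):
    """Place name -> (x, y): where the planner should drive to."""
    return PLACES[name]

def to_place(x, y):
    """(x, y) -> nearest known place name (the 'vice versa' direction)."""
    return min(PLACES, key=lambda n: math.dist(PLACES[n], (x, y)))
```

The same lookup-in-both-directions works whether the coordinates come from GPS outdoors or from the robot's own map indoors.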
This is all of course just my opinion.