I'm taking an intro to computing class that's taught from the bottom up. We've spent half the semester on things like logic, device state, etc., and we've now moved on to machine-level programming (of a fictional computer) and soon to assembly.

As I was doing a programming assignment, I started thinking about a paper I wrote on microprocessors about four years ago. Because I had some programming knowledge, I knew a bit about floating point and decimal operations, but I wasn't really clear on everything. Take the 486 SX/DX series: everything up to and including the 486SX had no floating-point unit, while the 486DX did; yet you could do decimal operations in spreadsheets long before that. My question is, since computers like the 486SX (and the LC2, our fictional computer, which only handles 2's-complement integers) apparently had no hardware for floating point, how did they execute decimal operations? Or am I just not clear on something else here?
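The only trick I can think of is faking decimals with scaled integers, something like this rough C sketch (the names and numbers here are just my own made-up illustration, not anything from the class):

```c
#include <stdio.h>

/* Guess at how decimals might work on an integer-only machine:
   store amounts as scaled integers (cents), so adds, multiplies,
   and divides all use plain integer instructions -- no FPU needed. */

typedef long cents_t;   /* e.g. $12.34 stored as 1234 */

int main(void) {
    cents_t price = 1234;              /* $12.34 */
    cents_t tax   = price * 8 / 100;   /* 8% tax via integer multiply/divide */
    cents_t total = price + tax;

    /* Split back into dollars and cents only when printing. */
    printf("total: $%ld.%02ld\n", total / 100, total % 100);
    return 0;
}
```

But that only gets you fixed-point with a preset number of decimal places, not real floating point, so I suspect there's more to it than this.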