by Drac144 » Thu Mar 15, 2012 5:12 pm
Wow, MickRC3, you certainly had it easy. When I started programming in college in 1962, we had to break our jobs into sections because the computer was slow AND the MTBF (mean time between failures) of the hardware was much shorter than the time it took to run the job. So we would run each section of the job in order. If a failure occurred in a section, we would rerun that section until it completed - then move on to the next section until the whole job had run successfully.
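For the younger folks here, the discipline amounts to a checkpoint-and-retry loop. A rough modern sketch in Python (the section names, failure rate, and retry policy are all made up for illustration - this is the idea, not how any actual shop's job control worked):

[code]
import random

def flaky_section(n):
    """Stand-in for one section of the job; fails at random,
    the way unreliable hardware would (made-up 30% failure rate)."""
    if random.random() < 0.3:
        raise RuntimeError(f"hardware fault during section {n}")
    print(f"section {n} completed")

def run_job(num_sections):
    # Run the sections in order; rerun a failing section until it
    # sticks, then move on. Only when every section has completed
    # is the whole job done.
    for n in range(1, num_sections + 1):
        while True:
            try:
                flaky_section(n)
                break  # this section is done; proceed to the next
            except RuntimeError as err:
                print(f"{err}; rerunning section {n}")

run_job(4)
[/code]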
A different system used Williams tube memory (basically a CRT) for temporary storage of a job's data. It had a whopping 1024 BITS of storage available.
And then there was the magnetic drum (an early forerunner of the hard disk). It was a very large rotating drum coated with magnetic material, with read heads staggered along it (in a spiral). It would have been too slow to store your instructions (each a single line of assembler-level code) sequentially on the drum, because by the time one instruction finished executing, the head would have moved past the next instruction's slot and you would have had to wait for the drum to come all the way around to that spot again. Instead, your next instruction might be two or three (or more) words down the drum, so the head would arrive just ahead of it when you needed that instruction. The distance between "sequential" instructions depended on how long each instruction took to execute - and each class of instruction had a different execution time. And if a branch was possible, you had to put each possible branch target under a different read head. There were compilers to help with the spacing, but a good programmer could do a better job.
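For anyone who never had to think this way, here is a rough sketch in Python of the placement trick. The 50-word track, the timings, and the fallback of sliding to the next free word are all made-up illustration - not any particular machine's or assembler's actual scheme:

[code]
DRUM_WORDS = 50  # hypothetical number of word positions on one track

def place_instructions(exec_times, start=0):
    """Greedily assign a drum address to each instruction.

    exec_times: how long each instruction takes to execute, measured
    in word-times (i.e., how many word positions the drum rotates past
    the head while that instruction runs).
    Returns a list of (address, exec_time) pairs.
    """
    used = set()
    layout = []
    addr = start
    for t in exec_times:
        # If the ideal slot is already occupied, slide forward to the
        # next free word - a small extra latency instead of waiting a
        # whole revolution.
        while addr in used:
            addr = (addr + 1) % DRUM_WORDS
        used.add(addr)
        layout.append((addr, t))
        # While this instruction executes, the drum rotates t word
        # positions, so the ideal address for the NEXT instruction is
        # t words further along the track.
        addr = (addr + t) % DRUM_WORDS
    return layout

# Example: a mix of fast and slow "instructions" (times in word-times)
print(place_instructions([3, 5, 3, 10, 2]))
[/code]

A naive sequential layout would force a near-full revolution between every pair of instructions; spacing each one by its predecessor's execution time means the head is almost never left waiting.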
Thanks for the trip down memory (pun intended) lane.