Would a C compiler for the Apollo Guidance Computer be plausible?

The Apollo Guidance Computer was used to control the command/service module and lunar module on the missions to the moon. (Definitely a retrocomputer!) As noted in this answer, programs were written in assembly language. There are several emulators available today, including one which can be run in a web browser.



Even though the AGC was invented before the C programming language, is a C compiler possible for this architecture? If not, why?



For the purposes of this question, a satisfactory compiler would support all of the C operators (including arithmetic, boolean, structure, and pointer) needed for the original purpose of the AGC: notably, real-time signal processing and control. It does not have to be a lunar mission; the AGC was also used in a Navy rescue submarine and in the first airplane with computer fly-by-wire control.



Less important but nice to have:



  • Originally I included structure and pointer operations as a requirement. However, arrays with indices would probably suffice.

  • Ability to act as a general-purpose platform.

  • Compliance with one or more standards (including but not limited to K&R, ANSI, and Embedded C).

  • Floating point. The original software used fixed-point numbers, with subroutines for subtraction, multiplication, and division. Such numbers can be declared with Embedded C's fixed-point types (see the sketch after this list). We'll call that good enough, even if it is possible to implement IEEE floating point.

  • Standard libraries or system calls (i.e. stdio should not be a concern).
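
A minimal sketch of that fixed-point option under Embedded C (ISO/IEC TR 18037), assuming a compiler that provides <stdfix.h>; the identifiers here are illustrative only:

    #include <stdfix.h>   /* Embedded C fixed-point types (TR 18037) */

    /* A signed _Fract holds values in [-1, 1), matching the AGC convention
       of scaling every quantity into a fractional range. */
    _Fract pitch_rate = 0.125r;      /* the 'r' suffix marks a _Fract constant */

    _Fract damp(_Fract x)
    {
        return x * 0.5r - 0.0625r;   /* compiles to integer multiply/shift,
                                        no floating-point library needed */
    }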

The compiler would be hosted on another system, not on the AGC itself.



I hope these clarifications help!



(Photograph of Apollo Director of Software Engineering Margaret Hamilton next to the source code of her team)







asked Sep 3 at 16:20 by Dr Sheldon, edited Sep 4 at 1:39 (score 20)


















  • "A satisfactory compiler..." - for what purpose? – Bruce Abbott, Sep 3 at 21:52

  • The Standard does explicitly recognize that an implementation might be freestanding, with no support for much of the standard library, rather than hosted, with full support for it. – Davislor, Sep 3 at 23:04

  • It will depend heavily on the level of compliance; e.g., I'm not sure the standard library could fit in memory. – user3528438, Sep 3 at 23:21

  • This is weird. The guidance computer is much more powerful than a lot of processors I use today and for which there are a myriad of commercial C compilers - including floats and pointers in 512 words of code and 27 bytes of RAM. – pipe, Sep 4 at 7:55

  • @PeterCordes: While the PIC10F series doesn't include crystal oscillator circuitry, it was not uncommon to run older PICs off a 32768 Hz crystal, which would result in them executing 8192 instructions/second, while consuming very little power. – supercat, Sep 5 at 4:29



3 Answers

















Accepted answer (score 3) by Erik Eidt, answered Sep 5 at 0:31










One of the biggest problems with C for this architecture is the fragmented address space.  You would almost want some extensions for C that direct the compiler where to locate (global) data so that the various data would be accessible in an easy and known way from the code that uses it.  Somewhat reminiscent of FORTRAN Common Blocks...



Consider for a minute the 8086's extended, 20-bit addressing. Compilers for that architecture had to choose a memory layout model for program execution. There are basically three options:



  • Stick with 16-bit pointers — and forgo the larger memory for the program (i.e. everything fits in 64k), leaving that additional address space for running multiple programs (rather than for running larger programs).


  • Use 32-bit pointers to store 20-bit addresses — this means that every pointer dereference or array indexing operation requires multiple instructions, involving swapping of segment registers and the like.  So, a simple *p++ = *q++; becomes a dozen or more instructions, whereas it is ideally a single instruction. (A concrete illustration follows this list.)


  • Use 16-bit spaces for each of code, global data, stack, and heap.  Thus programs of 256k are possible, with 64k of each of the above.  This was a reasonable option for Pascal, which is less pointer-oriented (it has reference parameters, for example), but not as much for C, which is much more pointer-happy (e.g. using pointers instead of reference parameters).
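
For flavor, here is roughly how option 2 surfaced in real DOS-era compilers (Borland and Microsoft C) via near/far pointer keywords: vendor extensions of exactly the kind an AGC compiler would need.

    /* DOS-era memory-model extensions, shown for comparison only: */
    char near *np;   /* 16-bit offset: fast, but confined to one 64k segment */
    char far  *fp;   /* 32-bit segment:offset pair: reaches the full address
                        space, but costs extra instructions per dereference */

    void copy_byte(char far *dst, char far *src)
    {
        *dst = *src;   /* loads/stores via les plus a segment override */
    }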


Architectures with paged memory banks selected by segment-specifying registers are surprisingly easy for a human to program in assembly, but hard for a compiler to target.  These architectures typically have a common base page, perfect for some of the globals, but easy to overflow if you put all the globals there.  So, again, you would almost want some location directives in C to inform the compiler which globals should go in the coveted base page, vs. elsewhere.
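
A modern analogue of such location directives, as the first comment below also notes, is a section attribute plus a linker script. A hypothetical AGC-flavored version (the section names here are invented) might look like:

    /* Hypothetical placement directives, modeled on the section
       attributes real embedded toolchains provide: */
    int dsky_flags __attribute__((section(".erasable.bank0")));   /* hot global: base page */
    int star_table[37] __attribute__((section(".fixed.bank21"))); /* bulk data: switched bank */

A linker script would then map each named section onto the corresponding erasable or fixed memory bank.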



Apparently the AGC has two levels of memory segments, the second added by the Block II expansion (via the SBank/SuperBank bit).  These things tend to wreak havoc with models of code generation and C's expectation that a universal full-address-space-sized pointer can refer to anything: code, data, stack, heap...



That's not to say it couldn't be done, but you'd want a number of language extensions, or else you'd find it extremely difficult to reach the efficiency of hand-written assembly.


























  • It's been a while since I used it, but I'm convinced one of the MSDOS C compilers I worked with had a way of annotating variables to put them into a common block. Most modern C compilers provide a way of specifying the object file section to put an object in; once you've done that, a linker script can be used to place the sections in appropriate locations. – Jules, Sep 6 at 8:57

  • There were two different ways of using 32-bit pointers: far and huge. A far pointer would be limited to accessing an object of 65,520 bytes or less anywhere in memory, but code to access far pointers could be reasonably efficient. Given int *p,*q;, *p++ += *q++; would generate something like les bx,[bp+12] / add word [bp+12],2 / mov ax,[es:bx] / les bx,[bp+8] / add word [bp+8],2 / add [es:bx],ax. Six instructions total. Compared with using near pointers, this requires using les instead of mov... – supercat, Sep 6 at 17:00

  • ...and requires adding an es: prefix for the operations that use the pointers. A huge pointer could handle individual objects up to the full size of memory, but almost any address computation would either require a subroutine call or bloat the code enormously. Something really simple like p++ would become something like add byte [bp+8],2 / cmp byte [bp+8],17 / jb ok / inc word [bp+10] / sub byte [bp+8],16 / ok: [which might be practical without a subroutine call], but the best code for something like p+=someLong would probably be something like... – supercat, Sep 6 at 17:07

  • uint24_t temp = p_ofs<<4 + p_seg<<8; temp += someLong<<4; p_ofs = (temp & 240) >> 4; p_seg = temp>>8; Too painful for words. Even p+=someInt isn't all that much better. I think most complaints about the 8086 come from people who don't understand how to use far pointers effectively, since the code for them is more efficient than for any other 16-bit processor that needs to access 65,520-byte objects at arbitrary locations in RAM. – supercat, Sep 6 at 17:26

















Answer (score 20) by supercat, answered Sep 3 at 18:26













A fully conforming compiler would be impractical, but it would probably be possible to write a compiler for a subset of the language with a couple of features removed:



  1. While it would be possible for a compiler to emulate recursion, code that needs to support re-entrancy would likely be much less efficient than code which doesn't. Given that the Standard imposes no requirement that compilers support recursion usefully (there's no guarantee that it be possible to nest any particular function more than one level deep without blowing the stack), simply refusing to support recursion would seem more practical than generating re-entrant code for functions, and more "honest" than accepting such code but behaving in goofy fashion if functions are invoked recursively. (A sketch of the resulting allocation strategy follows below.)


  2. The Standard would require that an implementation support floating-point math on values with a mantissa of more than 32 bits. Limiting floating-point computations to a 32-bit or even 16-bit mantissa would allow them to be handled with smaller and faster code than would be possible with a standard-conforming double.


Usable C compilers exist for microprocessors whose architectures are even less "C-friendly", such as the CDP1802 (an interesting chip, but the code to accomplish something like ptr[index] = 1234; would take 21 instructions), so the Apollo computer, which has an INDEX instruction, doesn't look too bad by comparison, at least if code doesn't need to support re-entrant functions.
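
To make point 1 concrete, here is a minimal sketch (an illustration, not from the answer) of the "compiled stack" technique that C compilers for stackless microcontrollers such as the 8051 and PIC actually use: with recursion banned, every function's parameters and locals can live at fixed addresses, so no runtime stack frames are needed.

    /* With recursion forbidden, the compiler can allocate each
       function's frame statically: */
    static int scale_x;      /* the "parameter" of scale() */
    static int scale_t;      /* a "local" of scale() */

    static int scale(void)   /* callers store the argument into scale_x first */
    {
        scale_t = scale_x << 2;
        return scale_t + 1;
    }

Frames of functions that can never be live at the same time may even share the same addresses, which a whole-program call graph makes straightforward to prove.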


























  • Wasn't there a recognized subset of C without floating point, back in the day? (A quick web search didn't find it - but I can't remember exactly what it was called.) – alephzero, Sep 3 at 19:49

  • The C standard actually doesn't set any limits on floating point precision. It just says double needs to be at least as precise as float. It's just a convention that most compilers use IEEE 754 for floating point, and that says "float is 32 bits" (both mantissa and exponent). – tofro, Sep 3 at 20:10

  • There are a few C compilers for microcontrollers that implement software floating point as a library and allow you to not link the floating point library if your program only uses integers. – slebetman, Sep 3 at 23:15

  • @slebetman: That was also the case for early PC C compilers (e.g., Microsoft QuickC and Borland Turbo C) before the x87 FPU became a standard feature. – dan04, Sep 4 at 4:33

  • @dan04: Same for the Amiga / MC68881, and I guess lots of other platforms. – DevSolar, Sep 4 at 7:34

















Answer (score 8) by Raffzahn














Even though the AGC was invented before the C programming language, is a C compiler possible for this architecture?




To begin with, it depends on the value of "for" :))



  • If the question is about whether a compiler can be written (on some computer) to produce code for the AGC (a.k.a. a cross-compiler), the answer is a clear Yes.


  • If it asks for a compiler running on the AGC itself, it gets harder: not so much fitting a compiler into its 36 kWords of program ROM, but finding a practical way of inputting a source file to be compiled. Here I would go for a theoretical yes, but, for all my love of low-level interfaces, in practice the answer is No.



For the purposes of this question, a satisfactory compiler would support all of the C operators (including arithmetic, boolean, structure, and pointer). It would not need to support all of the standard libraries or system calls (i.e. stdio should not be a concern).




"Satisfactory" is a nice word - just not very clear. Is it satisfactionary that it for example the only data type available is a 15 bit word, or is floating point mandatory? Does it only need to follow basic K&R, or is C99 (or C11) a goal?




The compiler would be hosted on another system, not on the AGC itself.




Sounds good. So the answer is a clear Yes.



It is possible to write a C compiler (even on a computer contemporary with the AGC) following K&R, even including FP, whose output can be loaded (well, wired) into the AGC. For FP it might have to carry a considerably large runtime (library), so C without FP and only a 15-bit integer data type might be the preferred solution, to keep as much ROM as possible for useful code.
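
As a sketch of why that 15-bit word shapes the choice (an illustration, based on the AGC's documented ones'-complement, end-around-carry arithmetic), here is what a compiler's runtime would have to emulate for a single add:

    #include <stdint.h>

    #define MASK15 0x7FFF  /* the AGC's 15-bit data word */

    /* Ones'-complement add with end-around carry, as the AGC ALU does it.
       Inputs and result are 15-bit ones'-complement values. */
    static uint16_t agc_add(uint16_t a, uint16_t b)
    {
        uint32_t sum = (uint32_t)a + b;
        sum = (sum & MASK15) + (sum >> 15);   /* fold the carry back in */
        return (uint16_t)(sum & MASK15);
    }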



And then there is the question asked in the title (emphasis by me) which somewhat got lost in the question text:




Would a C compiler for the Apollo Guidance Computer be plausible?




Here the answer is a clear No.



The result of a C compiler will always be inferior to what an assembly programmer can squeeze out. Considering the small code space (36 kWords) and the complex job to handle, assembly might have been the only way to go.



If anything, a language more suited to system programming than C would have been used - most likely something similar to MOL-360 or PL/I (or rather its specialized cousin PL/S).
























  • "a C compiler will always be inferior...": Note that optimizers can easily outsmart humans, but you are correct that humans can make assumptions that optimizers are not allowed to. On a general-purpose computer, I'd say a C compiler will usually be superior to hand-crafted assembly, and that's ignoring all the advantages of a higher-level language over assembly (readability, etc.) – isanae, Sep 4 at 1:17

  • @isanae As I see it, if a compiler outsmarts a human assembly programmer, then he wasn't very smart to start with. Besides not seeing many advantages of high-level languages, I still haven't seen in ~40 years of programming a single compiler producing superior code to what a human programmer can do (not least due to the fact that all their tricks were devised by human programmers in the first place). Compilers may only bring an advantage compared to less-than-good programmers. But then again, those are the same people using less-than-fitting algorithms - something no compiler can put right again. – Raffzahn, Sep 4 at 1:32

  • Ah yes, the famed Assembly Programmer. If you cannot see many advantages to high level languages compared to assembly, then we have such a fundamental disagreement and exceptionally different professional experience that I don't think we'll ever agree on who's the best optimizer ;) – isanae, Sep 4 at 1:49

  • @isanae: You happen to have picked a really bad architecture to try to make your point with. The statement about the compiler output beating a decent assembly programmer wasn't really true until after the rearranging processors came out. On a fixed-order, fixed-execution-cycle processor, the assembly programmer tends to win. – Joshua, Sep 4 at 1:58

  • @isanae: Current compilers (gcc and clang) are still terrible for ARM SIMD intrinsics (which is weird because they're very good with x86 SIMD intrinsics). Writing manually-vectorized loops in asm by hand is definitely still recommended for ARM (but not x86 or PowerPC). I totally agree with you that writing whole programs in asm is not sensible these days, but knowing asm, and how your C or C++ will compile to asm, is useful to keep in mind. (I basically can't stop myself from thinking in terms of asm even if I wanted to, except by writing in bash or perl instead of C/C++.) – Peter Cordes, Sep 4 at 8:52











Your Answer







StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "648"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);

else
createEditor();

);

function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
convertImagesToLinks: false,
noModals: false,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);



);













 

draft saved


draft discarded


















StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f7466%2fwould-a-c-compiler-for-the-apollo-guidance-computer-be-plausible%23new-answer', 'question_page');

);

Post as a guest






























3 Answers
3






active

oldest

votes








3 Answers
3






active

oldest

votes









active

oldest

votes






active

oldest

votes








up vote
3
down vote



accepted










One of the biggest problems with C for this architecture is the fragmented address space.  You would almost want some extensions for C that direct the compiler where to locate (global) data so that the various data would be accessible in an easy and known way from the code that uses it.  Somewhat reminiscent of FORTRAN Common Blocks...



Consider for a minute the 8086 extended, 20-bit addressing.  Compilers for that architecture had to choose a memory layout model for program execution.   There are basically three options:



  • Stick with 16-bit pointers — and forgo the larger memory for the program (i.e. everything fits in 64k), leaving that additional address space for running multiple programs (rather than for running larger programs).


  • Use 32-bit pointers to store 20-bit addresses — that means that every pointer dereference or array indexing operation required multiple instructions, involving swapping of segment registers and the like.  So, a simple *p++ = *q++; becomes a dozen or more instructions, whereas it is ideally a single instruction.


  • Use 16-bit spaces for each of code, global data, stack, and heap.  Thus programs of 256k are possible with 64k of each of the above.  This was a reasonable option for Pascal due to being a less pointer-oriented (by having reference parameters, for example), but not as much for C, which is much more pointer happy (e.g. using pointers instead of reference parameters).


Architectures with paged memory banks using segment-specifying registers are surprisingly easy to program by human in assembly but hard to work by a compiler.  These architectures typically use a common base page, perfect for some of the globals, but easy to overflow if you put all the globals there.  So, again, you would almost want some location directives in C to inform the compiler that these globals should go in the coveted base page, vs. elsewhere.



Apparently the AGC has two levels of memory segments, the second due to the expansion by the Block II (via the SBank/SuperBank bit).  These things tend to wreak havoc with models of code generation and C's expectation that a universal full-address-space-sized pointer can refer to anything: code, data, stack, heap...



That's not to say it couldn't be done, but you'd want a number of language extensions, or else you'd find it extremely difficult to reach the efficiency of hand-written assembly.






share|improve this answer




















  • It's been a while since I used it, but I'm convinced one of the MSDOS C compilers I worked with had a way of annotating variables to put them into a common block. Most modern C compilers provide a way of specifying the object file section to put an object in; once you've done that a linker script can be used to place the sections in appropriate locations.
    – Jules
    Sep 6 at 8:57










  • There were two different ways of using 32-bit pointers: far and huge. A far pointer would be limited to accessing an object of 65,520 bytes or less anywhere in memory, but code to access far pointers could be reanably efficient. Given int *p,*q;, *p++ += *q++; would generate something like les bx,[bp+12] / add word [bp+12], 2mov ax,[es:bx] / les bx,[bp+8] / add word [bp+8],2 / add [es:bx],ax. Six instructions total. Compared with using near pointers, this requires using les instead of mov...
    – supercat
    Sep 6 at 17:00











  • ...and requires adding an es: prefix for the operations that use the pointers. A huge pointer could handle individual objects up to the full size of memory, but almost any address computation would either require a subroutine call or bloat the code enormously. Something really simple like p++ would become something like add byte[bp+8],2 / cmp byte [bp+8],17 / jb ok / inc word [bp+10] / sub byte [bp+8],16 / ok: [which might be practical without a subroutine call] but the best code for something like p+=someLong would probably be something like...
    – supercat
    Sep 6 at 17:07










  • uint24_t temp = p_ofs <<4 + p_seg<<8; temp += someLong<<4; p_ofs = (temp & 240) >> 4; p_seg = temp>>8; Too painful for words. Even p+=someInt isn't all that much better. I think most complaints about the 8086 come from people who don't understand how to use far pointers effectively, since the code for them is more efficient than for any other 16-bit processor that needs to access 65,520 byte objects at arbitrary locations in RAM.
    – supercat
    Sep 6 at 17:26














up vote
3
down vote



accepted










One of the biggest problems with C for this architecture is the fragmented address space.  You would almost want some extensions for C that direct the compiler where to locate (global) data so that the various data would be accessible in an easy and known way from the code that uses it.  Somewhat reminiscent of FORTRAN Common Blocks...



Consider for a minute the 8086 extended, 20-bit addressing.  Compilers for that architecture had to choose a memory layout model for program execution.   There are basically three options:



  • Stick with 16-bit pointers — and forgo the larger memory for the program (i.e. everything fits in 64k), leaving that additional address space for running multiple programs (rather than for running larger programs).


  • Use 32-bit pointers to store 20-bit addresses — that means that every pointer dereference or array indexing operation required multiple instructions, involving swapping of segment registers and the like.  So, a simple *p++ = *q++; becomes a dozen or more instructions, whereas it is ideally a single instruction.


  • Use 16-bit spaces for each of code, global data, stack, and heap.  Thus programs of 256k are possible with 64k of each of the above.  This was a reasonable option for Pascal due to being a less pointer-oriented (by having reference parameters, for example), but not as much for C, which is much more pointer happy (e.g. using pointers instead of reference parameters).


Architectures with paged memory banks using segment-specifying registers are surprisingly easy to program by human in assembly but hard to work by a compiler.  These architectures typically use a common base page, perfect for some of the globals, but easy to overflow if you put all the globals there.  So, again, you would almost want some location directives in C to inform the compiler that these globals should go in the coveted base page, vs. elsewhere.



Apparently the AGC has two levels of memory segments, the second due to the expansion by the Block II (via the SBank/SuperBank bit).  These things tend to wreak havoc with models of code generation and C's expectation that a universal full-address-space-sized pointer can refer to anything: code, data, stack, heap...



That's not to say it couldn't be done, but you'd want a number of language extensions, or else you'd find it extremely difficult to reach the efficiency of hand-written assembly.






share|improve this answer




















  • It's been a while since I used it, but I'm convinced one of the MSDOS C compilers I worked with had a way of annotating variables to put them into a common block. Most modern C compilers provide a way of specifying the object file section to put an object in; once you've done that a linker script can be used to place the sections in appropriate locations.
    – Jules
    Sep 6 at 8:57










  • There were two different ways of using 32-bit pointers: far and huge. A far pointer would be limited to accessing an object of 65,520 bytes or less anywhere in memory, but code to access far pointers could be reanably efficient. Given int *p,*q;, *p++ += *q++; would generate something like les bx,[bp+12] / add word [bp+12], 2mov ax,[es:bx] / les bx,[bp+8] / add word [bp+8],2 / add [es:bx],ax. Six instructions total. Compared with using near pointers, this requires using les instead of mov...
    – supercat
    Sep 6 at 17:00











  • ...and requires adding an es: prefix for the operations that use the pointers. A huge pointer could handle individual objects up to the full size of memory, but almost any address computation would either require a subroutine call or bloat the code enormously. Something really simple like p++ would become something like add byte[bp+8],2 / cmp byte [bp+8],17 / jb ok / inc word [bp+10] / sub byte [bp+8],16 / ok: [which might be practical without a subroutine call] but the best code for something like p+=someLong would probably be something like...
    – supercat
    Sep 6 at 17:07










  • uint24_t temp = p_ofs <<4 + p_seg<<8; temp += someLong<<4; p_ofs = (temp & 240) >> 4; p_seg = temp>>8; Too painful for words. Even p+=someInt isn't all that much better. I think most complaints about the 8086 come from people who don't understand how to use far pointers effectively, since the code for them is more efficient than for any other 16-bit processor that needs to access 65,520 byte objects at arbitrary locations in RAM.
    – supercat
    Sep 6 at 17:26












up vote
3
down vote



accepted







up vote
3
down vote



accepted






One of the biggest problems with C for this architecture is the fragmented address space.  You would almost want some extensions for C that direct the compiler where to locate (global) data so that the various data would be accessible in an easy and known way from the code that uses it.  Somewhat reminiscent of FORTRAN Common Blocks...



Consider for a minute the 8086 extended, 20-bit addressing.  Compilers for that architecture had to choose a memory layout model for program execution.   There are basically three options:



  • Stick with 16-bit pointers — and forgo the larger memory for the program (i.e. everything fits in 64k), leaving that additional address space for running multiple programs (rather than for running larger programs).


  • Use 32-bit pointers to store 20-bit addresses — that means that every pointer dereference or array indexing operation required multiple instructions, involving swapping of segment registers and the like.  So, a simple *p++ = *q++; becomes a dozen or more instructions, whereas it is ideally a single instruction.


  • Use 16-bit spaces for each of code, global data, stack, and heap.  Thus programs of 256k are possible with 64k of each of the above.  This was a reasonable option for Pascal due to being a less pointer-oriented (by having reference parameters, for example), but not as much for C, which is much more pointer happy (e.g. using pointers instead of reference parameters).


Architectures with paged memory banks using segment-specifying registers are surprisingly easy to program by human in assembly but hard to work by a compiler.  These architectures typically use a common base page, perfect for some of the globals, but easy to overflow if you put all the globals there.  So, again, you would almost want some location directives in C to inform the compiler that these globals should go in the coveted base page, vs. elsewhere.



Apparently the AGC has two levels of memory segments, the second due to the expansion by the Block II (via the SBank/SuperBank bit).  These things tend to wreak havoc with models of code generation and C's expectation that a universal full-address-space-sized pointer can refer to anything: code, data, stack, heap...



That's not to say it couldn't be done, but you'd want a number of language extensions, or else you'd find it extremely difficult to reach the efficiency of hand-written assembly.






share|improve this answer












One of the biggest problems with C for this architecture is the fragmented address space.  You would almost want some extensions for C that direct the compiler where to locate (global) data so that the various data would be accessible in an easy and known way from the code that uses it.  Somewhat reminiscent of FORTRAN Common Blocks...



Consider for a minute the 8086 extended, 20-bit addressing.  Compilers for that architecture had to choose a memory layout model for program execution.   There are basically three options:



  • Stick with 16-bit pointers — and forgo the larger memory for the program (i.e. everything fits in 64k), leaving that additional address space for running multiple programs (rather than for running larger programs).


  • Use 32-bit pointers to store 20-bit addresses — that means that every pointer dereference or array indexing operation required multiple instructions, involving swapping of segment registers and the like.  So, a simple *p++ = *q++; becomes a dozen or more instructions, whereas it is ideally a single instruction.


  • Use 16-bit spaces for each of code, global data, stack, and heap.  Thus programs of 256k are possible with 64k of each of the above.  This was a reasonable option for Pascal due to being a less pointer-oriented (by having reference parameters, for example), but not as much for C, which is much more pointer happy (e.g. using pointers instead of reference parameters).


Architectures with paged memory banks using segment-specifying registers are surprisingly easy to program by human in assembly but hard to work by a compiler.  These architectures typically use a common base page, perfect for some of the globals, but easy to overflow if you put all the globals there.  So, again, you would almost want some location directives in C to inform the compiler that these globals should go in the coveted base page, vs. elsewhere.



Apparently the AGC has two levels of memory segments, the second due to the expansion by the Block II (via the SBank/SuperBank bit).  These things tend to wreak havoc with models of code generation and C's expectation that a universal full-address-space-sized pointer can refer to anything: code, data, stack, heap...



That's not to say it couldn't be done, but you'd want a number of language extensions, or else you'd find it extremely difficult to reach the efficiency of hand-written assembly.







share|improve this answer












share|improve this answer



share|improve this answer










answered Sep 5 at 0:31









Erik Eidt

682310




682310











  • It's been a while since I used it, but I'm convinced one of the MSDOS C compilers I worked with had a way of annotating variables to put them into a common block. Most modern C compilers provide a way of specifying the object file section to put an object in; once you've done that a linker script can be used to place the sections in appropriate locations.
    – Jules
    Sep 6 at 8:57










  • There were two different ways of using 32-bit pointers: far and huge. A far pointer would be limited to accessing an object of 65,520 bytes or less anywhere in memory, but code to access far pointers could be reanably efficient. Given int *p,*q;, *p++ += *q++; would generate something like les bx,[bp+12] / add word [bp+12], 2mov ax,[es:bx] / les bx,[bp+8] / add word [bp+8],2 / add [es:bx],ax. Six instructions total. Compared with using near pointers, this requires using les instead of mov...
    – supercat
    Sep 6 at 17:00











  • ...and requires adding an es: prefix for the operations that use the pointers. A huge pointer could handle individual objects up to the full size of memory, but almost any address computation would either require a subroutine call or bloat the code enormously. Something really simple like p++ would become something like add byte[bp+8],2 / cmp byte [bp+8],17 / jb ok / inc word [bp+10] / sub byte [bp+8],16 / ok: [which might be practical without a subroutine call] but the best code for something like p+=someLong would probably be something like...
    – supercat
    Sep 6 at 17:07










  • uint24_t temp = p_ofs <<4 + p_seg<<8; temp += someLong<<4; p_ofs = (temp & 240) >> 4; p_seg = temp>>8; Too painful for words. Even p+=someInt isn't all that much better. I think most complaints about the 8086 come from people who don't understand how to use far pointers effectively, since the code for them is more efficient than for any other 16-bit processor that needs to access 65,520 byte objects at arbitrary locations in RAM.
    – supercat
    Sep 6 at 17:26
















  • It's been a while since I used it, but I'm convinced one of the MSDOS C compilers I worked with had a way of annotating variables to put them into a common block. Most modern C compilers provide a way of specifying the object file section to put an object in; once you've done that a linker script can be used to place the sections in appropriate locations.
    – Jules
    Sep 6 at 8:57










  • There were two different ways of using 32-bit pointers: far and huge. A far pointer would be limited to accessing an object of 65,520 bytes or less anywhere in memory, but code to access far pointers could be reanably efficient. Given int *p,*q;, *p++ += *q++; would generate something like les bx,[bp+12] / add word [bp+12], 2mov ax,[es:bx] / les bx,[bp+8] / add word [bp+8],2 / add [es:bx],ax. Six instructions total. Compared with using near pointers, this requires using les instead of mov...
    – supercat
    Sep 6 at 17:00











  • ...and requires adding an es: prefix for the operations that use the pointers. A huge pointer could handle individual objects up to the full size of memory, but almost any address computation would either require a subroutine call or bloat the code enormously. Something really simple like p++ would become something like add byte[bp+8],2 / cmp byte [bp+8],17 / jb ok / inc word [bp+10] / sub byte [bp+8],16 / ok: [which might be practical without a subroutine call] but the best code for something like p+=someLong would probably be something like...
    – supercat
    Sep 6 at 17:07










  • uint24_t temp = p_ofs <<4 + p_seg<<8; temp += someLong<<4; p_ofs = (temp & 240) >> 4; p_seg = temp>>8; Too painful for words. Even p+=someInt isn't all that much better. I think most complaints about the 8086 come from people who don't understand how to use far pointers effectively, since the code for them is more efficient than for any other 16-bit processor that needs to access 65,520 byte objects at arbitrary locations in RAM.
    – supercat
    Sep 6 at 17:26















It's been a while since I used it, but I'm convinced one of the MSDOS C compilers I worked with had a way of annotating variables to put them into a common block. Most modern C compilers provide a way of specifying the object file section to put an object in; once you've done that a linker script can be used to place the sections in appropriate locations.
– Jules
Sep 6 at 8:57




It's been a while since I used it, but I'm convinced one of the MSDOS C compilers I worked with had a way of annotating variables to put them into a common block. Most modern C compilers provide a way of specifying the object file section to put an object in; once you've done that a linker script can be used to place the sections in appropriate locations.
– Jules
Sep 6 at 8:57












There were two different ways of using 32-bit pointers: far and huge. A far pointer would be limited to accessing an object of 65,520 bytes or less anywhere in memory, but code to access far pointers could be reanably efficient. Given int *p,*q;, *p++ += *q++; would generate something like les bx,[bp+12] / add word [bp+12], 2mov ax,[es:bx] / les bx,[bp+8] / add word [bp+8],2 / add [es:bx],ax. Six instructions total. Compared with using near pointers, this requires using les instead of mov...
– supercat
Sep 6 at 17:00





There were two different ways of using 32-bit pointers: far and huge. A far pointer would be limited to accessing an object of 65,520 bytes or less anywhere in memory, but code to access far pointers could be reanably efficient. Given int *p,*q;, *p++ += *q++; would generate something like les bx,[bp+12] / add word [bp+12], 2mov ax,[es:bx] / les bx,[bp+8] / add word [bp+8],2 / add [es:bx],ax. Six instructions total. Compared with using near pointers, this requires using les instead of mov...
– supercat
Sep 6 at 17:00













...and requires adding an es: prefix for the operations that use the pointers. A huge pointer could handle individual objects up to the full size of memory, but almost any address computation would either require a subroutine call or bloat the code enormously. Something really simple like p++ would become something like add byte[bp+8],2 / cmp byte [bp+8],17 / jb ok / inc word [bp+10] / sub byte [bp+8],16 / ok: [which might be practical without a subroutine call] but the best code for something like p+=someLong would probably be something like...
– supercat
Sep 6 at 17:07




...and requires adding an es: prefix for the operations that use the pointers. A huge pointer could handle individual objects up to the full size of memory, but almost any address computation would either require a subroutine call or bloat the code enormously. Something really simple like p++ would become something like add byte[bp+8],2 / cmp byte [bp+8],17 / jb ok / inc word [bp+10] / sub byte [bp+8],16 / ok: [which might be practical without a subroutine call] but the best code for something like p+=someLong would probably be something like...
– supercat
Sep 6 at 17:07












uint24_t temp = p_ofs <<4 + p_seg<<8; temp += someLong<<4; p_ofs = (temp & 240) >> 4; p_seg = temp>>8; Too painful for words. Even p+=someInt isn't all that much better. I think most complaints about the 8086 come from people who don't understand how to use far pointers effectively, since the code for them is more efficient than for any other 16-bit processor that needs to access 65,520 byte objects at arbitrary locations in RAM.
– supercat
Sep 6 at 17:26




uint24_t temp = p_ofs <<4 + p_seg<<8; temp += someLong<<4; p_ofs = (temp & 240) >> 4; p_seg = temp>>8; Too painful for words. Even p+=someInt isn't all that much better. I think most complaints about the 8086 come from people who don't understand how to use far pointers effectively, since the code for them is more efficient than for any other 16-bit processor that needs to access 65,520 byte objects at arbitrary locations in RAM.
– supercat
Sep 6 at 17:26










up vote
20
down vote













A full conforming compiler would be impractical, but it would probably be possible to write a compiler for a subset of the language which a couple of features removed:



  1. While it would be possible for a compiler to emulate recursion, code that needs to support re-entrancy would likely be much less efficient that code which doesn't. Given that the Standard imposes no requirement that compilers support recursion usefully (there's no guarantee that it be possible to nest any particular function more than one deep without bombing the stack) simply refusing to support recursion would seem more practical than generating re-entrant code for functions, and more "honest" than accepting such code but behaving in goofy fashion if functions are invoked recursively.


  2. The Standard would require that an implementation support floating-point math on values with a mantissa of greater than 32 bits. Limiting floating-point computations to a 32-bit or even 16-bit mantissa would allow them to be handled with smaller and faster code than would be possible with a standard-conforming "double".


Usable C compilers exist for microprocessors whose architecture is even less "C-friendly" such as the CDP1802 (interesting chip, but the code to accomplish something like "ptr[index] = 1234;" would take 21 instructions) so the Apollo computer, which has an INDEX instruction, doesn't look too bad by comparison, at least if code doesn't need to support re-entrant functions.






share|improve this answer




















  • Wasn't there a recognized subset of C without floating point, back in the day? (A quick web search didn't find it - but I can't remember exactly what it was called).
    – alephzero
    Sep 3 at 19:49






  • 17




    The C standard actually doesn't set any limits to floating point precision. It just says double needs to be at least as precise as float. It's just a convention that most ompulers use IEEE754 for floating point, and that says "float is 32 bits" (both mantissa and exponent).
    – tofro
    Sep 3 at 20:10






  • 6




    There's a few C compilers for microcontrollers that implement software floating point as a library and allow you to not link the floating point library if your program only use integers.
    – slebetman
    Sep 3 at 23:15






  • 3




    @slebetman: That was also the case for early PC C compilers (e.g., Microsoft QuickC and Borland Turbo C) before the x87 FPU became a standard feature.
    – dan04
    Sep 4 at 4:33






  • 1




    @dan04: Same for the Amiga / MC68851, and I guess lots of other platforms.
    – DevSolar
    Sep 4 at 7:34














up vote
20
down vote













A full conforming compiler would be impractical, but it would probably be possible to write a compiler for a subset of the language which a couple of features removed:



  1. While it would be possible for a compiler to emulate recursion, code that needs to support re-entrancy would likely be much less efficient that code which doesn't. Given that the Standard imposes no requirement that compilers support recursion usefully (there's no guarantee that it be possible to nest any particular function more than one deep without bombing the stack) simply refusing to support recursion would seem more practical than generating re-entrant code for functions, and more "honest" than accepting such code but behaving in goofy fashion if functions are invoked recursively.


  2. The Standard would require that an implementation support floating-point math on values with a mantissa of greater than 32 bits. Limiting floating-point computations to a 32-bit or even 16-bit mantissa would allow them to be handled with smaller and faster code than would be possible with a standard-conforming "double".


Usable C compilers exist for microprocessors whose architecture is even less "C-friendly" such as the CDP1802 (interesting chip, but the code to accomplish something like "ptr[index] = 1234;" would take 21 instructions) so the Apollo computer, which has an INDEX instruction, doesn't look too bad by comparison, at least if code doesn't need to support re-entrant functions.






share|improve this answer




















  • Wasn't there a recognized subset of C without floating point, back in the day? (A quick web search didn't find it - but I can't remember exactly what it was called).
    – alephzero
    Sep 3 at 19:49






  • 17




    The C standard actually doesn't set any limits to floating point precision. It just says double needs to be at least as precise as float. It's just a convention that most ompulers use IEEE754 for floating point, and that says "float is 32 bits" (both mantissa and exponent).
    – tofro
    Sep 3 at 20:10






  • 6




    There's a few C compilers for microcontrollers that implement software floating point as a library and allow you to not link the floating point library if your program only use integers.
    – slebetman
    Sep 3 at 23:15






  • 3




    @slebetman: That was also the case for early PC C compilers (e.g., Microsoft QuickC and Borland Turbo C) before the x87 FPU became a standard feature.
    – dan04
    Sep 4 at 4:33






  • 1




    @dan04: Same for the Amiga / MC68851, and I guess lots of other platforms.
    – DevSolar
    Sep 4 at 7:34












up vote
20
down vote










up vote
20
down vote









answered Sep 3 at 18:26









supercat












  • Wasn't there a recognized subset of C without floating point, back in the day? (A quick web search didn't find it - but I can't remember exactly what it was called.)
    – alephzero
    Sep 3 at 19:49

  • 17

    The C standard actually doesn't set any limits on floating-point precision. It just says double needs to be at least as precise as float. It's just a convention that most compilers use IEEE 754 for floating point, and that says "float is 32 bits" (both mantissa and exponent).
    – tofro
    Sep 3 at 20:10

  • 6

    There are a few C compilers for microcontrollers that implement software floating point as a library and allow you to not link the floating-point library if your program only uses integers.
    – slebetman
    Sep 3 at 23:15

  • 3

    @slebetman: That was also the case for early PC C compilers (e.g., Microsoft QuickC and Borland Turbo C) before the x87 FPU became a standard feature.
    – dan04
    Sep 4 at 4:33

  • 1

    @dan04: Same for the Amiga / MC68881, and I guess lots of other platforms.
    – DevSolar
    Sep 4 at 7:34
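
A side note tying tofro's comment to the answer above: the standard pins down only minimum decimal precision (FLT_DIG at least 6, DBL_DIG at least 10), not IEEE 754 widths - and meeting DBL_DIG >= 10 is exactly why a conforming double needs a mantissa wider than 32 bits. A minimal check using only standard <float.h> (note FLT_MANT_DIG counts digits in base FLT_RADIX, i.e., bits on a binary machine):

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        /* What this implementation provides; the standard only demands
           FLT_DIG >= 6 and DBL_DIG >= 10 decimal digits. */
        printf("float:  %d mantissa digits (base %d), %d decimal digits\n",
               FLT_MANT_DIG, FLT_RADIX, FLT_DIG);
        printf("double: %d mantissa digits (base %d), %d decimal digits\n",
               DBL_MANT_DIG, FLT_RADIX, DBL_DIG);
        return 0;
    }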


















up vote
8
down vote














Even though the AGC was invented before the C programming language, is a C compiler possible for this architecture?




To begin with, it depends on the value of "for" :))



  • If the question is whether a compiler can be written (on some other computer) to produce code for the AGC (i.e., a cross-compiler), the answer is a clear Yes.


  • If it asks for a compiler running on the AGC itself, it gets harder - not so much for fitting a compiler into its 36 kWords of program ROM, but for the lack of any practical way of inputting source code to be compiled. Here I would go for a theoretical yes, but, with all love for low-level interfaces, in practice the answer is No.



For the purposes of this question, a satisfactory compiler would support all of the C operators (including arithmetic, boolean, structure, and pointer). It would not need to support all of the standard libraries or system calls (i.e. stdio should not be a concern).




"Satisfactory" is a nice word - just not very clear. Is it satisfactionary that it for example the only data type available is a 15 bit word, or is floating point mandatory? Does it only need to follow basic K&R, or is C99 (or C11) a goal?




The compiler would be hosted on another system, not on the AGC itself.




Sounds good. So the answer is a clear Yes.



It is possible to write a C compiler (even on a computer contemporary with the AGC) following K&R, even including FP, whose output can be loaded (well, wired) into the AGC. For FP it might have to carry a considerably large runtime library, so C without FP and only a 15-bit integer data type might be the preferred solution, keeping as much ROM as possible for useful code. (A sketch of that fixed-point style follows.)
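
For a feel of what "C without FP and only a 15-bit integer data type" could look like: the AGC's native single-precision datum was a 15-bit fraction in (-1, 1), and the original software did its "floating point" work in fixed point. Below is a minimal sketch of that style in modern C, under the simplifying assumptions that two's complement stands in for the AGC's one's complement and that an arithmetic right shift on negative products is available (true of practically all two's-complement compilers); the names frac15 and fmul15 are invented for illustration.

    #include <stdint.h>

    /* AGC-style single-precision fraction: sign + 14 fraction bits,
       value = raw / 2^14, so the representable range is (-1, 1). */
    typedef int16_t frac15;

    /* (a/2^14) * (b/2^14) = (a*b)/2^28; rescale the double-length
       product back to 2^14, much as the AGC's MP instruction invites. */
    static frac15 fmul15(frac15 a, frac15 b)
    {
        return (frac15)(((int32_t)a * b) >> 14);
    }

    int main(void)
    {
        frac15 half    = 0x2000;                   /* 0.5   */
        frac15 quarter = 0x1000;                   /* 0.25  */
        return fmul15(half, quarter) == 0x0800     /* 0.125 */
               ? 0 : 1;                            /* 0 on success */
    }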



And then there is the question asked in the title (emphasis mine), which somewhat got lost in the question text:




Would a C compiler for the Apollo Guidance Computer be plausible?




Here the answer is a clear No.



The output of a C compiler will always be inferior to what an assembly programmer can squeeze out. Considering the small code space (36 kWords) and the complex job to handle, assembly might have been the only way to go.



If anything, a language more suited to system programming than C would have been used - most likely something similar to MOL-360 or PL/I (or rather its specialized cousin PL/S).






edited Sep 4 at 1:45

























answered Sep 4 at 0:08









Raffzahn

  • 5

    "a C compiler will always be inferior...": Note that optimizers can easily outsmart humans, but you are correct that humans can make assumptions that optimizers are not allowed to. On a general-purpose computer, I'd say a C compiler will usually be superior to hand-crafted assembly, and that's ignoring all the advantages of a higher-level language over assembly (readability, etc.)
    – isanae
    Sep 4 at 1:17

  • 4

    @isanae As I see it, if a compiler outsmarts a human assembly programmer, then he wasn't very smart to start with. Besides not seeing many advantages of high-level languages, I still haven't seen in ~40 years of programming a single compiler producing code superior to what a human programmer can do (not least due to the fact that all their tricks were devised by human programmers in the first place). Compilers may only bring an advantage compared to less-than-good programmers. But then again, those are the same people using less-than-fitting algorithms - something no compiler can put right again.
    – Raffzahn
    Sep 4 at 1:32

  • 6

    Ah yes, the famed Assembly Programmer. If you cannot see many advantages to high-level languages compared to assembly, then we have such a fundamental disagreement and such exceptionally different professional experience that I don't think we'll ever agree on who's the best optimizer ;)
    – isanae
    Sep 4 at 1:49

  • 8

    @isanae: You happen to have picked a really bad architecture to try to make your point with. The statement about compiler output beating a decent assembly programmer wasn't really true until out-of-order processors came out. On a fixed-order, fixed-execution-cycle processor, the assembly programmer tends to win.
    – Joshua
    Sep 4 at 1:58

  • 3

    @isanae: Current compilers (gcc and clang) are still terrible for ARM SIMD intrinsics (which is weird, because they're very good with x86 SIMD intrinsics). Writing manually-vectorized loops in asm by hand is definitely still recommended for ARM (but not x86 or PowerPC). I totally agree with you that writing whole programs in asm is not sensible these days, but knowing asm, and how your C or C++ will compile to asm, is useful to keep in mind. (I basically can't stop myself from thinking in terms of asm even if I wanted to, except by writing in bash or perl instead of C/C++.)
    – Peter Cordes
    Sep 4 at 8:52







