Can one isolate processes on an 8086?

I've read that modern OSes rely on hardware-enforced process isolation to prevent processes (and/or users) from clobbering each other's RAM. On Intel processors this hardware first appeared with the 80286 (protected mode), and Linux requires at least an 80386 to run.



Was there a way to run a memory-safe POSIX system on an 8086 or 80286?







asked Aug 8 at 20:05 by multics, edited Aug 9 at 14:54 by Toby Speight

  • With a pure software solution, the only safe option I can think of would be interpreted processes: not compiled executables, but interpretable source code would run instead. In such an OS/environment you could create memory protection, but the result would be really slow in comparison to a hardware implementation plus compiled executables. – Spektre, Aug 8 at 20:54

  • @Spektre - you can make a system safely run compiled code without hardware protection if you have a single compiler that's always used (so compile on installation, perhaps from bytecode), you trust that compiler not to have any bugs (!), and the language implemented by the compiler is memory safe. Recent examples of this approach include Microsoft Singularity (where the compiler was an extended version of C#) and a number of systems based on Java. – Jules, Aug 8 at 21:04

  • @Jules: And not yet really "retro", but much older than your examples: the IBM AS/400. The AS/400 actually does both things you cite: native OS/400 code is delivered in bytecode and compiled by the OS on first execution. POSIX code is run in PASE (Portable Application Solutions Environment, although the P is also often interpreted as "POSIX"), which provides a partially interpreted, partially cleverly implemented POSIX API. – Jörg W Mittag, Aug 8 at 23:25

8 Answers


Answer (23 votes), answered Aug 8 at 20:21 by Brian H, edited Aug 25 at 19:14:

The short answer would be "No", since there is no way to prevent a user process from accessing privileged address space (of the OS or other processes) without some form of memory protection. Usually, this memory protection has to be implemented in the processor's hardware, such as the 80286 protected mode you pointed out.



Some alternatives would be:



  1. A hardware implementation of memory protection outside of the 8086 microprocessor. This was done, for example, with the Altos Series.

  2. A strict software convention for user processes that would (barring coding bugs) ensure they only access parts of memory they specifically "owned".

Since POSIX builds on the older C standard for heap usage (i.e. malloc/free), it would be possible to have user processes on an 8086 cooperate, through these APIs, to guarantee they only access their own memory. Of course, bugs being a reality, this would not be as good as hardware memory protection. Systems such as the Amiga and Macintosh (using the Motorola 68000) that relied on this kind of software convention suffered from stability problems caused by memory-access bugs.
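
To make the idea concrete, here is a minimal C sketch of such a cooperative convention. The sys_alloc/sys_free kernel entry points and the checked_write helper are hypothetical names invented for this illustration (backed by malloc/free here so the sketch compiles); the point is only that every process voluntarily confines itself to the block it was given, so a single bug defeats the scheme.

    /* Sketch of the cooperative-convention approach described above.
     * sys_alloc/sys_free are hypothetical kernel entry points (stand-ins
     * using malloc/free here); nothing in hardware enforces any of this. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void *sys_alloc(size_t n) { return malloc(n); }  /* stand-in */
    static void  sys_free(void *p)   { free(p); }           /* stand-in */

    typedef struct {
        void  *base;   /* block the "kernel" handed us */
        size_t size;   /* its length                   */
    } region_t;

    /* Convention: every store goes through a helper that refuses to touch
     * memory outside the process's own region.  A buggy or hostile program
     * can simply bypass it, which is exactly the answer's caveat. */
    static int checked_write(region_t *r, size_t off, const void *src, size_t n)
    {
        if (off > r->size || n > r->size - off)
            return -1;                        /* would clobber foreign RAM */
        memcpy((char *)r->base + off, src, n);
        return 0;
    }

    int main(void)
    {
        region_t r = { sys_alloc(1024), 1024 };
        if (r.base == NULL)
            return 1;
        printf("in-range write: %d\n", checked_write(&r, 0, "hello", 6));  /*  0 */
        printf("out-of-range:   %d\n", checked_write(&r, 2000, "x", 1));   /* -1 */
        sys_free(r.base);
        return 0;
    }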






  • "A hardware implementation of memory protection outside of the 8086 microprocessor. I don't know of any systems that did this." ... I just went looking, on the basis that it seemed that it would be reasonably simple to implement such a system so figured somebody must have done it, and found this system (also described in Wikipedia here). – Jules, Aug 8 at 20:54

  • @Jules Excellent find! I added a link to the answer. – Brian H, Aug 8 at 21:05

  • You know you can just type in a memory-smashing x86 binary into vi right? – Joshua, Aug 9 at 4:04

  • Oh. I misread the answer as implying you could have actual security by restricting what binaries can be run. – Joshua, Aug 9 at 13:27

  • The Wikipedia page about Xenix also mentions Seattle Computer Products and Intel's "System 86", which seems to imply the latter also had an MMU (Intel did sell special boards with an MMU). – DarkDust, Aug 10 at 7:59

Answer (15 votes), answered Aug 8 at 20:18 by Jules, edited Aug 8 at 21:00:

A computer using an 8086 can provide memory protection by using an external memory management unit. This would be a chip or a circuit that sits between the CPU and the memory and provides an additional layer of memory translation, sends interrupts if out-of-range memory is accessed, and so on. I don't know if this was commonly done on the 8086 (I've never seen such a system described, but then I've not looked for one either), but it was very common for workstations based on early revisions of the 68000.
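
As a rough illustration of what such an external translation stage does, here is a C model of a simple base-and-limit MMU. The register layout, sizes, and fault signalling are invented for the example (real boards differed in detail); it only shows the principle of relocating every bus address and faulting on out-of-range accesses.

    /* Software model of an external base-and-limit MMU sitting between an
     * 8086 and its RAM.  All names and the fault convention are invented
     * for illustration; real hardware would raise an interrupt instead. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint32_t base;    /* physical address added to every CPU address */
        uint32_t limit;   /* highest offset this task is allowed to use  */
    } mmu_mapping;

    /* Translate a 20-bit address coming off the 8086 bus.  Returns the
     * physical address, or -1 to stand in for a protection fault. */
    static int64_t mmu_translate(const mmu_mapping *m, uint32_t cpu_addr)
    {
        if (cpu_addr > m->limit)
            return -1;                     /* out of range: fault the task */
        return (int64_t)m->base + cpu_addr;
    }

    int main(void)
    {
        mmu_mapping task = { 0x20000u, 0xFFFFu };     /* one 64 KiB window */
        int64_t a = mmu_translate(&task, 0x0010u);    /* inside the window */
        int64_t b = mmu_translate(&task, 0x12345u);   /* outside: "faults" */
        printf("a = %lld (0x%llX)\n", (long long)a, (unsigned long long)a);
        printf("b = %lld\n", (long long)b);
        return 0;
    }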



(Edit: at least some systems were produced that used this approach, although as @RossRidge points out in the comments it was a little easier on the 68000 due to specific support designed into the processor, which is probably why it was more common there.)



For an 80286, the standard 286 protected mode provides all the isolation that you'd need to run a POSIX compliant operating system with memory safety.



(It wouldn't be a very good POSIX system, because memory allocations would need to be limited to 64K to fit inside segment limits, but POSIX allows for sizes to be limited as low as _POSIX_SSIZE_MAX, which is defined as 32767 bytes, so this is fine.)



There have been a number of Unix-like operating systems that ran on the 8086 and 80286, including Minix, which is usually considered the forerunner of Linux (it is the system Torvalds used when he developed the first versions of Linux, and it influenced the early development quite a bit), and Xenix. There is also a port of Linux to 16-bit systems called ELKS, although I don't know whether it supports memory protection or not (looking at the source suggests it probably does, but I've never really done anything with it so can't be sure).






  • The 68000 had one feature that made MMUs work that the 8086 didn't have: separate user and supervisor modes. There's no way for an MMU on an 8086 to distinguish between memory accesses of different processes. I don't know if POSIX actually requires process isolation. MINIX ran on 8086s. – Ross Ridge, Aug 8 at 20:29

  • @RossRidge - an 8086 MMU can easily distinguish the location of running code (the S1 and S0 lines are both low during code fetch cycles), so could identify privileged code from that. It could also use the same approach to prevent arbitrary jumps into privileged code. It'd be quite tricky, I agree, but not beyond possibility. – Jules, Aug 8 at 20:32

  • (in fact, you'd probably set a specific location in memory that when fetched as executable code triggers a switch to supervisor mode; that'd be fairly simple to implement; then you'd vector system calls and interrupts through that point to make sure they ran in supervisor mode. Dammit... going to have to build an 8086 system with memory protection now. As if I don't have enough projects!) – Jules, Aug 8 at 20:37

  • @Jules: I think the best approach would probably be to make the bottom chunk of memory read-only in user tasks, and have any interrupt that occurs while in user state trigger an NMI for some number of cycles and then switch to supervisor state; the NMI handler would run from the initially-read-only chunk of storage at the bottom of the address space while it starts saving registers to a nearby chunk of storage that belongs to the task, but which it would be allowed to overwrite. If a user process leaves SS/SP in a garbage state, switching to the OS and back may execute code... – supercat, Aug 8 at 21:49

  • @PeterI: A solution for that was to have a system with two 68000 processors, only one of which was allowed to run at any given time. A page fault would stall one CPU and wake up the other, which could then process it. Perhaps a similar approach could work using the 8086. Probably easier than any other approach for allowing tasks to have some of their memory swapped out to disk at any given time and having it "transparently" get swapped back in, but a system may be useful even without such abilities. – supercat, Aug 9 at 18:24

Answer (6 votes):

Technically yes, because the 8086 instruction set is Turing-complete. Here is Linux running very slowly on an ARMv5 emulation on an 8-bit RISC microcontroller (also mentioned here). But if you want process isolation, I would look for other, less extreme solutions first!






  • Ah, the Turing tarpit argument. Nice. :) – Jules, Aug 8 at 22:25

  • @Jules For completeness if nothing else. – traal, Aug 8 at 23:36

  • No, since the 8086 is a finite-state machine, it is not Turing-complete. Strictly speaking, of course. ;-) – user49915, Aug 9 at 0:09

  • The argument "it's Turing-complete, so everything can be done" is not even true: Turing-completeness does not even say this. A Turing-complete programming language, for example, is not necessarily able to access files on the hard disk (counter-example: Brainfuck). – Martin Rosenau, Aug 9 at 5:25

  • @MartinRosenau That's a fundamental misunderstanding of Turing completeness. For a system to be Turing complete, it only needs to be able to compute anything. It doesn't need any other capabilities. – duskwuff, Aug 9 at 17:55

Answer (4 votes):

I would say no. Even if external memory protection hardware is added, the processor lacks the concept of a user versus supervisor (privileged) state. As a result, there's no way to stop a program from disabling interrupts or accessing I/O ports, such as those of the MMU.

Now, if we set aside the need to isolate a malicious program, the MMU might be enough. The only problem with that thinking is that buggy programs can be pretty malicious even if the author is not.

So it would seem that we are back to no!

If you want process isolation, you need something more modern and a well-written operating system kernel.






  • An 8086 MMU would have to track whether code executing was privileged or not by itself, but that would likely not be hard. It could prevent access to IO ports simply by not allowing the processor to use them in user mode; similarly, disabling interrupts isn't actually a huge issue: it could use an NMI instead of a regular interrupt; privileged code could disable and enable the use of NMI by communicating with the MMU rather than using the internal processor interrupt controls. Messy, but workable. – Jules, Aug 8 at 21:08

  • To clarify, the 8086 does not have a user mode, so there's no way for the MMU to know what privilege level applies. Further, the vast majority of peripherals will simply not work using the NMI. Plus there are times (like critical sections) where it is essential to mask interrupts. The best answer I have seen would be to run the code in some sort of virtual machine which would enforce the needed rules. At that point I don't see the point of using an 8086. One might as well run the emulator in a browser using JavaScript. It would run a lot quicker! – Peter Camilleri, Aug 9 at 0:33

  • there may not be a processor-supported user/supervisor mode switch, but there's no reason an MMU can't implement one itself, switching to supervisor mode (for example) either on a hardware interrupt or when execution reaches a certain point in the code (which can be easily determined because the 8086 has status lines that indicate whether a read operation is for data or code). Masking interrupts is not the only way to provide critical sections, either (e.g. the LOCK# signal could be used to provide atomic operations by causing the MMU to forcibly delay any pending interrupt) – Jules, Aug 9 at 7:22

  • I'm also not sure I understand what you mean by "the vast majority of peripherals will simply not work using the NMI" -- I don't see how the peripheral would care (or even identify) which of the two possible methods is used to identify to the processor that attention is required. An NMI and an external register used to indicate which IRQ line caused the interrupt is entirely equivalent, as far as I can see, to the standard INTR line behaviour. Other than not signalling the INTA line, and only providing a single vector, the behaviour of the NMI line on the 8086 is exactly equivalent to INTR... – Jules, Aug 9 at 7:31

  • ... and both of those can be trivially emulated by a simple piece of hardware and a small addition to the operating system code. – Jules, Aug 9 at 7:34

Answer (3 votes):

The 8086, 80186, 8088 and 80188 all lacked any real memory protection, although switching segment registers would protect against accidental overwriting. The 80286 did support protection, so a POSIX OS with hardware-enforced memory protection could be written.

The NEC V20 and V30 were 8086 clones with an 8080 emulation mode. Since 8080 mode can't address more than 64K, one could presumably write a POSIX OS where the kernel ran in 8086 mode while userspace ran in 8080 mode, switching between them to make system calls. Presumably it would still be possible to address the first 256 I/O ports directly, which would mean a user process could talk directly to some fairly important hardware if the V20/V30 were embedded in a standard IBM PC clone.

Answer (2 votes):

    The 8086 is a 16-bit processor. One possibility for implementing some form of process isolation is to use the processor's segment registers (CS, DS, SS, ES). These allow a process's stack (SS), heap (DS, ES), and code (CS) to reside in specific 64kB areas of a 1MB address space. This works by left-shifting the 16-bit segment register by four bits and adding to that the 16-bit stack pointer (SS << 4 + SP), instruction pointer (CS << 4 + IP), or data offset (e.g. DS << 4 + SI), to obtain the 20 bits of the physical address.
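
    A small C sketch of that address computation (the segment and offset values below are arbitrary examples, not taken from any particular system):

        /* The 8086 physical-address calculation: (segment << 4) + offset,
         * giving a 20-bit address into the 1 MB space. */
        #include <stdint.h>
        #include <stdio.h>

        static uint32_t phys_addr(uint16_t segment, uint16_t offset)
        {
            return ((uint32_t)segment << 4) + (uint32_t)offset;
        }

        int main(void)
        {
            /* A process whose segment registers are all set to 0x2000 can
             * only reach physical 0x20000..0x2FFFF, as long as it follows
             * the convention of never changing those registers. */
            printf("%05X\n", (unsigned)phys_addr(0x2000, 0x0000));  /* 20000 */
            printf("%05X\n", (unsigned)phys_addr(0x2000, 0xFFFF));  /* 2FFFF */
            return 0;
        }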



    Thus, through a suitable segment register setup one can isolate a process to at most 64kB, provided the process follows the convention of not altering the segment registers. For the requirements of C programs, where the heap and the stack must be addressable through the same 16-bit pointers, this convention restricts them to 64kB of data and 64kB of code. Although this might sound overly restrictive, remember that early Unix ran on a PDP-11 with 64kB of RAM. Consequently, providing a 1MB memory for multiple processes with up to 64kB of code and 64kB of data is more than generous.



    Furthermore, by manipulating segment registers and copying memory regions, a supervisor program can dynamically readjust memory regions as processes are created and destroyed in a way that's transparent to running processes. Early versions of Andrew Tanenbaum's MINIX operating system relied on some of these ideas.






    • In order for this to provide real protection the code must not be allowed to change segment registers, disable interrupts, or invoke IN and OUT instructions. Those restrictions cannot be enforced in hardware, but validation of the user code before transferring control would be possible. Similar just-in-time code validation is used in other places such as Java byte code, some versions of VMware, and the NaCl sandbox. – kasperd, Aug 9 at 23:05

    • Minor nitpick: C does not have a requirement that the heap and stack be addressable through the same pointers. An implementation that chooses to make pointers only 16 bits wide must have coincident stack and data segments, because there is no way to encode the required segment and offset in 16 bits, but you don't have to have 16-bit pointers on an 8086. – JeremyP, Aug 29 at 11:30

Answer (1 vote):

    Yes, but it's not easy. There are at least two possible approaches:

    Option 1: software virtualization

    This one is the canonical/classical solution. Essentially, you write an emulator/interpreter for some sort of virtual machine that does have kernel/user privilege modes and memory protection. You need to ensure (or assume) your interpreter has no VM-escape bugs.

    Option 2: validating programs

    Write the program loader not to accept arbitrary 8086 machine code, but instead only a highly structured subset with enforcement of memory safety. This requires designing such a subset, and again you need to ensure or assume your implementation doesn't have bugs that break the necessary invariants.
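
    As a toy illustration of the validating-loader idea (not any real system's loader), the sketch below scans a binary image for a few obviously dangerous 8086 opcodes: segment-register loads, IN/OUT, and CLI. A real validator would need a full instruction decoder, would have to handle prefixes and immediates, and would have to prove that control flow stays inside the validated region; this only shows the flavour of the check.

        /* Toy validating loader for a restricted 8086 code subset.  It only
         * rejects a few dangerous one-byte opcodes; it cannot even tell
         * opcodes from immediate data, which is one reason real systems
         * accept a much more tightly structured encoding. */
        #include <stdint.h>
        #include <stddef.h>
        #include <stdio.h>

        static int is_forbidden(uint8_t op)
        {
            switch (op) {
            case 0xFA:                        /* CLI: disable interrupts   */
            case 0xE4: case 0xE5:             /* IN  AL/AX, imm8           */
            case 0xE6: case 0xE7:             /* OUT imm8, AL/AX           */
            case 0xEC: case 0xED:             /* IN  AL/AX, DX             */
            case 0xEE: case 0xEF:             /* OUT DX, AL/AX             */
            case 0x8E:                        /* MOV segment reg, r/m16    */
            case 0x07: case 0x17: case 0x1F:  /* POP ES / POP SS / POP DS  */
                return 1;
            default:
                return 0;
            }
        }

        /* Returns 0 if the image contains none of the forbidden bytes. */
        static int validate_image(const uint8_t *code, size_t len)
        {
            for (size_t i = 0; i < len; i++)
                if (is_forbidden(code[i]))
                    return -1;
            return 0;
        }

        int main(void)
        {
            const uint8_t ok[]  = { 0xB8, 0x34, 0x12, 0xC3 };  /* MOV AX,1234h; RET */
            const uint8_t bad[] = { 0xFA, 0xC3 };              /* CLI; RET          */
            printf("%d %d\n", validate_image(ok, sizeof ok),
                              validate_image(bad, sizeof bad)); /* prints "0 -1"    */
            return 0;
        }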



    Either way, to do POSIX or even something POSIX-like you're going to need lots of supplemental memory. There's no way to implement POSIX in the main memory size supported by the 8086, and both of these options will further increase the required memory considerably (and decrease speed).






    • "There's no way to implement POSIX in the main memory size supported by the 8086" ... this is almost certainly incorrect. Xenix ran on the 8086 and while it was not POSIX compliant it did include a large majority of POSIX's most complex functions. The "ELKS" port of Linux (see my answer above) also includes a large proportion of POSIX support, and reportedly runs in < 256KiB, including X11 support. – Jules, Aug 9 at 20:21

    • ... furthermore, POSIX is a formalisation and extension of the System V Interface Definition, which was based on System V Release 2. SVR2's target machine was the DEC VAX 11/780, and I believe it ran on the minimum configuration of that system, which is to say in 128KiB RAM. AT&T's 3B1 minicomputer (designed to run SVR3, the release of Unix that was most recent at the time POSIX was written, I believe) was available in a 512KiB configuration. I see nothing that suggests 1MiB of RAM isn't plenty for supporting full POSIX compliance (not to mention that bank switching could easily extend that). – Jules, Aug 9 at 21:58

    • @Jules: I don't have the numbers in front of me right now, but having implemented much of it myself, I've done some casual estimates on lower bounds for the possible size, and it doesn't look good. On top of that (which didn't consider 8086 limitations), with POSIX requiring at least 32-bit int, just the arithmetic and argument-passing overhead is going to blow up size a good bit. And with no FPU, the math library and softfloat will be quite large. – R.., Aug 10 at 2:36

    • 32-bit int is only required by POSIX.1-2001, I believe. Earlier versions allowed 16-bit ints. I have here a copy of an 8086 "math.h" library including software emulation which is less than 20KiB in size. Admittedly, it doesn't contain all of the functions in the POSIX libm (it only has double versions and not float, and is missing a few of the less common functions). The full version would, I suspect, fit in around 50KiB, which isn't exactly a huge problem. – Jules, Aug 10 at 7:40

Answer (0 votes):

    The Minix operating system implemented virtual memory management on the 8086 in software. The Minix source code is available.






    share|improve this answer




















      Your Answer







      StackExchange.ready(function()
      var channelOptions =
      tags: "".split(" "),
      id: "648"
      ;
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function()
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled)
      StackExchange.using("snippets", function()
      createEditor();
      );

      else
      createEditor();

      );

      function createEditor()
      StackExchange.prepareEditor(
      heartbeatType: 'answer',
      convertImagesToLinks: false,
      noModals: false,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: null,
      bindNavPrevention: true,
      postfix: "",
      noCode: true, onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      );



      );













       

      draft saved


      draft discarded


















      StackExchange.ready(
      function ()
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fretrocomputing.stackexchange.com%2fquestions%2f7222%2fcan-one-isolate-processes-on-a-8086%23new-answer', 'question_page');

      );

      Post as a guest






























      8 Answers
      8






      active

      oldest

      votes








      8 Answers
      8






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes








      up vote
      23
      down vote













      The short answer would be "No", since there is no way to prevent a user process from accessing privileged address space (of the OS or other processes) without some form of memory protection. Usually, this memory protection has to be implemented in hardware of the processor, such as you pointed out with 80286 protected mode.



      Some alternatives would be:



      1. A hardware implementation of memory protection outside of the 8086 microprocessor. This was done, for example, with the Altos Series.

      2. A strict software convention for user processes that would (barring coding bugs) ensure they only access parts of memory they specifically "owned".

      Since POSIX is built on the older C-standard for heap usage (i.e. malloc/free), it would be possible to have user processes that cooperate on an 8086, through these API's, to guarantee they only access their own memory. Of course, bugs being a reality, this would not be as good as hardware protection of memory. Systems such as the Amiga and Macintosh (using the Motorola 68000) that used this strategy of software convention suffered with stability problems created by memory access bugs.






      share|improve this answer


















      • 1




        "A hardware implementation of memory protection outside of the 8086 microprocessor. I don't know of any systems that did this." ... I just went looking, on the basis that it seemed that it would be reasonably simple to implement such a system so figured somebody must have done it, and found this system (also described in Wikipedia here).
        – Jules
        Aug 8 at 20:54










      • @Jules Excellent find! I added a link to the answer.
        – Brian H
        Aug 8 at 21:05










      • You know you can just type in a memory-smashing x86 binary into vi right?
        – Joshua
        Aug 9 at 4:04






      • 1




        Oh. I misread the answer as implying you could have actual security by restricting what binaries can be run.
        – Joshua
        Aug 9 at 13:27






      • 1




        The Wikipedia page about Xenix also mentions Seattle Computer Products and Intel's "System 86" which seems to imply the later also had an MMU (Intel did sell special boards with MMU).
        – DarkDust
        Aug 10 at 7:59














      up vote
      23
      down vote













      The short answer would be "No", since there is no way to prevent a user process from accessing privileged address space (of the OS or other processes) without some form of memory protection. Usually, this memory protection has to be implemented in hardware of the processor, such as you pointed out with 80286 protected mode.



      Some alternatives would be:



      1. A hardware implementation of memory protection outside of the 8086 microprocessor. This was done, for example, with the Altos Series.

      2. A strict software convention for user processes that would (barring coding bugs) ensure they only access parts of memory they specifically "owned".

      Since POSIX is built on the older C-standard for heap usage (i.e. malloc/free), it would be possible to have user processes that cooperate on an 8086, through these API's, to guarantee they only access their own memory. Of course, bugs being a reality, this would not be as good as hardware protection of memory. Systems such as the Amiga and Macintosh (using the Motorola 68000) that used this strategy of software convention suffered with stability problems created by memory access bugs.






      share|improve this answer


















      • 1




        "A hardware implementation of memory protection outside of the 8086 microprocessor. I don't know of any systems that did this." ... I just went looking, on the basis that it seemed that it would be reasonably simple to implement such a system so figured somebody must have done it, and found this system (also described in Wikipedia here).
        – Jules
        Aug 8 at 20:54










      • @Jules Excellent find! I added a link to the answer.
        – Brian H
        Aug 8 at 21:05










      • You know you can just type in a memory-smashing x86 binary into vi right?
        – Joshua
        Aug 9 at 4:04






      • 1




        Oh. I misread the answer as implying you could have actual security by restricting what binaries can be run.
        – Joshua
        Aug 9 at 13:27






      • 1




        The Wikipedia page about Xenix also mentions Seattle Computer Products and Intel's "System 86" which seems to imply the later also had an MMU (Intel did sell special boards with MMU).
        – DarkDust
        Aug 10 at 7:59












      up vote
      23
      down vote










      up vote
      23
      down vote









      The short answer would be "No", since there is no way to prevent a user process from accessing privileged address space (of the OS or other processes) without some form of memory protection. Usually, this memory protection has to be implemented in hardware of the processor, such as you pointed out with 80286 protected mode.



      Some alternatives would be:



      1. A hardware implementation of memory protection outside of the 8086 microprocessor. This was done, for example, with the Altos Series.

      2. A strict software convention for user processes that would (barring coding bugs) ensure they only access parts of memory they specifically "owned".

      Since POSIX is built on the older C-standard for heap usage (i.e. malloc/free), it would be possible to have user processes that cooperate on an 8086, through these API's, to guarantee they only access their own memory. Of course, bugs being a reality, this would not be as good as hardware protection of memory. Systems such as the Amiga and Macintosh (using the Motorola 68000) that used this strategy of software convention suffered with stability problems created by memory access bugs.






      share|improve this answer














      The short answer would be "No", since there is no way to prevent a user process from accessing privileged address space (of the OS or other processes) without some form of memory protection. Usually, this memory protection has to be implemented in hardware of the processor, such as you pointed out with 80286 protected mode.



      Some alternatives would be:



      1. A hardware implementation of memory protection outside of the 8086 microprocessor. This was done, for example, with the Altos Series.

      2. A strict software convention for user processes that would (barring coding bugs) ensure they only access parts of memory they specifically "owned".

      Since POSIX is built on the older C-standard for heap usage (i.e. malloc/free), it would be possible to have user processes that cooperate on an 8086, through these API's, to guarantee they only access their own memory. Of course, bugs being a reality, this would not be as good as hardware protection of memory. Systems such as the Amiga and Macintosh (using the Motorola 68000) that used this strategy of software convention suffered with stability problems created by memory access bugs.







      share|improve this answer














      share|improve this answer



      share|improve this answer








      edited Aug 25 at 19:14

























      answered Aug 8 at 20:21









      Brian H

      13.9k49121




      13.9k49121







      • 1




        "A hardware implementation of memory protection outside of the 8086 microprocessor. I don't know of any systems that did this." ... I just went looking, on the basis that it seemed that it would be reasonably simple to implement such a system so figured somebody must have done it, and found this system (also described in Wikipedia here).
        – Jules
        Aug 8 at 20:54










      • @Jules Excellent find! I added a link to the answer.
        – Brian H
        Aug 8 at 21:05










      • You know you can just type in a memory-smashing x86 binary into vi right?
        – Joshua
        Aug 9 at 4:04






      • 1




        Oh. I misread the answer as implying you could have actual security by restricting what binaries can be run.
        – Joshua
        Aug 9 at 13:27






      • 1




        The Wikipedia page about Xenix also mentions Seattle Computer Products and Intel's "System 86" which seems to imply the later also had an MMU (Intel did sell special boards with MMU).
        – DarkDust
        Aug 10 at 7:59












      • 1




        "A hardware implementation of memory protection outside of the 8086 microprocessor. I don't know of any systems that did this." ... I just went looking, on the basis that it seemed that it would be reasonably simple to implement such a system so figured somebody must have done it, and found this system (also described in Wikipedia here).
        – Jules
        Aug 8 at 20:54










      • @Jules Excellent find! I added a link to the answer.
        – Brian H
        Aug 8 at 21:05










      • You know you can just type in a memory-smashing x86 binary into vi right?
        – Joshua
        Aug 9 at 4:04






      • 1




        Oh. I misread the answer as implying you could have actual security by restricting what binaries can be run.
        – Joshua
        Aug 9 at 13:27






      • 1




        The Wikipedia page about Xenix also mentions Seattle Computer Products and Intel's "System 86" which seems to imply the later also had an MMU (Intel did sell special boards with MMU).
        – DarkDust
        Aug 10 at 7:59







      1




      1




      "A hardware implementation of memory protection outside of the 8086 microprocessor. I don't know of any systems that did this." ... I just went looking, on the basis that it seemed that it would be reasonably simple to implement such a system so figured somebody must have done it, and found this system (also described in Wikipedia here).
      – Jules
      Aug 8 at 20:54




      "A hardware implementation of memory protection outside of the 8086 microprocessor. I don't know of any systems that did this." ... I just went looking, on the basis that it seemed that it would be reasonably simple to implement such a system so figured somebody must have done it, and found this system (also described in Wikipedia here).
      – Jules
      Aug 8 at 20:54












      @Jules Excellent find! I added a link to the answer.
      – Brian H
      Aug 8 at 21:05




      @Jules Excellent find! I added a link to the answer.
      – Brian H
      Aug 8 at 21:05












      You know you can just type in a memory-smashing x86 binary into vi right?
      – Joshua
      Aug 9 at 4:04




      You know you can just type in a memory-smashing x86 binary into vi right?
      – Joshua
      Aug 9 at 4:04




      1




      1




      Oh. I misread the answer as implying you could have actual security by restricting what binaries can be run.
      – Joshua
      Aug 9 at 13:27




      Oh. I misread the answer as implying you could have actual security by restricting what binaries can be run.
      – Joshua
      Aug 9 at 13:27




      1




      1




      The Wikipedia page about Xenix also mentions Seattle Computer Products and Intel's "System 86" which seems to imply the later also had an MMU (Intel did sell special boards with MMU).
      – DarkDust
      Aug 10 at 7:59




      The Wikipedia page about Xenix also mentions Seattle Computer Products and Intel's "System 86" which seems to imply the later also had an MMU (Intel did sell special boards with MMU).
      – DarkDust
      Aug 10 at 7:59










      up vote
      15
      down vote













      A computer using an 8086 can provide memory protection by using an external memory management unit. This would be a chip or a circuit that sits between the CPU and the memory and provides an additional layer of memory translation, sends interrupts if out-of-range memory is accessed, and so on. I don't know if this was commonly done on the 8086 (I've never seen such a system described, but then I've not looked for one either), but was very common for workstations based on early revisions of the 68000.



      (Edit: at least some systems were produced that used this approach, although as @RossRidge points out in the comments it was a little easier on the 68000 due to specific support designed into the processor, which is probably why it was more common there.)



      For an 80286, the standard 286 protected mode provides all the isolation that you'd need to run a POSIX compliant operating system with memory safety.



      (It wouldn't be a very good POSIX system, because memory allocations would need to be limited to 64K to fit inside segment limits, but POSIX allows for sizes to be limited as low as _POSIX_SSIZE_MAX, which is defined as 32KiB, so this is fine)



      There have been a number of Unix-like operating system that run on the 8086 and 80286, including Minix, which is usually considered the forerunner of Linux (it is the system that Torvalds used at the time he developed the first versions of Linux and influenced the early development quite a bit) and Xenix. There is also a port of Linux to 16-bit systems called ELKS, although I don't know whether it supports memory protection or not (looking at the source suggests it probably does, but I've never really done anything with it so can't be sure).






      share|improve this answer


















      • 2




        The 68000 had one feature that made MMUs work that the 8086 didn't have, separate user and supervisor modes. There's no way for an MMU on a 8086 to distinguish between memory accesses of different processes. I don't know if POSIX actually requires process isolation. MINIX ran on 8086s.
        – Ross Ridge
        Aug 8 at 20:29







      • 1




        @RossRidge - an 8086 MMU can easily distinguish the location of running code (the S1 and S0 lines are both low during code fetch cycles), so could identify privileged code from that. It could also use the same approach to prevent arbitrary jumps into privileged code. It'd be quite tricky, I agree, but not beyond possibility.
        – Jules
        Aug 8 at 20:32






      • 1




        (in fact, you'd probably set a specific location in memory that when fetched as executable code triggers a switch to supervisor mode; that'd be fairly simple to implement; then you'd vector system calls and interrupts through that point to make sure they ran as supervisor mode. Dammit... going to have build an 8086 system with memory protection now. As if I don't have enough projects!).
        – Jules
        Aug 8 at 20:37







      • 1




        @Jules: I think the best approach would probably be to make the bottom chunk of memory read-only in user tasks, and have any interrupt that occurs while in user state trigger an NMI for some number of cycles and then switch to supervisor state; the NMI handler would run from the initially-read-only chunk of storage at the bottom of the address space while it starts saving registers to a nearby chunk of storage that belongs to the task, but which it would be allowed to overwrite. If a user process leaves SS/SP in a garbage state, switching to the OS and back may execute code...
        – supercat
        Aug 8 at 21:49






      • 2




        @PeterI: A solution for that was to have a system with two 68000 processors, only one of which was allowed to run at any given time. A page fault would stall one CPU and wake up the other, which could then process it. Perhaps a similar approach could work using the 8086. Probably easier than any other approach for allowing tasks to have some of their memory swapped to disk out at any given time and having it "transparently" get swapped back in, but a system may be useful even without such abilities.
        – supercat
        Aug 9 at 18:24














      up vote
      15
      down vote













      A computer using an 8086 can provide memory protection by using an external memory management unit. This would be a chip or a circuit that sits between the CPU and the memory and provides an additional layer of memory translation, sends interrupts if out-of-range memory is accessed, and so on. I don't know if this was commonly done on the 8086 (I've never seen such a system described, but then I've not looked for one either), but was very common for workstations based on early revisions of the 68000.



      (Edit: at least some systems were produced that used this approach, although as @RossRidge points out in the comments it was a little easier on the 68000 due to specific support designed into the processor, which is probably why it was more common there.)



      For an 80286, the standard 286 protected mode provides all the isolation that you'd need to run a POSIX compliant operating system with memory safety.



      (It wouldn't be a very good POSIX system, because memory allocations would need to be limited to 64K to fit inside segment limits, but POSIX allows for sizes to be limited as low as _POSIX_SSIZE_MAX, which is defined as 32KiB, so this is fine)



      There have been a number of Unix-like operating system that run on the 8086 and 80286, including Minix, which is usually considered the forerunner of Linux (it is the system that Torvalds used at the time he developed the first versions of Linux and influenced the early development quite a bit) and Xenix. There is also a port of Linux to 16-bit systems called ELKS, although I don't know whether it supports memory protection or not (looking at the source suggests it probably does, but I've never really done anything with it so can't be sure).






      share|improve this answer


















      • 2




        The 68000 had one feature that made MMUs work that the 8086 didn't have, separate user and supervisor modes. There's no way for an MMU on a 8086 to distinguish between memory accesses of different processes. I don't know if POSIX actually requires process isolation. MINIX ran on 8086s.
        – Ross Ridge
        Aug 8 at 20:29







      • 1




        @RossRidge - an 8086 MMU can easily distinguish the location of running code (the S1 and S0 lines are both low during code fetch cycles), so could identify privileged code from that. It could also use the same approach to prevent arbitrary jumps into privileged code. It'd be quite tricky, I agree, but not beyond possibility.
        – Jules
        Aug 8 at 20:32






      • 1




        (in fact, you'd probably set a specific location in memory that when fetched as executable code triggers a switch to supervisor mode; that'd be fairly simple to implement; then you'd vector system calls and interrupts through that point to make sure they ran as supervisor mode. Dammit... going to have build an 8086 system with memory protection now. As if I don't have enough projects!).
        – Jules
        Aug 8 at 20:37







      • 1




        @Jules: I think the best approach would probably be to make the bottom chunk of memory read-only in user tasks, and have any interrupt that occurs while in user state trigger an NMI for some number of cycles and then switch to supervisor state; the NMI handler would run from the initially-read-only chunk of storage at the bottom of the address space while it starts saving registers to a nearby chunk of storage that belongs to the task, but which it would be allowed to overwrite. If a user process leaves SS/SP in a garbage state, switching to the OS and back may execute code...
        – supercat
        Aug 8 at 21:49






      • 2




        @PeterI: A solution for that was to have a system with two 68000 processors, only one of which was allowed to run at any given time. A page fault would stall one CPU and wake up the other, which could then process it. Perhaps a similar approach could work using the 8086. Probably easier than any other approach for allowing tasks to have some of their memory swapped to disk out at any given time and having it "transparently" get swapped back in, but a system may be useful even without such abilities.
        – supercat
        Aug 9 at 18:24












      up vote
      15
      down vote










      up vote
      15
      down vote









      A computer using an 8086 can provide memory protection by using an external memory management unit. This would be a chip or a circuit that sits between the CPU and the memory and provides an additional layer of memory translation, sends interrupts if out-of-range memory is accessed, and so on. I don't know if this was commonly done on the 8086 (I've never seen such a system described, but then I've not looked for one either), but was very common for workstations based on early revisions of the 68000.



      (Edit: at least some systems were produced that used this approach, although as @RossRidge points out in the comments it was a little easier on the 68000 due to specific support designed into the processor, which is probably why it was more common there.)



      For an 80286, the standard 286 protected mode provides all the isolation that you'd need to run a POSIX compliant operating system with memory safety.



      (It wouldn't be a very good POSIX system, because memory allocations would need to be limited to 64K to fit inside segment limits, but POSIX allows for sizes to be limited as low as _POSIX_SSIZE_MAX, which is defined as 32KiB, so this is fine)



      There have been a number of Unix-like operating system that run on the 8086 and 80286, including Minix, which is usually considered the forerunner of Linux (it is the system that Torvalds used at the time he developed the first versions of Linux and influenced the early development quite a bit) and Xenix. There is also a port of Linux to 16-bit systems called ELKS, although I don't know whether it supports memory protection or not (looking at the source suggests it probably does, but I've never really done anything with it so can't be sure).






      share|improve this answer














      A computer using an 8086 can provide memory protection by using an external memory management unit. This would be a chip or a circuit that sits between the CPU and the memory and provides an additional layer of memory translation, sends interrupts if out-of-range memory is accessed, and so on. I don't know if this was commonly done on the 8086 (I've never seen such a system described, but then I've not looked for one either), but was very common for workstations based on early revisions of the 68000.



      (Edit: at least some systems were produced that used this approach, although as @RossRidge points out in the comments it was a little easier on the 68000 due to specific support designed into the processor, which is probably why it was more common there.)



      For an 80286, the standard 286 protected mode provides all the isolation that you'd need to run a POSIX compliant operating system with memory safety.



      (It wouldn't be a very good POSIX system, because memory allocations would need to be limited to 64K to fit inside segment limits, but POSIX allows for sizes to be limited as low as _POSIX_SSIZE_MAX, which is defined as 32KiB, so this is fine)



      There have been a number of Unix-like operating system that run on the 8086 and 80286, including Minix, which is usually considered the forerunner of Linux (it is the system that Torvalds used at the time he developed the first versions of Linux and influenced the early development quite a bit) and Xenix. There is also a port of Linux to 16-bit systems called ELKS, although I don't know whether it supports memory protection or not (looking at the source suggests it probably does, but I've never really done anything with it so can't be sure).







      share|improve this answer














      share|improve this answer



      share|improve this answer








      edited Aug 8 at 21:00

























      answered Aug 8 at 20:18









      Jules

      6,90111836




      6,90111836







      • 2




        The 68000 had one feature that made MMUs work that the 8086 didn't have, separate user and supervisor modes. There's no way for an MMU on a 8086 to distinguish between memory accesses of different processes. I don't know if POSIX actually requires process isolation. MINIX ran on 8086s.
        – Ross Ridge
        Aug 8 at 20:29







      • 1




        @RossRidge - an 8086 MMU can easily distinguish the location of running code (the S1 and S0 lines are both low during code fetch cycles), so could identify privileged code from that. It could also use the same approach to prevent arbitrary jumps into privileged code. It'd be quite tricky, I agree, but not beyond possibility.
        – Jules
        Aug 8 at 20:32






      • 1




        (in fact, you'd probably set a specific location in memory that when fetched as executable code triggers a switch to supervisor mode; that'd be fairly simple to implement; then you'd vector system calls and interrupts through that point to make sure they ran as supervisor mode. Dammit... going to have build an 8086 system with memory protection now. As if I don't have enough projects!).
        – Jules
        Aug 8 at 20:37







      • 1




        @Jules: I think the best approach would probably be to make the bottom chunk of memory read-only in user tasks, and have any interrupt that occurs while in user state trigger an NMI for some number of cycles and then switch to supervisor state; the NMI handler would run from the initially-read-only chunk of storage at the bottom of the address space while it starts saving registers to a nearby chunk of storage that belongs to the task, but which it would be allowed to overwrite. If a user process leaves SS/SP in a garbage state, switching to the OS and back may execute code...
        – supercat
        Aug 8 at 21:49






      • 2




        @PeterI: A solution for that was to have a system with two 68000 processors, only one of which was allowed to run at any given time. A page fault would stall one CPU and wake up the other, which could then process it. Perhaps a similar approach could work using the 8086. Probably easier than any other approach for allowing tasks to have some of their memory swapped to disk out at any given time and having it "transparently" get swapped back in, but a system may be useful even without such abilities.
        – supercat
        Aug 9 at 18:24












      Technically yes because the 8086 instruction set is Turing-complete. Here is Linux running very slowly on an ARMv5 emulation on an 8-bit RISC microcontroller (also mentioned here). But if you want process isolation, I would look for other, less extreme solutions first!






      – traal, answered Aug 8 at 21:54

      • Ah, the Turing tarpit argument. Nice. :)
        – Jules, Aug 8 at 22:25










      • @Jules For completeness if nothing else.
        – traal, Aug 8 at 23:36






      • No, since the 8086 is a finite-state machine, it is not Turing-complete. Strictly speaking, of course. ;-)
        – user49915, Aug 9 at 0:09







      • The argument "it's Turing-complete, so everything can be done" is not even true; Turing-completeness does not say this. A Turing-complete programming language, for example, is not necessarily able to access files on the hard disk (counter-example: Brainfuck).
        – Martin Rosenau, Aug 9 at 5:25






      • @MartinRosenau That's a fundamental misunderstanding of Turing completeness. For a system to be Turing complete, it only needs to be able to compute anything. It doesn't need any other capabilities.
        – duskwuff, Aug 9 at 17:55














      I would say no. Even if external memory protection hardware is added, the processor lacks the concept of a user versus a supervisor (privileged) state. As a result, there's no way to stop a program from disabling interrupts or accessing I/O ports, like those in the MMU.



      Now if we set aside the need to isolate a malicious program, the MMU might be enough. The only problem with that thinking is that buggy programs can be pretty malicious even if the author is not.



      So it would seem that we are back to no!



      If you want process isolation, you need something more modern and a well-written operating system kernel.






      – Peter Camilleri, answered Aug 8 at 20:53

      • An 8086 MMU would have to track whether code executing was privileged or not by itself, but that would likely not be hard. It could prevent access to IO ports simply by not allowing the processor to use them in user mode; similarly, disabling interrupts isn't actually a huge issue: it could use an NMI instead of a regular interrupt; privileged code could disable and enable the use of NMI by communicating with the MMU rather than using the internal processor interrupt controls. Messy, but workable.
        – Jules, Aug 8 at 21:08






      • To clarify, the 8086 does not have a user mode, so there's no way for the MMU to know what privilege level applies. Further, the vast majority of peripherals will simply not work using the NMI. Plus there are times (like critical sections) where it is essential to mask interrupts. The best answer I have seen would be to run the code in some sort of virtual machine which would enforce the needed rules. At that point I don't see the point of using an 8086. One might as well run the emulator in a browser using JavaScript. It would run a lot quicker!
        – Peter Camilleri, Aug 9 at 0:33






      • There may not be a processor-supported user/supervisor mode switch, but there's no reason an MMU can't implement one itself, switching to supervisor mode (for example) either on a hardware interrupt or when execution reaches a certain point in the code (which can be easily determined because the 8086 has status lines that indicate whether a read operation is for data or code). Masking interrupts is not the only way to provide critical sections, either (e.g. the LOCK# signal could be used to provide atomic operations by causing the MMU to forcibly delay any pending interrupt).
        – Jules, Aug 9 at 7:22






      • I'm also not sure I understand what you mean by "the vast majority of peripherals will simply not work using the NMI" -- I don't see how the peripheral would care (or even identify) which of the two possible methods is used to identify to the processor that attention is required. An NMI and an external register used to indicate which IRQ line caused the interrupt is entirely equivalent, as far as I can see, to the standard INTR line behaviour. Other than not signalling the INTA line, and only providing a single vector, the behaviour of the NMI line on the 8086 is exactly equivalent to INTR...
        – Jules, Aug 9 at 7:31







      • ... and both of those can be trivially emulated by a simple piece of hardware and a small addition to the operating system code.
        – Jules, Aug 9 at 7:34














      The 8086, 80186, 8088, and 80188 all lacked any real memory protection, although switching segment registers would protect against accidental overwriting. The 80286 did support protection, so a POSIX OS with hardware-enforced memory protection could be written.



      The NEC V20 and V30 were 8086 clones with an 8080 emulation mode. Since the CPU can't address more than 64K in 8080 mode, one could presumably write a POSIX OS where the kernel ran in 8086 mode while userspace ran in 8080 mode, switching between them to make system calls. Presumably it would still be possible to address the first 256 I/O ports directly, which would mean user processes could talk directly to some fairly important hardware if the V20/V30 were embedded in a standard IBM PC clone.






      – William Hay, answered Aug 9 at 14:02 (edited Aug 9 at 16:56)

              The 8086 is a 16-bit processor. One possibility for implementing some form of process isolation is to use the processor's segment registers (CS, DS, SS, ES). These allow a process's stack (SS), heap (DS, ES), and code (CS) to reside in specific 64kB areas of a 1MB address space. This works by left-shifting the 16-bit segment register by four bits and adding to that the 16-bit stack pointer (SS << 4 + SP), instruction pointer (CS << 4 + IP), or data address (e.g. DS << 4 + SI), to obtain the 20 bits of the physical address.



              Thus, through a suitable segment register setup one can isolate a process to at most 64kB, provided the process follows the convention of not altering the segment registers. For the requirements of C programs, where the heap and the stack must be addressable through the same 16-bit pointers, this convention restricts them to 64kB of data and 64kB of code. Although this might sound overly restrictive, remember that early Unix ran on a PDP-11 with 64kB of RAM. Consequently, providing 1MB of memory for multiple processes with up to 64kB of code and 64kB of data is more than generous.



              Furthermore, by manipulating segment registers and copying memory regions, a supervisor program can dynamically readjust memory regions as processes are created and destroyed in a way that's transparent to running processes. Early versions of Andrew Tanenbaum's MINIX operating system relied on some of these ideas.
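
              The shift-and-add translation is easy to see in code. Below is a minimal C sketch (an editor's illustration, not part of the original answer; the segment values are invented) showing how two processes given different segment bases each end up with their own 64kB window inside the 1MB physical space:

                /* physical = (segment << 4) + offset, as described above */
                #include <stdint.h>
                #include <stdio.h>

                static uint32_t phys(uint16_t segment, uint16_t offset)
                {
                    return ((uint32_t)segment << 4) + offset;   /* 20-bit address */
                }

                int main(void)
                {
                    /* Hypothetical layout: process A's segment base at 0x2000,
                       process B's at 0x3000: each sees offsets 0x0000..0xFFFF only. */
                    printf("A: %05lX\n", (unsigned long)phys(0x2000, 0x0010)); /* 20010 */
                    printf("B: %05lX\n", (unsigned long)phys(0x3000, 0x0010)); /* 30010 */
                    return 0;
                }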






              – Diomidis Spinellis, answered Aug 9 at 13:22 (edited Aug 30 at 14:09)

              • In order for this to provide real protection the code must not be allowed to change segment registers, disable interrupts, or invoke IN and OUT instructions. Those restrictions cannot be enforced in hardware, but validation of the user code before transferring control would be possible. Similar just-in-time code validation is used in other places such as Java byte code, some versions of VMware, and the NaCl sandbox.
                – kasperd, Aug 9 at 23:05






              • Minor nitpick: C does not have a requirement that the heap and stack be addressable through the same pointers. An implementation that chooses to make pointers only 16 bits wide must have coincident stack and data segments, because there is no way to encode both the segment and an address within it in 16 bits, but you don't have to have 16-bit pointers on an 8086.
                – JeremyP, Aug 29 at 11:30














              Yes, but it's not easy. There are at least two possible approaches:



              Option 1: software virtualization



              This one is the canonical/classical solution. Essentially, you write an emulator/interpreter for some sort of virtual machine that does have kernel/user privilege modes and memory protection. You need to ensure (or assume) your interpreter has no vm-escape bugs.
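
              As a rough illustration of this first approach (an editor's sketch, not anything from the answer; the two-opcode "machine" is invented), the interpreter itself performs the bounds check that protected-mode hardware would otherwise do, so a process can never store outside its own memory:

                /* Toy VM: each process gets its own bounded memory image. */
                #include <stdint.h>
                #include <stddef.h>

                enum { OP_HALT = 0, OP_STORE = 1 };    /* hypothetical opcodes */

                struct vmproc {
                    uint8_t  mem[4096];    /* this process's whole world */
                    uint16_t pc;
                };

                /* Returns 0 on clean halt, -1 on a fault (the software
                   equivalent of a protection violation). */
                int vm_run(struct vmproc *p, const uint8_t *code, size_t len)
                {
                    while (p->pc < len) {
                        uint8_t op = code[p->pc];
                        if (op == OP_HALT)
                            return 0;
                        if (op == OP_STORE) {          /* STORE addr16, value8 */
                            if ((size_t)p->pc + 3 >= len)
                                return -1;             /* truncated instruction */
                            uint16_t addr = (uint16_t)(code[p->pc + 1] |
                                                       (code[p->pc + 2] << 8));
                            if (addr >= sizeof p->mem)
                                return -1;             /* "MMU" check in software */
                            p->mem[addr] = code[p->pc + 3];
                            p->pc += 4;
                            continue;
                        }
                        return -1;                     /* illegal instruction */
                    }
                    return -1;                         /* ran off the end of code */
                }

                int main(void)
                {
                    /* Store to address 0xFFFF, outside the 4 KiB window: expect a fault. */
                    static const uint8_t bad[] = { OP_STORE, 0xFF, 0xFF, 42 };
                    struct vmproc p = {0};
                    return vm_run(&p, bad, sizeof bad) == -1 ? 0 : 1;
                }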



              Option 2: validating programs



              Write the program-loader not to accept arbitrary 8086 machine code, but instead only a highly structured subset with enforcement of memory safety. This requires designing such a subset, and again you need to ensure or assume your implementation doesn't have bugs that break the necessary invariants.
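
              A crude flavour of what such a loader-side check might look like (again an editor's sketch, not the answer's actual scheme, and far from a complete validator: real validation would also have to decode instruction boundaries, handle the indirect far JMP/CALL forms of opcode 0xFF, and forbid writes into the code region so that the checked bytes cannot change at run time):

                /* Reject a flat 8086 code image if it contains any byte value
                   that could start a forbidden instruction.  Deliberately
                   over-conservative: operand bytes that merely look like
                   these opcodes also cause rejection. */
                #include <stdint.h>
                #include <stddef.h>

                static const uint8_t banned[] = {
                    0xFA, 0xFB,                 /* CLI, STI            */
                    0xE4, 0xE5, 0xE6, 0xE7,     /* IN/OUT imm8         */
                    0xEC, 0xED, 0xEE, 0xEF,     /* IN/OUT DX           */
                    0x8E,                       /* MOV Sreg, r/m16     */
                    0x07, 0x17, 0x1F,           /* POP ES/SS/DS        */
                    0xC4, 0xC5,                 /* LES, LDS            */
                    0x9A, 0xEA,                 /* far CALL, far JMP   */
                    0xCD, 0xCE, 0xF4,           /* INT imm8, INTO, HLT */
                };

                /* Returns 1 if the image passes this (incomplete) check. */
                int loader_validate(const uint8_t *image, size_t len)
                {
                    for (size_t i = 0; i < len; i++)
                        for (size_t j = 0; j < sizeof banned; j++)
                            if (image[i] == banned[j])
                                return 0;
                    return 1;
                }

                int main(void)
                {
                    static const uint8_t ok[] = { 0xB8, 0x34, 0x12, 0xC3 }; /* MOV AX,0x1234; RET */
                    return loader_validate(ok, sizeof ok) ? 0 : 1;
                }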



              Either way, to do POSIX or even something POSIX-like you're going to need lots of supplemental memory. There's no way to implement POSIX in the main memory size supported by the 8086, and both of these options will greatly further increase the required memory (and decrease speed).






              – R.., answered Aug 9 at 17:37

              • "There's no way to implement POSIX in the main memory size supported by the 8086" ... this is almost certainly incorrect. Xenix ran on the 8086 and while it was not POSIX compliant it did include a large majority of POSIX's most complex functions. The "ELKS" port of Linux (see my answer above) also includes a large proportion of POSIX support, and reportedly runs in < 256KiB, including X11 support.
                – Jules, Aug 9 at 20:21






              • ... furthermore, POSIX is a formalisation and extension of the System V Interface Definition, which was based on System V Release 2. SVR2's target machine was the DEC VAX 11/780, and I believe it ran on the minimum configuration of that system, which is to say in 128KiB RAM. AT&T's 3B1 minicomputer (designed to run SVR3, the release of Unix that was most recent at the time POSIX was written, I believe) was available in a 512KiB configuration. I see nothing that suggests 1MiB of RAM isn't plenty for supporting full POSIX compliance (not to mention that bank switching could easily extend that).
                – Jules, Aug 9 at 21:58











              • @Jules: I don't have the numbers in front of me right now, but having implemented much of it myself, I've done some casual estimates on lower bounds for the possible size, and it doesn't look good. On top of that (which didn't consider 8086 limitations), with POSIX requiring at least 32-bit int, just the arithmetic and argument-passing overhead is going to blow up size a good bit. And with no FPU, the math library and softfloat will be quite large.
                – R.., Aug 10 at 2:36






              • 32-bit int is only required by POSIX.1-2001, I believe. Earlier versions allowed 16-bit ints. I have here a copy of an 8086 "math.h" library including software emulation which is less than 20KiB in size. Admittedly, it doesn't contain all of the functions in the POSIX libm (it only has double versions and not float, and is missing a few of the less common functions). The full version would, I suspect, fit in around 50KiB, which isn't exactly a huge problem.
                – Jules, Aug 10 at 7:40














              up vote
              1
              down vote













              Yes, but it's not easy. There are at least two possible approaches:



              Option 1: software virutalization



              This one is the canonical/classical solution. Essentially, you write an emulator/interpreter for some sort of virtual machine that does have kernel/user privilege modes and memory protection. You need to ensure (or assume) your interpreter has no vm-escape bugs.



              Option 2: validating programs



              Write the program-loader not to accept arbitrary 8086 machine code, but instead only a highly structured subset with enforcement of memory safety. This requires designing such a subset, and again you need to ensure or assume your implementation doesn't have bugs that break the necessary invariants.



              Either way, to do POSIX or even something POSIX-like you're going to need lots of supplemental memory. There's no way to implement POSIX in the main memory size supported by the 8086, and both of these options will greatly further increase the required memory (and decrease speed).






              share|improve this answer
















              • 2




                "There's no way to implement POSIX in the main memory size supported by the 8086" ... this is almost certainly incorrect. Xenix ran on the 8086 and while it was not POSIX compliant it did include a large majority of POSIX's most complex functions. The "ELKS" port of Linux (see my answer above) also includes a large proportion of POSIX support, and reportedly runs in < 256KiB, including X11 support.
                – Jules
                Aug 9 at 20:21






              • 1




                ... furthermore, POSIX is a formalisation and extension of the System V Interface Definition, which was based on System V Release 2. SVR2's target machine was the DEC VAX 11/780, and I believe ran on the minimum configuration of that system, which is to say in 128KiB RAM. AT&T's 3B1 minicomputer (designed to run SVR3, the release of Unix that was most recent at the time POSIX was written, I believed) was available in a 512KiB configuration. I see nothing that suggests 1MiB of RAM isn't plenty for supporting full POSIX compliance (not to mention that bank switching could easily extend that).
                – Jules
                Aug 9 at 21:58











              • @Jules: I don't have the numbers in front of me right now, but having implemented much of it myself, I've done some casual estimates on lower bounds for the possible size, and it doesn't look good. On top of that (which didn't consider 8086 limitations), with POSIX requiring at least 32-bit int, just the arithmetic and argument-passing overhead is going to blow up size a good bit. And with no fpu, the math library and softfloat will be quite large.
                – R..
                Aug 10 at 2:36






              • 1




                32-bit int is only required by POSIX.1-2001, I believe. Earlier versions allowed 16-bit ints. I have here a copy of an 8086 "math.h" library including software emulation which is less than 20KiB in size. Admittedly, it doesn't contain all of the functions in the POSIX libm (it only has double versions and not float, and is missing a few of the less common functions). The full version would, I suspect, fit in around 50KiB, which isn't exactly a huge problem.
                – Jules
                Aug 10 at 7:40














              answered Aug 9 at 17:37









              R..

              1535

















              up vote
              0
              down vote













              The Minix operating system implemented memory management on the 8086 entirely in software. The Minix source code is available.
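
              For illustration only (this is not taken from the Minix sources, and the structure and helper names below are invented): an 8086 kernel can give each process its own region of physical memory simply by reloading the segment registers from a per-process base on every context switch. That separates processes by convention, but nothing stops a buggy or hostile process from reloading its own segment registers, so it is separation rather than real protection.

                  #include <stdint.h>

                  /* Per-process memory assignment on a hypothetical 8086 kernel:
                   * each process records where its code and data live, expressed
                   * in 16-byte paragraphs, and the kernel points CS/DS/SS at
                   * those bases when it switches to the process. */
                  struct proc {
                      uint16_t code_seg;   /* base of the text segment, in paragraphs  */
                      uint16_t data_seg;   /* base of data + stack,     in paragraphs  */
                      uint16_t saved_sp;
                      uint16_t saved_ip;
                  };

                  /* Hypothetical helpers implemented in assembly elsewhere. */
                  extern void load_segments(uint16_t cs, uint16_t ds);
                  extern void restore_cpu(uint16_t sp, uint16_t ip);

                  void switch_to(const struct proc *p)
                  {
                      load_segments(p->code_seg, p->data_seg);  /* point CS/DS/SS at this process */
                      restore_cpu(p->saved_sp, p->saved_ip);    /* resume where it was interrupted */
                  }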






              share|improve this answer





























                  answered Aug 11 at 5:20









                  Steve J

                  771































                       
