June 17, 2008
Re: Walter did yo realy go Ohhhh?
Nick Sabalausky Wrote:
> ... From the security perspective, for instance, there are differences 
> (With a VM, you can sandbox whatever you want, however you want,
> without requiring a physical CPU that supports the appropriate security
> features.) 

It seems that security/verifiability, and ease of executing on an unknown target processor are the two major benefits of a VM.

However, you might be interested in looking at software-based fault isolation if you have not seen it. It may make you reconsider how much you need a VM to implement code security. There is a pretty simple explanation here:

 http://www.cs.unm.edu/~riesen/prop/node16.html
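
The core trick, if I understand it right (a rough sketch with made-up
constants, not the paper's exact sequence), expressed in D:

    // Rewrite every store so the effective address is forced into the
    // sandbox's data segment before it is used.
    const uint SEG_BASE = 0x4000_0000; // sandbox segment base (made up)
    const uint SEG_MASK = 0x000F_FFFF; // offset bits within the segment

    void sandboxedStore(uint addr, ubyte value)
    {
        // Whatever addr was, the masked result lands inside the sandbox,
        // so a wild pointer can only corrupt the sandbox's own data.
        uint safe = (addr & SEG_MASK) | SEG_BASE;
        *(cast(ubyte*) safe) = value;
    }

On a real machine that masking becomes a few extra instructions (load,
and, or, then the store) wrapped around every write, which is where the
overhead numbers in the paper come from.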
June 17, 2008
Re: Walter did yo realy go Ohhhh?
"David Jeske" <davidj@gmail.com> wrote in message 
news:g37coj$2q9u$1@digitalmars.com...
> Nick Sabalausky Wrote:
>> ... From the security perspective, for instance, there are differences
>> (With a VM, you can sandbox whatever you want, however you want,
>> without requiring a physical CPU that supports the appropriate security
>> features.)
>
> It seems that security/verifiability, and ease of executing on an unknown 
> target processor are the two major benefits of a VM.
>
> However, you might be interested in looking at software-based fault 
> isolation if you have not seen it. It may make you reconsider how much you 
> need a VM to implement code security. There is a pretty simple explanation 
> here:
>
>  http://www.cs.unm.edu/~riesen/prop/node16.html
>
>


Thanks. Interesting read.

Although expanding *every* write/jump/(and maybe read) from one instruction 
each into five instructions each kinda makes me cringe (But maybe it 
wouldn't need to be a 1-to-5 on every single write/jump after some sort of 
optimizing-compiler-style magic?). I know that paper claims an overhead of 
only 4.3% (I wish it had a link to an online copy of the benchmark 
tests/results), but it was written ten years ago and, as I understand it, 
pipelining and cache concerns make a far larger speed difference today than 
they did back then. And, while I'm no x86 asm expert, what they're proposing 
strikes me as something that might be rather pipeline/cache-unfriendly.

Plus, maybe this has changed in recent years, but back when I was doing x86 
asm (also about ten or so years ago), the x86 had *very* few general-purpose 
registers. Like 4 or 5, IIRC. If that's still the case, that would just make 
performance worse since the 5-6 extra registers this paper suggests would 
turn into additional memory access (And I imagine they'd be cache-killing 
accesses). I'm not sure what they mean by i860, though; 
Intel-something-or-other probably, but I assume i860 isn't the same as 
i86/x86.

Granted, I know performance is a secondary, at best, concern for the types 
of situations where you would want a sandbox. But, I can't help thinking 
about rasterized drawing, video decompression, and other things Flash does, 
and wonder what Flash would be like if the browser placed the flash plugin 
(I mean the actual browser plugin, not an SWF) into this style of sandbox.

Of course, VMs have overhead too (though I doubt Flash's rendering is done 
in a VM), but I'm not up-to-date enough on all the modern VM implementation 
details to know how a modern VM's overhead would compare to this. Maybe I'm 
just confused, but I wonder if a just-in-time-compiled VM would have the 
potential to be faster than this, simply because the VM's bytecode 
(presumably) has no way of expressing unsafe behaviors, and therefore 
anything translated by the VM itself from that "safe" bytecode to real 
native code would not need those extra runtime checks. (Hmm, kinda weird to 
think of a VM potentially being *faster* than native code for something).
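
As a loose analogy in D (just to illustrate the reasoning, not actual VM 
internals): when the source form simply can't express an out-of-bounds 
access, no check has to be generated for it.

    // foreach over an array can't run past the end by construction, so
    // the compiler needs no per-element bounds check here. A verified
    // "safe" bytecode would hand a JIT the same kind of guarantee.
    int sum(int[] a)
    {
        int total = 0;
        foreach (x; a)
            total += x;
        return total;
    }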
June 17, 2008
Re: Walter did yo realy go Ohhhh?
Nick Sabalausky wrote:
> "David Jeske" <davidj@gmail.com> wrote in message 
> news:g37coj$2q9u$1@digitalmars.com...
>> Nick Sabalausky Wrote:
>>> ... From the security perspective, for instance, there are differences
>>> (With a VM, you can sandbox whatever you want, however you want,
>>> without requiring a physical CPU that supports the appropriate security
>>> features.)
>> It seems that security/verifiability, and ease of executing on an unknown 
>> target processor are the two major benefits of a VM.
>>
>> However, you might be interested in looking at software-based fault 
>> isolation if you have not seen it. It may make you reconsider how much you 
>> need a VM to implement code security. There is a pretty simple explanation 
>> here:
>>
>>  http://www.cs.unm.edu/~riesen/prop/node16.html
>>
>>
> 
> 
> Thanks. Interesting read.
> 
> Although expanding *every* write/jump/(and maybe read) from one instruction 
> each into five instructions each kinda makes me cringe (But maybe it 
> wouldn't need to be a 1-to-5 on every single write/jump after some sort of 
> optimizing-compiler-style magic?). I know that paper claims an overhead of 
> only 4.3% (I wish it had a link to an online copy of the benchmark 
> tests/results), but it was written ten years ago and, as I understand it, 
> pipelining and cache concerns make a far larger speed difference today than 
> they did back then. And, while I'm no x86 asm expert, what they're proposing 
> strikes me as something that might be rather pipeline/cache-unfriendly.
> 
> Plus, maybe this has changed in recent years, but back when I was doing x86 
> asm (also about ten or so years ago), the x86 had *very* few general-purpose 
> registers. Like 4 or 5, IIRC. If that's still the case, that would just make 
> performance worse since the 5-6 extra registers this paper suggests would 
> turn into additional memory access (And I imagine they'd be cache-killing 
> accesses). I'm not sure what they mean by i860, though; 
> Intel-something-or-other probably, but I assume i860 isn't the same as 
> i86/x86.
> 
> Granted, I know performance is a secondary, at best, concern for the types 
> of situations where you would want a sandbox. But, I can't help thinking 
> about rasterized drawing, video decompression, and other things Flash does, 
> and wonder what Flash would be like if the browser placed the flash plugin 
> (I mean the actual browser plugin, not an SWF) into this style of sandbox.
> 
> Of course, VMs have overhead too (though I doubt Flash's rendering is done 
> in a VM), but I'm not up-to-date enough on all the modern VM implementation 
> details to know how a modern VM's overhead would compare to this. Maybe I'm 
> just confused, but I wonder if a just-in-time-compiled VM would have the 
> potential to be faster than this, simply because the VM's bytecode 
> (presumably) has no way of expressing unsafe behaviors, and therefore 
> anything translated by the VM itself from that "safe" bytecode to real 
> native code would not need those extra runtime checks. (Hmm, kinda weird to 
> think of a VM potentially being *faster* than native code for something).
> 
> 
Could you please explain why there's a need for a sandbox in the
first place?
I think that security should be enforced by the OS. On Windows, I see
the need for external means of security like a VM, since the OS doesn't
do security (Microsoft's sense of the word is to annoy the end user with
a message box, requiring him to press OK several times...).
But on other OSes that seems unnecessary, since the OS provides ways to
manage security for code: Linux has SELinux, and there are newer OSes
developed around the concept of capabilities.
So, unless I'm on Windows, what are the benefits of a VM that I won't
get directly from the OS?

--Yigal
June 17, 2008
Re: Walter did yo realy go Ohhhh?
Nick Sabalausky wrote:
> "David Jeske" <davidj@gmail.com> wrote in message 
> news:g37coj$2q9u$1@digitalmars.com...
>> Nick Sabalausky Wrote:
>>> ... From the security perspective, for instance, there are differences
>>> (With a VM, you can sandbox whatever you want, however you want,
>>> without requiring a physical CPU that supports the appropriate security
>>> features.)
>> It seems that security/verifiability, and ease of executing on an unknown 
>> target processor are the two major benefits of a VM.
>>
>> However, you might be interested in looking at software-based fault 
>> isolation if you have not seen it. It may make you reconsider how much you 
>> need a VM to implement code security. There is a pretty simple explanation 
>> here:
>>
>>  http://www.cs.unm.edu/~riesen/prop/node16.html
>>
>>
> 
> 
> Thanks. Interesting read.
> 
> Although expanding *every* write/jump/(and maybe read) from one instruction 
> each into five instructions each kinda makes me cringe (But maybe it 
> wouldn't need to be a 1-to-5 on every single write/jump after some sort of 
> optimizing-compiler-style magic?). I know that paper claims an overhead of 
> only 4.3% (I wish it had a link to an online copy of the benchmark 
> tests/results), but it was written ten years ago and, as I understand it, 
> pipelining and cache concerns make a far larger speed difference today than 
> they did back then. And, while I'm no x86 asm expert, what they're proposing 
> strikes me as something that might be rather pipeline/cache-unfriendly.

It's quite unnecessary on an x86. The x86 has page protection 
implemented in hardware. It's impossible to write to any memory which 
the OS hasn't explicitly given you.
The problem occurs when the OS has buggy APIs which have exposed too much...
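
For illustration, here's roughly how a user program sees that protection 
through the OS (a POSIX sketch in D; assumes the usual mmap/mprotect 
bindings are available):

    import core.sys.posix.sys.mman; // mmap, mprotect, PROT_*, MAP_*

    void main()
    {
        // Ask the OS for one page of read/write memory.
        enum size_t page = 4096;
        void* p = mmap(null, page, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANON, -1, 0);
        assert(p != MAP_FAILED);

        auto bytes = cast(ubyte*) p;
        bytes[0] = 42;            // fine: the page is mapped writable

        // Revoke write permission; from here on the CPU itself
        // enforces it, with no per-instruction software checks.
        mprotect(p, page, PROT_READ);
        // bytes[0] = 43;         // would now fault if uncommented

        munmap(p, page);
    }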

> 
> Plus, maybe this has changed in recent years, but back when I was doing x86 
> asm (also about ten or so years ago), the x86 had *very* few general-purpose 
> registers. Like 4 or 5, IIRC. If that's still the case, that would just make 
> performance worse since the 5-6 extra registers this paper suggests would 
> turn into additional memory access (And I imagine they'd be cache-killing 
> accesses). I'm not sure what they mean by i860, though; 
> Intel-something-or-other probably, but I assume i860 isn't the same as 
> i86/x86.

It was an old Intel CPU (a RISC design, not x86-compatible).

> 
> Granted, I know performance is a secondary, at best, concern for the types 
> of situations where you would want a sandbox. But, I can't help thinking 
> about rasterized drawing, video decompression, and other things Flash does, 
> and wonder what Flash would be like if the browser placed the flash plugin 
> (I mean the actual browser plugin, not an SWF) into this style of sandbox.
> 
> Of course, VMs have overhead too (though I doubt Flash's rendering is done 
> in a VM), but I'm not up-to-date enough on all the modern VM implementation 
> details to know how a modern VM's overhead would compare to this. Maybe I'm 
> just confused, but I wonder if a just-in-time-compiled VM would have the 
> potential to be faster than this, simply because the VM's bytecode 
> (presumably) has no way of expressing unsafe behaviors, and therefore 
> anything translated by the VM itself from that "safe" bytecode to real 
> native code would not need those extra runtime checks. (Hmm, kinda weird to 
> think of a VM potentially being *faster* than native code for something).
> 
>
June 17, 2008
Re: Walter did yo realy go Ohhhh?
"Don" <nospam@nospam.com.au> wrote in message 
news:g37vm8$114c$1@digitalmars.com...
> Nick Sabalausky wrote:
>> "David Jeske" <davidj@gmail.com> wrote in message 
>> news:g37coj$2q9u$1@digitalmars.com...
>>> Nick Sabalausky Wrote:
>>>> ... From the security perspective, for instance, there are differences
>>>> (With a VM, you can sandbox whatever you want, however you want,
>>>> without requiring a physical CPU that supports the appropriate security
>>>> features.)
>>> It seems that security/verifiability, and ease of executing on an 
>>> unknown target processor are the two major benefits of a VM.
>>>
>>> However, you might be interested in looking at software-based fault 
>>> isolation if you have not seen it. It may make you reconsider how much 
>>> you need a VM to implement code security. There is a pretty simple 
>>> explanation here:
>>>
>>>  http://www.cs.unm.edu/~riesen/prop/node16.html
>>>
>>>
>>
>>
>> Thanks. Interesting read.
>>
>> Although expanding *every* write/jump/(and maybe read) from one 
>> instruction each into five instructions each kinda makes me cringe (But 
>> maybe it wouldn't need to be a 1-to-5 on every single write/jump after 
>> some sort of optimizing-compiler-style magic?). I know that paper claims 
>> an overhead of only 4.3% (I wish it had a link to an online copy of the 
>> benchmark tests/results), but it was written ten years ago and, as I 
>> understand it, pipelining and cache concerns make a far larger speed 
>> difference today than they did back then. And, while I'm no x86 asm 
>> expert, what they're proposing strikes me as something that might be 
>> rather pipeline/cache-unfriendly.
>
> It's quite unnecessary on an x86. The x86 has page protection implemented 
> in hardware. It's impossible to write to any memory which the OS hasn't 
> explicitly given you.
> The problem occurs when the OS has buggy APIs which have exposed too 
> much...
>

What's the difference between that x86 page protection and whatever that new 
feature is (something about process protection I think?) that CPUs have just 
been starting to get?  (boy, I'm out of the loop on this stuff)
June 17, 2008
Re: Walter did yo realy go Ohhhh?
Nick Sabalausky wrote:
> "Don" <nospam@nospam.com.au> wrote in message 
> news:g37vm8$114c$1@digitalmars.com...
>> Nick Sabalausky wrote:
>>> "David Jeske" <davidj@gmail.com> wrote in message 
>>> news:g37coj$2q9u$1@digitalmars.com...
>>>> Nick Sabalausky Wrote:
>>>>> ... From the security perspective, for instance, there are differences
>>>>> (With a VM, you can sanbox whatever you want, however you want,
>>>>> without requiring a physical CPU that supports the appropriate security
>>>>> features.)
>>>> It seems that security/verifiability, and ease of executing on an 
>>>> unknown target processor are the two major benefits of a VM.
>>>>
>>>> However, you might be interested in looking at software-based fault 
>>>> isolation if you have not seen it. It may make you reconsider how much 
>>>> you need a VM to implement code security. There is a pretty simple 
>>>> explanation here:
>>>>
>>>>  http://www.cs.unm.edu/~riesen/prop/node16.html
>>>>
>>>>
>>>
>>> Thanks. Interesting read.
>>>
>>> Although expanding *every* write/jump/(and maybe read) from one 
>>> instruction each into five instructions each kinda makes me cringe (But 
>>> maybe it wouldn't need to be a 1-to-5 on every single write/jump after 
>>> some sort of optimizing-compiler-style magic?). I know that paper claims 
>>> an overhead of only 4.3% (I wish it had a link to an online copy of the 
>>> benchmark tests/results), but it was written ten years ago and, as I 
>>> understand it, pipelining and cache concerns make a far larger speed 
>>> difference today than they did back then. And, while I'm no x86 asm 
>>> expert, what they're proposing strikes me as something that might be 
>>> rather pipeline/cache-unfriendly.
>> It's quite unnecessary on an x86. The x86 has page protection implemented 
>> in hardware. It's impossible to write to any memory which the OS hasn't 
>> explicitly given you.
>> The problem occurs when the OS has buggy APIs which have exposed too 
>> much...
>>
> 
> What's the difference between that x86 page protection and whatever that new 
> feature is (something about process protection I think?) that CPUs have just 
> been starting to get?  (boy, I'm out of the loop on this stuff) 

The page protection is implemented by the OS, and only applies to user 
apps, not kernel drivers.

From reading the AMD64 System Programming manual, it seems that the 
'secure virtual machine' feature is roughly the same thing, except at an 
even deeper level: it prevents the OS kernel from accessing specific 
areas of memory or I/O. So it even allows you to sandbox the kernel (!)
June 18, 2008
Re: Walter did yo realy go Ohhhh?
Yigal Chripun wrote:
> Could you please explain why there's a need for a sandbox in the
> first place?

OS security protects the system and the other users from you.

A sandbox protects you yourself from code that's run "as you".

(That is, protects your files, etc.)
June 18, 2008
Re: Walter did yo realy go Ohhhh?
Don wrote:
> Nick Sabalausky wrote:
> 
>> What's the difference between that x86 page protection and whatever 
>> that new feature is (something about process protection I think?) that 
>> CPUs have just been starting to get?  (boy, I'm out of the loop on 
>> this stuff) 
> 
> The page protection is implemented by the OS, and only applies to user 
> apps, not kernel drivers.
> 
>  From reading the AMD64 System Programming manual, it seems that the 
> 'secure virtual machine' feature is roughly the same thing, except at an 
> even deeper level: it prevents the OS kernel from accessing specific 
> areas of memory or I/O. So it even allows you to sandbox the kernel (!)

Gawhhhhh.

But seriously, that is the way to let you run virtual machines where 
there could be several kernels, possibly of several operating systems.

So, as processors evolve and operating systems increasingly take 
advantage of the features of the existing processors, giving the /next/ 
processor generation yet another level of "privilege" guarantees that 
the operating systems written for the previous processor can all be 
virtualised with 100% accuracy, 100% efficiency, and 100% security.

Without this it would be virtually (no pun intended) impossible.

---

Now, for the majority of operating systems today, this would not be a 
priority (at least most Linuxes are compiled with the 386 as the target, 
while it's about 5 years since "anybody ever" tried to run Linux on a 
386 -- dunno about Windows, but I assume most Windows versions are 
theoretically runnable on a 386, too).

Actually it is a matter of Prudent Development. The only way you (as a 
processor manufacturer) can literally guarantee that the previous 
processor can be fully virtualised is to add yet another layer of 
privilege.
June 19, 2008
Re: Walter did yo realy go Ohhhh?
Georg Wrede wrote:
> Yigal Chripun wrote:
>> Could you please explain why there's a need for a sandbox in the
>> first place?
> 
> OS security protects the system and the other users from you.
> 
> A sandbox protects you yourself from code that's run "as you".
> 
> (That is, protects your files, etc.)

I disagree. OS security can and does protect the user's files from code
that's run "as the user" (which is itself a bad concept).

Current OSes use ACLs (Windows, Linux, etc.), and there's nothing
stopping you from defining a file as read-only or non-executable to
protect data; the current practice is to define "users" for daemons
in order to protect data. That's why Apache runs as user www-data with
its own ACL rules. You can achieve perfect security with this scheme if
you invest enough time to create a separate "user" for each process.
As an example, I can run my browser as a different limited user, or use
a browser which runs inside a sandbox. I get the same protection from
both, but the sandbox solution has more overhead.

It's easy to see all the problems with manually defining ACLs.
Newer OSes based on the concept of "capabilities" remove all those
problems. Such OSes give processes defined capabilities unrelated to
any concept of a user (the concept of users is defined on top of the
capabilities mechanism).
Capabilities are basically the same idea as OOP. A simplified example:
currently, OSes are written in a procedural way; there are global data
structures and global system calls, e.g. you print to the screen via
Stdout(text); in D, which in the end just calls the appropriate
syscall. In a capability-based OS, there are no such global
syscalls/functions. You need to hold an output instance (a handle in
the OS, a capability) in order to call its print method; only if the
process has that instance can it print to the screen. Security is
implemented via the explicit passing of such instances, so if the
program received an output instance, it received the right to print
to the screen.
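
To make that concrete, a toy sketch in D (Console here is just an
illustration, not a real OS API):

    import std.stdio; // only to back the toy implementation

    // The capability: an output handle that must be passed explicitly.
    interface Console
    {
        void print(string text);
    }

    class StdoutConsole : Console
    {
        void print(string text) { writeln(text); }
    }

    // confined() can produce output only because its caller handed it
    // a Console capability; there is no global "print" for it to reach.
    void confined(Console con)
    {
        con.print("hello from a confined component");
    }

    void main()
    {
        confined(new StdoutConsole); // authority = possession of the handle
    }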

No sandboxes/VMs/any other emulation layer is needed.

--Yigal
June 19, 2008
Re: Walter did yo realy go Ohhhh?
Yigal Chripun wrote:
> Georg Wrede wrote:
>>Yigal Chripun wrote:
>>
>>>Could you please explain why there's a need for a sandbox in the
>>>first place?
>>
>>OS security protects the system and the other users from you.
>>
>>A sandbox protects you yourself from code that's run "as you".
>>
>>(That is, protects your files, etc.)
> 
> I disagree. OS security can and does protect the user's files from
> code that's run "as the user" (which is itself a bad concept).

If the code that gets run "as the user" is malicious, and there are no 
additional guards, then the code could chmod any read-only file you have 
and then edit it, according to its malicious goals. In practice, these 
additional guards constitute the sandbox.

> Current OSes use ACLs (Windows, Linux, etc.), and there's nothing
> stopping you from defining a file as read-only or non-executable to
> protect data; the current practice is to define "users" for daemons
> in order to protect data.

Not on my servers, they don't. I rely solely on user/group stuff. And I 
find it adequate.

> That's why Apache runs as user www-data with
> its own ACL rules.

Apache has run as "www-data" or whatever since the beginning of time, 
because it is natural and "obvious" to give the WWW server its own 
identity.

> You can achieve perfect security with this scheme if
> you invest enough time to create a separate "user" for each process.

If it were that simple, then we'd have had no issue with this entire 
subject for the last 5 years.

To put it another way, if the WWW server could run every user's code as 
a separate OS user, then of course things would be different. But the 
average Unix (Linux, etc.) only has 16 bits of information to identify 
the "user", and sites like Google have users in the billions. So it's 
not a viable option.

> As an example, I can run my browser as a different limited user, or
> use a browser which runs inside a sandbox. I get the same protection
> from both, but the sandbox solution has more overhead.

Server and client problems should be kept separate in one's mindset.

> It's easy to see all the problems with manually defining ACLs.
> Newer OSes based on the concept of "capabilities" remove all those
> problems.

"All those problems". You've been listening to marketing talk.

> Such OSes give processes defined capabilities unrelated to any
> concept of a user (the concept of users is defined on top of the
> capabilities mechanism).

I was the Oracle DB Head Administrator in the early '90s at a local 
University. The concept of Roles was introduced then by Oracle. I 
actually got pretty excited about this. Instead of Mary, Jo-Anne, and 
Jane all having their respective read, write and update rights, I could 
define Roles (which is pretty near the Capabilities concept), so that 
Updater of Student Credits, Updater of Student Addresses, Updater of 
Class Information, etc. could all be defined, and when any of the girls 
went on holidays, I could simply assign the Role to the back-up person, 
instead of spending days on fixing read-update-write rights for 
individual table columns and/or views.

> Capabilities are basically the same idea as OOP. A simplified example:
> currently, OSes are written in a procedural way; there are global data
> structures and global system calls, e.g. you print to the screen via
> Stdout(text); in D, which in the end just calls the appropriate
> syscall. In a capability-based OS, there are no such global
> syscalls/functions.
...
> No sandboxes/VMs/any other emulation layer is needed.

Gee, nice.

Still, D has to relate to what's going on today.