…and it was like, bleep bleep bleep bleep bleep bleep bleep…


I got this message through the contact section of the blog, and I thought it would be interesting to talk about it.


You’ve done a lot of great editorials on your blog. I wanted to know a bit
more on a topic that you might be able to help me with. Apple Computers. I am
one of the growing population that is looking longingly at OS X and 12″
powerbooks and I’m considering switching over (at least for a new laptop while
I still have my home WinXP desktop). You comment on your Apple a lot and I
wanted to know…well, just about anything you have to tell. I am not really a
gamer, most of what I do is music so I’m guessing iTunes would become my new
best friend. As for the rest, what is so special about your Powerbook, what is
so special about OS X, switch because…, Don’t switch because…, etc. Besides
all that, I am really dying for a 12″ powerbook but I thought about testing the
waters with an eMac or one of the (supposed) new iMacs in September. Thoughts,
suggestions? I know you have a lot of work and a head full of content already
(I also wouldn’t want you to get in trouble with your employer). However, if
you find time to write my dream article, it would be awesome! With the iPods,
etc. there are a LOT of people that are thinking the same things as me, but not
a lot of helpful content. Thanks!

 

Well, that’s a pretty interesting question. I think the first thing to realize is that I’m not a switcher. To me, switching means substituting one thing for another. However, I never replaced anything when I went to Apple; I merely added to my current collection of computers. I love computers, software, and technology, and I’m always interested in learning and using anything new or great that’s out there. There’s so much awesome stuff happening out there that it would be kind of lame to just pay attention to what one company was producing.

My current computer setup includes a few kick-ass development machines at work (mmmm, RAMalicious), the PowerBook I love, a Toshiba laptop for futzing around on, and a Shuttle HTPC machine that I run XP Media Center and/or a Linux distro on.

Back when Apple announced OSX my curiosity was piqued. I’d never really been interested in Apple before because I found their software frustratingly buggy. Back when NT4 came out I gladly dropped the entire Win9x line and never looked back. I’m not much of a PC gamer; rather, I find consoles much more gratifying in their simplicity and ItJustWorks philosophy. Greybeards might have lamented the execution of the OS (OS 9) they loved, but for me it was necessary for Apple to even enter my awareness. Suddenly I was intrigued: an OS built on top of a solid foundation that could compete on technical merits with the NT and unices that I was accustomed to. On top of that, it came with a rather nice window manager that not only looked good but was very easy to use. I saw the disappointment at the “beta”-level quality of 10.0 and I decided to wait a bit. When 10.1 came out it was a good time for me to get a laptop, and I settled on a nice 12-inch white iBook. Apple’s desktops have never really appealed to me; they always seem overly limited in areas such as speed, extensibility, and price. However, their laptops have always been another matter altogether. The iBook was attractive to me for many reasons, but at the time it was specifically the battery life, integrated wireless, and a lot of other nice things in a small, attractive package at a really nice price.


It was a decision that I was very happy with. Not only did the laptop serve all my needs, including regular consumer-oriented things as well as development tasks, but I also saw great progress and innovation from Apple in the OS and the applications. iLife is one of my favorite integrated packages of all time. I can barely use any other music player, it manages 10k+ images for me, and it’s been a lot of fun creating and burning some movies for my family as well. Mail.app is just a perfect small, light email client. It manages multiple accounts with ease, has the best spam filter I’ve ever used (it’s probably mis-categorized one email in the last year), and searches instantaneously. To me, Apple excels at getting the fundamentals right. They can’t necessarily do a lot at first, but what they can do they do well. On top of that, if they don’t do something that users are clamoring for, you’ll usually see it in the next release. I like this model of software development and I’d like to see it come up in other places. They also move forward at a pretty steady pace, instead of punctuated releases every few years. You can argue whether or not that’s good from a business standpoint, but from a user standpoint it just excites me, because there is always great new stuff coming out every few months and I consistently feel that things are improving across the board.


On top of that, I found that software development was a lot easier out of the box (OOB) than on a Windows machine. I have a lot more experience with shells like bash, and having that available on the machine without any fuss was so nice. I am also a fan of package management tools, as they take away a lot of the tedium of getting programs onto your machine. The Fink project brought that kind of flexible and powerful system to OSX. I really appreciate being able to do something as simple as "fink install ncftp" and know that it will be ready in a few minutes. I also love that you get X11 support for free; I’ve always felt that the X servers available for Windows were just kind of kludgey. You can get all of this on Windows, but I’ve never been too comfortable with it. I’ve tried SFU and I never feel like all is well. Even though the commands are there, I find that they tend to break in unexpected ways. I don’t have the time to track down and work around these issues; I’d prefer that they just work.
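To give a flavor of the workflow, here’s a minimal sketch of typical Fink usage (ncftp is just the example package from above; anything in the Fink tree works the same way):

    fink selfupdate        # refresh Fink itself and its package descriptions
    fink list ncftp        # search the package tree
    fink install ncftp     # build and install the package, dependencies included
    fink update-all        # bring every installed package up to date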


When I got here, my iBook was starting to feel its age. One area where Apple doesn’t seem to do so well is scaling down to their lower-end hardware. I was using Panther and finding that some things were just crawling. I started looking around to see what I would get for my next laptop. There were a lot of interesting PCs out there, but I was somewhat discouraged by the lack of development of XP and my distrust of Linux with laptop hardware (remember ItJustWorks). One of my friends had gotten one of the new 15-inch PowerBooks and I was in love. Except for the fact that the battery life was decimated (you really feel the difference between 6 hours and 2), it was just a much better experience: a really nice screen, 802.11g wireless, a nice big hard drive, etc. I’m also really looking forward to Tiger now. One of my friends sent me more information on Spotlight and it has me even more excited!


Finally, I have to say that it’s nice to be using an OS that I feel safer under than I do with Windows. A lot of what Apple did with their OS maps directly onto core tenets of Trustworthy Computing, things that I don’t feel consumer Windows users will see until XP SP2 is released (years after we announced TWC, and years after OSX has already been demonstrating its value). I think that in that regard they’ve been incredibly responsible and they deserve kudos. I feel pretty safe on Windows, and I’m usually very responsible: I run behind a firewall, get updates, and run a virus scanner. But even still, there is a lot of exposed surface that I would prefer to be sealed.


Anyways, if there’s anything else you want to know, feel free to ask!!


Oh, and as to the real reason why I got a Mac… well, Ellen Feiss told me to, of course.


Comments (48)

  1. Ellen Feiss is *such* a babe

  2. DrPizza says:

    ", I find that they tend to break in unexpected ways"

    _I_ find they tend to work like grown up *nixes make them work, rather than like Lunix’s value-added versions.

  3. DrPizza: I meant more in the sense of limitations that aren’t clear. One case is when passing the results of one operation to another, e.g. how I could do:

    wc `find . -name "somePattern"`

    If the `find …` expands to more than a certain length, it just gets truncated and wc doesn’t get everything.

    Things of that nature are somewhat frustrating.
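    For anyone hitting the same thing, the usual workaround is to stream the names through xargs, which batches the arguments to stay under the limit. A minimal sketch, assuming GNU- or BSD-style find and xargs:

    # xargs splits the file list into chunks that fit the argument-length limit
    find . -name "somePattern" | xargs wc

    # safer if names contain spaces or newlines (-print0/-0 are GNU/BSD extensions)
    find . -name "somePattern" -print0 | xargs -0 wc

    The trade-off is that wc may be invoked several times, so you get one total per batch rather than a single grand total.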

  4. Daniel Edwards says:

    I have been very tempted to purchase an OSX-based computer lately. The only thing that keeps me away is that pesky price.

  5. DrPizza says:

    "If the `find …` expands to more than a certain length, it just gets truncated and wc doesn’t get everything. "

    But the same thing happens on other platforms; it’s not SFU-specific.

    It might be that Windows’ limit is more miserly than other platforms’, but the mere existence of the limit is not unique. For instance, Linux’s limit is, IIRC, 128 kibibytes for environment plus command-line arguments combined.

    Consequently, such commands can fail for the exact same reason on non-SFU platforms.
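    (On most Unix-like systems you can query this limit directly; getconf is standard, though the value varies by platform and version:)

    # maximum combined size of command-line arguments plus environment
    getconf ARG_MAX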

  6. DrPizza says:

    "and years after OSX has already been demonstrating its value"

    The only thing OS X has demonstrated is that it’s not sufficiently widespread to warrant attention. It’s had a reasonable number of security holes discovered, and Apple’s bugfix policy has left something to be desired. It offers no architectural benefits over Windows, either. Indeed, because Windows runs on platforms with proper page protection bits, OS X is perhaps at something of a disadvantage, architecturally.

  7. DrPizza: "The only thing OS X has demonstrated is that it’s not sufficiently widespread to warrant attention."

    I never claimed there was a single architectural advantage. I argued that there was a policy advantage. I’d rather install and use an OS that I don’t need to jump through hoops to secure. The work in XP SP2 to stop services from listening on the network by default, and to turn on the firewall by default, will do wonders for mitigating the problems that arise with a regular Windows install.

    Trustworthy computing is only partly about architecture. Imagine a web browser that was 100% bug-free. Now imagine that, on every bit of data it received from the network, it prompted the user: whether they wanted to proceed, whether it was ok to display an image, whether it was ok to render a table, etc. At some point the user is going to make a mistake and allow something to happen that, in this context, is damaging to them. There was nothing architecturally unsound about this; it’s just an extreme case of regular user prompts. For example, a user might want to install the Flash plug-in, but not the Gator program. To the computer they both look the same: an executable with a certificate that the user has to approve. So it goes to the user to find out what to do. However, users are often annoyed with interruptions from the computer and they just want to get back to what they were doing. If clicking "ok" will do that, then that’s usually what they’ll do (unfortunately facing the consequences later).

    Conversely, if you block all programs from executing and only proceed when a user explicitly goes out of their way to invoke the action, then you have mitigated one risk (that users will perform insecure acts out of annoyance).

    Similarly, architecturally, both OSs provide a firewall, which makes them pretty equivalent. In terms of policy, both have it disabled by default, which again makes them pretty equivalent. However, on OSX no network ports are accessible from the network without explicitly enabling them first. Contrast this with Windows, which has numerous ports open and listening by default. While architecturally equal, the default policy opens up an extremely large attack surface, and Windows _has_ been affected by this.
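    For what it’s worth, you can see that surface area for yourself with nothing but the stock netstat tool on each system; a quick sketch (output will vary by install):

    # OSX (or any BSD-ish system): list listening TCP sockets; a fresh install typically shows few or none
    netstat -an | grep LISTEN

    # Windows XP (from cmd.exe): same idea; a default install lists several open ports
    netstat -an | find "LISTENING"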

    As a smart user I can take steps to make Windows extremely secure. (Running 2k3 was one of those steps.) However, I would rather not have an OS place that burden squarely on the user’s shoulders.

  8. DrPizza: Didn’t we just have a conversation about Windows’ miserly limit on GDI handles being an annoyance that you wish it didn’t have? 🙂

    I see these issues as being the same thing. I’m forced to wrap my head around, and work around, unfortunate limitations that _commonly_ affect me.

    The GDI issue didn’t affect me, but it affects you.

    The shell limits affect me, but don’t normally affect me on other platforms.

    In the end it boils down to what just works (for me). I’ve found that not only is the OOB experience nicer for these tasks, but it continues to be nicer even after installing SFU. I could probably find ways to get around this in Windows, possibly through the use of a registry key (like for CSRSS); however, I’d have used a lot of energy just to reach where I started on the other system.

  9. DrPizza says:

    Ah, but hitting such limits, though annoying, should not be unexpected; one should expect limits to exist. After all, computers have only a finite amount of storage available to them. The limits might be very high, but they nevertheless exist.

    As for open ports and so on: they should have no influence on OS security whatsoever. Buffer overflows (far and away the biggest problem) are inexcusable laziness.

  10. DrPizza says:

    (though in practice, they make up a small proportion of Windows’ security issues; most problems are due to things like IE’s security zones being improperly implemented, not buggy services with open ports. Though such bugs are problematic because they permit worms, they’re a relatively small proportion of all the bugs a platform might suffer. And in practice, you can get good penetration with an exploit simply by asking people to run a program that you’ve sent them (even if it’s in a password-protected .zip file…), which is a problem that *no* platform is immune to)

  11. damien morton says:

    Actually, DrPizza, no platform in widespread circulation can protect from the "please run this exe" attack, but capability-based platforms, such as ErosOS, can.

    There’s no reason why random downloaded executables should be able to do anything at all. The act of installing such an executable should involve it requesting permission to use the resources and services it needs.

  12. Tom Meschter says:

    Dr. Pizza:

    In an ideal world open ports would have no influence on OS security, that is true, but then in an ideal world we wouldn’t have to worry about buffer overflows, page protection bits, or users running trojaned programs either. Obviously we don’t live in an ideal world.

    Because of that, security must be addressed on multiple levels from multiple approaches. Buffer overflows may be "inexcusable laziness" but they still happen. So we reduce the risk by running behind firewalls, closing ports we don’t need to have open, etc., etc. Such an approach can also mitigate potential architectural "deficiencies", such as the lack of page protection bits. It’s simply good and proper practice given that code isn’t perfect.

  13. Dr Pizza says:

    [quote]Actually, DrPizza, no platform in widespread circulation can protect from the "please run this exe" attack, but capability-based platforms, such as ErosOS, can.[/quote]

    It’s not clear that this is true.

    [quote]There’s no reason why random downloaded executables should be able to do anything at all.[/quote]

    Because users need to download executables that can do anything at all.

    [quote]The act of installing such an executable should involve it requesting permission to use the resources and services it needs.[/quote]

    Which is completely unacceptable, which is why such platforms offer no solution. The permission can’t be granted on any automatic basis, so the OS must ask the user. If the OS asks the user, one of two things will happen:

    they’ll say "no" to everything, and be left with a largely unusable machine

    they’ll say "yes" to everything, and be left with no effective security

  14. Dr Pizza says:

    oh bollocks.

    blogs are not message boards.

    blogs are not message boards.

    blogs are not message boards.

    blogs are not message boards.

    blogs are not message boards.

    blogs are not message boards.

    blogs are not message boards.

    blogs are not message boards.

    blogs are not message boards.

  15. Dr Pizza says:

    "In an ideal world open ports would have no influence on OS security, that is true, but then in an ideal world we wouldn’t have to worry about buffer overflows, page protection bits, or users running trojaned programs either. Obviously we don’t live in an ideal world. "

    We don’t have to live in an "ideal" world. We just have to live in a world where we stop accepting this lie that software is inevitably buggy and will inevitably have security holes. It’s just *not*true*.

    "Because of that, security must be addressed on multiple levels from multiple approaches. Buffer overflows may be "inexcusable laziness" but they still happen. So we reduce the risk by running behind firewalls, closing ports we don’t need to have open, etc., etc. Such an approach can also mitigate potentioal architectural "deficiences", such as the lack of page protection bits. It’s simply good and proper practice given that code isn’t perfect. "

    Code doesn’t have to be "perfect", though it could and should be. I’m demanding a much lower threshold: "not readily exploitable". This means that you’ve actually got to validate input properly (so you ensure you don’t get a number that’s too big, or too little, or doesn’t match the amount of data, etc.), but since that’s what programmers are paid to do, it’s a perfectly reasonable expectation.

  16. fuzu says:

    "We don’t have to live in an "ideal" world. We just have to live in a world where we stop accepting this lie that software is inevitably buggy and will inevitably have security holes. It’s just *not*true"

    Haha, how I wish this were true. But as long as software is written by humans, it will inevitably be buggy, because humans are buggy. We are flawed creatures, and thus our creations will be flawed.

    On a higher level, there is also the user aspect. Our devs have used certain "features" in software, only to find that two months down the line those "features" were deemed "bugs" by MS. A patch comes out to fix the "bugs" and suddenly our app breaks. To you, the coder, that bug was a serious security breach. To us, the users, that was a really useful thing you wrote in. From our perspective, the patch that broke our app was the real bug.

    I do agree with you that we should expect programmers to do more when it comes to fixing buffer overflows and other such errors. I just don’t raise those expectations so high as to believe software will never be buggy. And as such, I want a plethora of weapons: firewalls, port management, user controls, etc. that help in the fight.

  17. D. Brian Ellis says:

    Thanks for the quick post on this Cyrus. I’m planning on keeping my home machine (maybe turn it into a central media server is the current line of thinking) and my work development will stay in Windows. I think that the next computer purchase may well be an Apple though. Thanks for your 2Cents.

    Brian

  18. Damnit! I woke up this morning all ready with responses and Tom+Fuzu beat me to it.

    Hrmm… need something to write… ok… "what they said" 🙂

    "We don’t have to live in an "ideal" world. We just have to live in a world where we stop accepting this lie that software is inevitably buggy and will inevitably have security holes. It’s just *not*true*. "

    It may not be true that software is "inevitably" buggy. But it is true that software is buggy today and that it will continue to be buggy for the near future. I know of no protection against the fact that buggy humans write code and end up with buggy programs. Currently, in every system that I’ve used, the path from the code I wrote to the code that is executed was entirely created by humans, and every layer (my code, the compiler, the linker, the OS, etc.) has its bugs. As such, it is essential to take steps to prevent the bugs that will crop up.

    When someone is able to figure out how to get programmers to write 100% bug-free software, then I’ll change my tune. But remember, as I stated before, you can still have bug-free software that is designed in such a way that it is overly dependent (once again) on a buggy human 🙂 As long as our users are buggy we have a problem; as long as the developers are buggy we have a problem. And reducing the number of vectors of attack is essential for dealing with the problems we have now.

  19. D. Brain: No problem.

  20. Hrm… That was supposed to be "D. Brian". But i think i like "D. Brain". Reminds me of that little guy who would always say "D plane boss, d plane!"

  21. Dr Pizza says:

    "Haha, how I wish this were true. But as long as software is written by humans, it will inevitably be buggy, because humans are buggy. "

    Horseshit.

    Are you telling me that developers can’t write a program that can (say) reliably print "Hello world!"? Add a couple of numbers? Draw a box on-screen? etc.?

    No doubt you’ll say "OK, you can write that kind of thing without any bugs, but then you’ll be dependent on compilers and libraries and the OS, and even if your program isn’t buggy, they will be".

    To which I’ll respond, "But the same response is true of them. Are you seriously telling me that an OS author can’t buglessly call an interrupt to write a number to the screen or call another interrupt to read from the keyboard?" (only a simple OS, for sure, but we should start simple).

    I suppose you’ll then respond, "OK, they can do that, too, but then they’ve got to rely on hardware, and even hardware is buggy".

    And, yet again, I’ll trot out the same response. "Are you really claiming that a hardware developer can’t combine some transistors to build a reliable, bug-free gate? That he can’t combine gates to build a reliable, bug-free adder? Memory? etc.?"

    Bugs are not inevitable. They are not unavoidable. We should not accept them as a matter of course, and the fact that they /are/ accepted is due to a massive lie that the software industry has invented and perpetuated. It’s just not true. End users should demand better; managers should demand better; developers themselves should demand better. Bug-free programs /are/ possible, and anyone who says otherwise is a liar.

  22. D. Brian Ellis says:

    Many many days of grade school nicknames revolving around "Brain" in my past. Thanks for dredging up those horrible memories…Perhaps you’d like to kick my dog while you’re here…sniff..sniff. My friends and I always did like that cartoon though:

    Friend: "What are we going to do tonight, Brain?"

    Me: "The same thing we do every night bonehead friend of mine…"

    Both: "TRY TO TAKE OVER THE WORLD!"

    Brian

  23. "Are you telling me that developers can’t write a program that can (say) reliably print "Hello world!"?"

    Dr.Pizza: Interestingly enough, in almost all cases that I’ve seen, someone attempting to do just that tends to fall short.

    It takes an incredible amount of care and rigor to write even the simplest thing correctly. I don’t accept your logic.

    It is possible a person might be able to write one line of code correctly, and 1 million lines of code might be gotten by starting with one perfect line and just adding perfect line after perfect line. However, that still entails the person making no mistakes anywhere along the way.

    "Bugs are not inevitable. They are not unavoidable. We should not accept them as a matter of course, and the fact that they /are/ accepted is due to a massive lie that the software industry has invented and perpetuated. It’s just not true"

    It’s not a lie that the software industry has perpetuated. It’s something that humanity has perpetuated. So far I’ve never seen a single thing in my life that involves humans be bug-free. Mathematics, because of the enormous amount of rigor that normally goes into it, has the potential to get there, but it moves at a glacial pace and suffers from the fact that even there people make mistakes.

    If you can show me a way to ensure that a person writing software makes no mistakes whatsoever, I might believe you. But I’ve never seen such a way. And, as such, I feel that it’s not a lie. I’ve never seen any person write bug free software. I’ve never seen any person write a system that ensures that bug free software will be written.

    However, this is somewhat irrelevant. While we talk about ideals and "what might be", the actual fact of the matter is that software today does have bugs, and these bugs do affect the security of a system. Mitigating attacks against our current systems is worthwhile.

    Yes, it’s fine to say "they’re architecturally equivalent". However, OSX and Windows aren’t architectures or designs. They’re actual implementations produced by thousands of flawed, buggy programmers. Because of that fact, one needs to look at the policies and safeguards put into place and factor them in when deciding which system is safer.

    I welcome the day that all software is bug free and all users make no mistakes. That day is not now and it will not be here for quite a while. Based on these facts I stand by my initial guidance.

  24. fuzu says:

    Dr Pizza: if you want to go to the hardware level, you should know that transistors are not bug free. Same with memory. You simply can’t (at least now, and most likely not in the future) guarantee that each piece of hardware produced and shipped will be flawless. Ask anybody who has managed or operated an assembly line: in the long run, there’s no such thing as perfect output. QA might have a day, a week, even a month where no product flaws were detected, but turn that into years of manufacturing and you’ll always find some percentage that had to be thrown out. Mechanical parts flip out all the time and at random – you simply can’t control it.

    That being said, if you can find a way to thwart quantum mechanics and thermodynamics so nothing buggy ever occurs from the basic atoms of any raw material that goes into hardware production, please let me know so we can make bajillions of dollars.

  25. Dr Pizza says:

    "Dr.Pizza: Interestingly enough. In almost all cases that I’ve seen, someone attempting to do just that tends to fall short. "

    Then I have no idea what you’ve seen; it’s certainly not true of the Hello Worlds I’ve seen (or authored).

  26. Dr. Pizza: It’s usually a case of the caller not checking and responding to all the invariants of whatever print function they are calling: for example, in languages with return codes, not checking the return value.
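    A trivial shell sketch of that point (assuming a Linux-style /dev/full device, which exists precisely to simulate a full disk):

    # printf reports write failure through its exit status; most hello-worlds never look
    printf 'Hello, world!\n' > /dev/full || echo 'write failed' >&2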

    The point was not that it’s impossible; the point was that the amount of rigor necessary is far beyond what most people are capable of.

    There is also the issue of dependencies. Say I make a perfect call into an existing function. Then later, that function changes in some way where the code still compiles but is no longer correct, i.e. a breaking change happened. I’m not omniscient and I don’t realize that I need to fix up that code. Maybe I’ll be lucky and a test will catch it. But maybe the change only pops up very rarely and we never see it until much, much later.

    Again, this only works if every person designs a perfect system and then that system gets implemented perfectly in code. So far no one has ever shown that to be possible in any endeavor whatsoever.

    Regardless, again, this doesn’t change the current state of affairs. Code is not perfect and we should do what we can to help keep systems secure.

    I want to also add another perspective. Even if code were perfect, the cost might be too large. Code doesn’t usually exist in a vacuum. It exists because some customer wants it. Many customers might refuse to pay that premium and would be willing to accept bugs given the lower cost.

    Now, of course, I have no idea if the cost would be more (maybe it would be less!). However, as I’ve stated, no one has shown how to write bug free software yet. If it were cheaper then I would imagine that it would take off. Imagine the company that could actually pull that off. As fuzu said, it would "make bajillions of dollars" on that very fact alone. Why would anyone ever choose anything else when there was a cheaper system out there that was bug free?

    It’s not logical to conclude that it can’t be done, but it is reasonable to assume that it is very hard. (Note: my assumption is due to my experience in this area and my complete inability to do anything perfectly.)

  27. DrPizza says:

    "As fuzu said, it would "make bajillions of dollars" on that very fact alone. Why would anyone ever choose anything else when there was a cheaper system out there that was bug free? "

    People have demonstrated that buggy is "good enough", in part because they’ve started to believe the lie, so what incentive is there to do anything better?

  28. Dr. Pizza:

    a) To make money.

    b) To prove everyone is wrong

  29. fuzu says:

    The incentive is the "bajillions of dollars." Why else does anyone ever try to improve any consumer product? Like Cyrus said, if you make bug-free software and sell it at a reasonable price, you could rake it in (assuming you had a competent marketing team). If I had a choice between buggy software and bug-free software at the same price, I would quite obviously choose the bug-free version. The odds of such a choice ever being offered to me, however, are pretty damn slim. It would require ensuring that randomness in matter never occurs again – but more power to you if you figure that out.

    At any rate, in the end, it boils down to the fact that I have never seen evidence that 100% bug-free software can be created, and as in all things, I believe the negative until the positive has been proven. It’s up to people like DrPizza to provide that proof.

  30. fuzu says:

    Cyrus. Stop. Reading. My. Mind.

    This has been kind of creepy…I’m going to stay away from this thread now…

  31. damien morton says:

    Considering that the software we build is built upon components and operating systems and BIOSes and hardware made by people we will never meet or engage with on any level other than a binary like-it-or-leave-it basis, even if you, personally, wrote bug-free code, the chances that the operation of your code will be 100% bug-free are incredibly small.

    I am reminded of Vernor Vinge in A Deepness in the Sky, talking about the sought-after profession of "programmer archeologist", whose job it is to sift through code libraries thousands of years old to find systems useful for the problems of the day.

    I won’t say it’s like that today, but programming today involves working on top of a number of layers that weren’t there 20 years ago. That phenomenon is only going to continue as the history of programming continues. The opportunities for wiping the slate clean get slimmer and slimmer.

    Life is messy. Code is messy. If you can write code that can operate reliably in a chaotic environment, that would be where the squillions of dollars are to be made.

    Then again, if you told someone their $500 computer was 90% reliable (one nine), and that they could buy extra nines for $50 apiece, how many do you think they would buy?

  32. Dr Pizza says:

    "The incentive is the "bajillions of dollars." "

    But there aren’t bajillions of dollars to be made. The assumption that people will switch to something just because it has fewer bugs is naive and not borne out in practice. They don’t. We see this every time a new worm does the rounds and exploits a long-fixed hole. People just don’t /care/ that much about bugs, and they’re not going to change their software–even in a really small way (such as installing a hotfix or service pack)–to reduce the number of bugs within it.

  33. Dr Pizza says:

    "Dr. Pizza:

    a) To make money. "

    None to be made. People don’t even switch to less buggy platforms when the cost to them is nil.

    "b) To prove everyone is wrong "

    It’s been done countless times. int main() { printf("Hello, World!\n"); return 0; } is bug-free (it might not be good style, as it relies on implicit declaration, but that’s another matter entirely). It doesn’t check return values, but it doesn’t have to, as that doesn’t influence its correctness. The task is not "write a program that prints hello world even if the libraries go tits up and break"; the assumption is that the libraries are bug-free too. And why shouldn’t they be?

  34. fuzu says:

    "People just don’t /care/ that much about bugs, and they’re not going to change their software–even in a really small way (such as installing a hotfix or service pack)–to reduce the number of bugs within it."

    They don’t do this because hotfixes and service packs are known to carry their own bugs. This has happened to me and my company several times. You download a security patch, and it breaks other parts of your system. Why install more bugs on top of the ones already there? Plus, if you have software from a company that is known for its security holes, what makes you think people will trust that company to fix the holes correctly?

    "the assumption is that the libraries are bug-free too. And why shouldn’t they be?"

    Because, quite simply, they can’t always be.

  35. Dr Pizza says:

    "They don’t do this because hotfixes and service packs are known to carry their own bugs."

    You give people way too much credit (the assumption that they care about such things is not reflected in reality), and in any case, the majority of such fixes reduce the total number of bugs, so represent an improvement, even if not perfection.

    "Because, quite simply, they can’t always be. "

    So you keep saying, but without any sound argument as to why this must be so.

  36. Dr. Pizza:

    "but that’s another matter entirely). It doesn’t check return values, but it doesn’t have to, as that doesn’t influence its correctness. The task is not "write a program that prints hello world even if the libraries go tits up and break"; the assumption is that the libraries are bug-free too. And why shouldn’t they be? "

    The library can be 100% bug free, and you can still have an error condition.

    If your task is "write a program that prints hello world sometimes, and when it can’t, silently fails", then you’ve succeeded. However, under that kind of definition I could claim anything is a success.

    Yes… my program works 99% of the time, and when it doesn’t, I just "catch (Exception)" and quit gracefully, so it’s ok.

    The issue is that you didn’t fully qualify what your program was. All you said was "Are you telling me that developers can’t write a program that can (say) reliably print "Hello world!"?". The answer is that they can only if you constrain the problem significantly. However, we have no good way of expressing those constraints (that I know of), and it’s never been clear how you combine all the constraints into a satisfaction problem that you can test acceptance on.

    In your case here, you’ve silently swallowed errors. Is that acceptable? Depends. I would say no. If someone uses your program they’re going to want to know if it actually printed or not. If you don’t provide that, then your program is literally irrelevant, because it takes in no input and isn’t guaranteed to produce any output.

    So, in effect, all you’ve written is:

    int main() { return 0; }

    Now, that might be the base case of a bug-free program. However, I’ve yet to see you prove your inductive step (that any bug-free program can have a bug-free line added to it that maintains its bug-freeness). You’ve yet to show how someone can determine that the line added is bug free.

    –"Because, quite simply, they can’t always be. "

    –"So you keep saying, but without any sound argument as to why this must be so."

    And you’ve never made a sound argument otherwise.

    As fuzu said before: " I believe the negative until the positive has been proven. It’s up to people like DrPizza to provide that proof."

    She’s not claiming that bug-free software is impossible. She’s claiming that she’s skeptical, given its dependence on flawed, failing, fatigued sacks of flesh, and because she’s never seen otherwise.

    I claim it’s not possible currently because we lack the tools and language to be able to express the necessary information to make things bug free. As it stands today, a developer needs to understand an immensely large amount of information in order to just write anything, and the interdependencies and intricacies are not always clear and explained. Thus, in order to make bug free software a developer needs to be omniscient.

    For example, say I start using a library you’ve created. Everything works great for a while and then you upgrade the library and give it new functionality. However, you forget to tell me that you’ve changed the contract of something, and my assumptions are now broken. You made a mistake by not documenting it… but even if you had, I probably would have missed the change unless I was paying attention incredibly closely.

    The issue I take with your argument is that while it is certainly logically valid, you haven’t addressed how it is possible for us to do this at finite cost. I’m not omniscient and people do make mistakes. I don’t see how it’s possible to write this bug free software while both are true.

    Do you have some mechanism for ensuring that every line added to the perfect: "int main() { return 0; }" is in itself perfect? A way of expressing every constraint and a way of checking it with an infallible system? Personally, I don’t think we could leave it to a human to manually check, since I think it would be too hard. That leaves it up to a computer. But now this seems to get into the realm of program verification.

    Back when we studied Hoare triples we concluded a few things. First, writing a total specification is incredibly hard and, as a human does it, error prone. Second, proving that your program meets that specification is harder still. In a Turing-complete environment, all that we were able to find were heuristics that were sometimes successful but not always. (See work by Pamela Zave on this.)
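    For anyone who hasn’t seen them: a Hoare triple {P} C {Q} asserts that if precondition P holds and command C runs (and terminates), then postcondition Q holds afterward. A toy example:

    { x = N }   x := x + 1   { x = N + 1 }

    Even in this tiny case a human had to pick P and Q; scale that up and the specification itself becomes another place for bugs to hide, which is exactly the problem.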

    Note: In response to "To prove everyone is wrong" you say "It’s been done countless times. … is bug-free". Again, I posit that this is merely a base case. There isn’t a CS researcher out there who wouldn’t love to prove everyone wrong, not only to show that it was possible but also to gain notoriety and grants for life.

    Also: Even with the state of the art in software verification, you always have a human involved, even in the step where you write the formal spec. That formal spec is then open to errors, which can then show up as errors in the implementation.

    Finally: As we’ve been saying, the cost here is enormous. I’m not saying it can’t be done; what I’m saying is that it won’t be done in its current state. No one is willing to pay the enormous cost necessary to produce bug-free software, and importantly, this goes back to the original topic, which is that software is currently buggy and design/policy choices must be made to mitigate that. If you can prove that this is possible (cost included), then, as fuzu said, I’ll quit my job now to work with you to get this done.

    If it turns out you’re right, hurrah. If you can get bug-free software to be the norm so that we don’t need these safeguards in place, terrific. However, none of that is currently helpful here, because we’re in the state we’re in and it doesn’t look like we’re changing any time soon.

    Soooo, back to the original topic which we’ve been waylaid from…

    "Finally, I have to say that it’s nice to be using an OS that I feel safer under than with Windows. A lot of what Apple did with their OS would map directly onto core tenets in Trustworthy Computing, things that I don’t feel consumer Windows users will see until XP SP2 is released (years after we announced TWC and years after OSX has already been demonstrating its value). I think that in that regard they’ve been incredibly responsible and they deserve kudos. I feel pretty safe on windows, and I’m usually very responsible. I run behind a firewall, get updates, and run a virus scanner. But even still, there is a lot of exposed surface that I would prefer to be sealed."

    I still stand by that assertion, and absolutely nothing I’ve seen so far makes me feel any different about it. Am I upset that we’re not in an ideal development world? Yes. But given that that’s the actual fact of the matter, we do what we can to deal with it.

  37. Dr Pizza says:

    "In your case here, you’ve silently swallowed errors. Is that acceptable? Depends. I would say no."

    It is acceptable because the hardware and supporting infrastructure are all bug-free too.

  38. Dr Pizza says:

    "Again, i posit that this is merely a base case."

    To refute the claim that it’s impossible, that’s all it has to be.

  39. Dr Pizza says:

    "Do you have some mechanism for ensuring that every line added to the perfect"

    No, I don’t, but I don’t need it to be right.

  40. Dr. Pizza: In this case, we don’t actually really care if you’re right, as it boils down to a silly matter of baited semantics.

    Fine. I’ll stand down. A single instance of software can be bug free. However, the original claim was:

    "We don’t have to live in an "ideal" world. We just have to live in a world where we stop accepting this lie that software is inevitably buggy and will inevitably have security holes. It’s just *not*true*. "

    You do not prove that by saying "it’s true because I can write ‘int main() { return 0; }’". When we refer to software we are referring to software at least as capable as what exists today. Again, the burden of proof is on you to show how you can go from the base case to the software we have today.

    Remember, the claim is not "that software is inevitably buggy"; the claim has been "no one knows how to create it otherwise, especially in a cost-effective manner". AFAIK that is not a lie.

  41. Dr.Pizza: "To refute the claim that it’s impossible, that’s all it has to be. "

    You didn’t claim that it was possible. Specifically, you said:

    "We just have to live in a world where we stop accepting this lie that software is inevitably buggy"

    Ok. I’m going to break that down as I understood it.

    "software is inevitably buggy" is a lie. Therefor the truth is "buggy software is avoidable"

    This was spoken in the context of statements concerning OSX and windows security. So the statement is now:

    "buggy operating systems are avoidable"

    Your proof of this was a single base case of a bug-free program. However, this does not sufficiently meet the burden made by your original statement in the context of this discussion.

    If you wanted to make a claim free from all of what we’ve been discussing here, you are free to do that (preferably in a new thread so that this one won’t be hijacked), but to pretend that statements like that would be treated in a vacuum is disingenuous.

    We’re having a discussion about the necessity of levels of protection for the user given the current state of computing. You’re saying that that state need not exist and is a "lie". But again, you haven’t shown how we can reach an equivalent level of functionality in a bug-free manner.