Pawel Brodzinski on Software Project Management

Developers Should Work on Crappy Machines

At the moment my Firefox uses more than 250MB of RAM. Today’s peak was 470MB. Meanwhile Internet Explorer (we use SharePoint, and SharePoint in Firefox sucks) eats more than 150MB, and with each tab I open it grabs another 25MB.

What the hell? What do these applications do with all that memory? Some kind of temporary public distributed storage where they compute how to rule the world or something?

If I restart both browsers and they reopen all their tabs, Firefox needs less than 120MB and Internet Explorer less than 100MB of RAM. What did they use the rest for before the restart? I guess I already asked, but: what the hell?

Believe me, browsers aren’t the only applications which suck in terms of memory usage. TweetDeck? 90MB reserved instantly after starting – and that’s for an application which pretty much works as an RSS reader. MS Outlook? 80MB to 100MB after a short while. Live Messenger? 60MB right after start, just to log me in and display a contact list. Wow. Or should I ask: what the hell?

The developers of each of these applications don’t give a damn about the memory they use on client machines. They allocate loads of memory whether they need it or not. Thus they should be punished.

As punishment they should work on machines which are crappy in terms of available RAM (and arguably processor power). This way they would suffer each time they had to check anything in the working application. Developers are a pretty smart beast. They’d get the point.

So slow. Oh so slow. Why is the swap file used so extensively? Hm, my machine ran out of RAM, that’s why. Maybe the app is allocating too much memory? Maybe I should do some refactoring to show them what The Real Hacker can do when he’s pissed off because of his too-slow PC…

I’m not advising that developers should get 1024×576 displays, which would make minimal screen resolutions fine in virtually every application. I’m not a sadist. Not to that level, at least. Let them keep their fancy 22″ screens (or whatever they get these days). However, exchanging their machines for some old crap would make the users’ world nicer, since developers would share our pain in the ass. It’s so humane, isn’t it?

When they learn to care about memory usage they can get their super-duper PCs back – a carrot to complement the stick.

in: software development, user experience

19 comments… add one

  • johnfmoore April 7, 2009, 2:16 pm


    While I agree with you that people sometimes lose sight of what machines end-users will be running on, I think you’ve got the wrong solution. Of course, as a head of engineering, you probably did not expect me to agree with your solution. :-)
    Client applications need to have well-defined targets for minimum machine configurations, and the engineering team must be held accountable for meeting these minimum requirements. Too often trade-offs are made for features in place of meeting these performance requirements.
    I guess you could argue I want the best of both worlds. I want the nice machine, but I also want to see people held accountable for delivering quality applications.


  • Pawel Brodzinski April 7, 2009, 2:41 pm

    Actually, I guess no one from engineering would be willing to agree. Something I expected, though.

    The question is: what did the non-functional requirements say about memory usage for the mentioned applications (Firefox, IE and TweetDeck being my favorites here)?

    Or a better question: were there any?

    Or should we ask: did anyone care about them once they were written down?

    If a couple of browsers can eat up all the available memory on a 1GB Windows XP machine or a 2GB Windows Vista machine, that’s utterly wrong, no matter what functional tradeoffs were made along the way.

    Another thing is why restarting the browsers gets you back almost half of the allocated memory. It wasn’t needed, I guess, which is pure sloppiness. TweetDeck takes it all upfront each time – I’m not sure which is better.

    When my applications work flawlessly I don’t care what their developers work on. Unfortunately, sometimes my applications suck, which leaves me with this kind of idea.

  • blorq April 7, 2009, 11:12 pm

    The new DDR3-based machines all seem to have 6 RAM slots (or 12 if you are lucky enough to have a real workstation on the Xeon 5500 series), and 12 gigs of RAM is cheap.

  • Martin Clarke April 8, 2009, 12:09 am

    Pawel, your solution is going to result in your software being late. And your developers being grumpy. I think you’re exaggerating away from plausible solutions. For example, ensure that QA are looking at the app on a min-spec machine and providing feedback to the devs early and often.

    A more practical alternative would be to set every developer up with a virtual machine; that way you can tune it more or less to your min-spec machine. The only issue is if you’re doing something with 3D graphics, where virtualisation could get a bit hairy.

    As to your point in the comments, was the memory footprint defined? No, I’d say probably not. But who would define it, and how precise can we really be about memory before we start? We can only really go on what our previous experience tells us.

    As a user, what you’re saying about Firefox, IE and TweetDeck seems perfectly reasonable – they’re too memory hungry! Putting on my engineering hat, I could imagine that in Firefox and IE items from the cache might have been loaded into memory. Regardless of whether that’s true, the devs are making judgement calls, and they need to be trusted to do the right thing and make engineering judgements.

    The more feedback on performance devs get during the build phase, the better.

  • Pawel Brodzinski April 8, 2009, 12:21 am


    If that’s a reason for not caring about RAM consumption at all, well, I wouldn’t like developers thinking this way on my team. Sorry.

    If you consider a typical user machine you should think about:
    – A PC/notebook which stands in the office, so the user has no control over the hardware (they won’t go buy a new motherboard and fill all the slots with RAM modules).
    – A PC/notebook which is aging a bit, but no one’s willing to exchange it for a new one as long as the old one serves its purposes.
    – A PC/notebook which is actually crappy.

    Now, of course, buying a single machine isn’t extremely expensive, but exchanging hundreds of them across a whole company just because developers did lousy work isn’t something I’d happily do.

    Another, even more important, argument is how the application behaves when used by many threads or under heavy load. I’ve seen this browser-developers’ approach in applications which ran on high-end servers but faced heavy load. The end was pretty tragic. The main reason was that developers didn’t care about memory, which just kept disappearing over a few days until the application either crashed or refused to process any more events.

    Do you still think you don’t have to give a damn just because RAM is cheap?

  • Pawel Brodzinski April 8, 2009, 12:39 am


    Of course I’m exaggerating away from plausible solutions. Anyway, the problem is real. And it’s not addressed in surprisingly many software development teams.

    If non-functional requirements for memory usage were set, fulfilled and verified consciously, there would probably be no problem at all. Unfortunately they are not. And we aren’t talking about some garage company built by a couple of graduates, but about probably the two most popular applications in the world (browsers), the most popular office platform (MS Office) and the leading application on the trendiest social platform (TweetDeck; in this case it isn’t very mature yet, so I don’t expect such high standards here).

    Something is screwed up in the end-to-end software development process if people fail to do memory management reasonably. And yes, the proper way of dealing with the problem would be good non-functional requirements management. It doesn’t seem to work in many places, though.

    By the way, I can guess why and when browsers allocate large chunks of memory – heavy pages loaded with JavaScript seem to do much harm here. However, an analysis of what’s loaded into memory would show how much of it is information which is no longer relevant, or the same information cached a couple of times over.

  • Petros April 8, 2009, 5:01 am

    Last year I wrote a post on my blog claiming the opposite. Of course, I am a programmer, and I am biased. I’ll just give you the link so you can read my arguments:

    Should programmers have a fast PC?

  • Pawel Brodzinski April 8, 2009, 5:53 am


    Don’t take it too seriously. I’m just trying to cause a stir. As I’ve stated in previous comments, developers do a crappy job with resource management. Sometimes because they don’t care, sometimes because they aren’t aware. Either way, they’d be forced to notice the problem while working on some crap posing as a developer station.

    Now, I admit that’s an exaggerated conclusion (it was intended that way) and it would bring more than just awareness of limited resources.

    I also agree that there are better options for dealing with the problem (proper non-functional requirements management, serious performance testing). Unfortunately they’re often omitted.

    For some reason no one argues that the given examples do a good job in terms of memory usage (by the way, I bet their developers work on pretty powerful machines). In terms of requirements management and testing we probably should expect much from the vendors of these apps. Yet somehow they suck at resource management. Why?

    I guess the more discussion we have, and the more awareness of the problem among developers, the better. I’m glad to bring it to the table.

  • Paul Marculescu April 8, 2009, 6:02 am

    Hehe, I remember about 5 years ago I was working with 2 friends on a spam detection desktop application.

    On startup, the program initialized a matrix used for searching, containing a few thousand words.

    I remember I had the slowest computer among us, a 400MHz Celeron. On my colleagues’ machines, the initialization took less than 1 second. On mine, I was looking at the splash screen for about 30-40 seconds.

    I got so annoyed that I had to fix it. It turned out to be a sequence of unnecessary malloc calls. I replaced them with a single one for the whole size of memory needed, and the splash screen delay disappeared immediately. :)

  • Petros April 8, 2009, 6:17 am

    Pawel, I didn’t mean to sound like I was taking it so seriously. I know my post comes out a little bit like that, but it was written a year ago, immediately after one of my managers suggested switching to slower computers as a solution to the problem you mention in your post. I wanted to blow off some steam that day, and that’s why my post is rather harsh.

    Of course, no matter what the solution is, we both agree that there is a problem with memory-hungry, unoptimized, slow software, which tends to be the rule nowadays.

  • Pawel Brodzinski April 8, 2009, 6:18 am


    That’s exactly the trick. You were aware of the issue, so you did something about it. Otherwise it would have stayed there until your users started calling you to fix it, or until someone advised in his rant that all of you should be bought crappy machines to work on.

  • Pawel Brodzinski April 8, 2009, 6:23 am


    I guess I’d need to blow off some steam too if someone seriously told me I should switch to a worse machine than I already have.

    Anyway, next time I’m going to cut the resources for the testing environment by at least half, no matter what the developers request…

  • Meade April 8, 2009, 2:04 pm

    I agree with this approach – years ago, in client/server times, it was standard to ensure that what was being developed worked for the clients. If a programmer is not ‘forced’ to work in the same environment the tool will be used in, how will they ever experience the same results? Giving them virtual or second machines just means more resources being ignored by the programmer because they don’t have time (gee… I don’t have time to test – that’s QA’s job)… I’m all for it.

  • Pawel Brodzinski April 8, 2009, 4:02 pm

    It’s all about being aware. If developers think about the resources they use every time they add a line of code, that’s fine.

    Unfortunately, you’ll meet quite a lot of developers who expect to have all the processing power and all the memory in the world available for their application. “RAM and processors are cheap these days,” they’d say.

    Well, maybe they are, but that doesn’t automatically change the computers your app will be working on. Share the pain of your users. Or at least try.

  • Paul April 8, 2009, 7:04 pm

    Developers need good machines because developers cost a lot of money and shouldn’t be spending a lot of that time waiting for builds.

    The importance of memory usage is something that is decided at the project level as well as at the developer level. I can see how much memory my application is using whether I have 64MB or 64GB, so why waste developers’ time with a slow machine?

    If a developer hasn’t handled memory usage well, this should be picked up in testing and the code bounced back to the developer. Memory usage should be on the developer’s list of tests.

  • Anonymous April 8, 2009, 9:30 pm

    Hey dude, I totally agree with you about resource-hog applications, but… part of the problem is the tools we’re given. .NET applications are often enormous. Especially the WPF garbage. The equivalent of “hello world” in a WPF client takes about 20 megabytes or more to run.

    People (managers) want software fast. They buy into the latest technology BS because they think it will make better applications in less time (it won’t).

    Problem is, the new new stuff always uses more memory and resources than the last.

    If you want a lean app, ask for it to be written against the plain Win32 APIs in C or C++.

    But then again, there aren’t many real programmers left who know how… so you’re going to be stuck with this situation.

    Just my two cents.

  • Pawel Brodzinski April 9, 2009, 2:43 am


    I agree that “if a developer hasn’t handled memory usage well, this should be picked up in testing and the code bounced back to the developer; memory usage should be on the developer’s list of tests.”

    The real problem is that most of the time it’s neither picked up during testing nor put on the developer’s list of tests. I know everyone would find a number of reasons why it isn’t done in a given situation, and it will all be oh so understandable. Except that’s just looking for excuses for building crappy apps.

    I assure you I know why it’s better when developers have workstations with plenty of processing power, and you should treat the idea as thrown out tongue in cheek, but the problem is real. Acknowledging that developers and/or quality engineers should do something about it doesn’t move us any further.

  • robertkoguciuk April 22, 2009, 3:25 pm

    Ideas recalled from my talks with fellow developers back xxx years ago:

    “with this new programming language and its virtual machine memory leaks are now a thing of the past” (same goes for so called “Unexpected Application Errors”)

    “well, you get short of memory, buy more RAM, it’s so cheap” or “your app is too slow, buy faster CPU”

    Isn’t computer history evolving in cycles? Unfortunately, our competitors also have access to those faster CPUs and cheap RAM, don’t they?

    The real question is: how do you stay free of memory leaks and still within budget? High-performance, leak-free systems typically cost a bit more to produce. If management does take extra testing and a disciplined programming process into account when preparing the budget, will they persuade the end customer to absorb the extra cost? Will they be met with applause?

    That’s the bottom line. Prices must go down, at least a little bit. Companies must adopt tight budgets; otherwise they turn less competitive. Those tight budgets must exclude something that is not absolutely necessary, ugh… like some testing or code reviews… and there we go… RAM consumption growing.

    There are a few niche markets where they do actually create Software Requirements Documents with Performance Requirements and Resource Consumption sections included. They do because, if they did not, human life could be endangered, for instance, or their worldwide competitor could do the same thing a little bit faster or more efficiently. I had the honour of working for such a business.

    All less critical apps (including browsers and telephony gadgets) have to live with the occasional reset, I guess.

  • Pawel Brodzinski April 22, 2009, 11:30 pm


    Of course it’s all about the end customer at the end of the day. And of course the questions you raise about whether the customer is willing to pay for extra quality are valid.

    The answers, however, aren’t simple. I have paid neither for Firefox nor for TweetDeck, and that isn’t likely to change.

    The only power I have is that I can start using Internet Explorer or Chrome and check Twitter on the web instead. Actually, I switched to Chrome, but it still sucks in terms of usability, so I’m back with Firefox.

    It’s not only about the money a customer is willing to pay for an application – it’s a question about the whole business model.

    Usually, in the long run, better applications win if competitors can put similar effort into marketing and/or sales. Sometimes better apps win despite a lack of strong marketing/sales.

    The question is about the way you choose – how many compromises can you accept? Is compromising on quality one of them? Do you care about resources, or is it “they’ll buy more RAM and new CPUs”?

    I won’t give you the right answer. It depends on the situation of your product and your company.

    Either way, 500MB of RAM for an internet browser is an exaggeration.
