20 controversial programming opinions

August 29, 2012. 109 comments


One of the very first ideas we had for this blog was to convert some of the wonderful gems of the early, undisciplined period of our site into blog posts: questions that were once enthusiastically received by the community but no longer fit Programmers’ scope.

The first deleted question I’ve chosen is Jon Skeet’s “What’s your most controversial programming opinion?” (available only to 10K+ users, sorry), a +391-scored question originally asked on Stack Overflow on January 2, 2009. What follows are twenty of the highest-voted answers, in random order…

  1. Programmers who don’t code in their spare time for fun will never become as good as those that do.

      I think even the smartest and most talented people will never become truly good programmers unless they treat it as more than a job. Meaning that they do little projects on the side, or just mess with lots of different languages and ideas in their spare time.

    by rustyshelf

  2. Unit testing won’t help you write good code.

      The only reason to have Unit tests is to make sure that code that already works doesn’t break. Writing tests first, or writing code to the tests is ridiculous. If you write to the tests before the code, you won’t even know what the edge cases are. You could have code that passes the tests but still fails in unforeseen circumstances. And furthermore, good developers will keep cohesion low, which will make the addition of new code unlikely to cause problems with existing stuff.

    by Chad Okere

  3. The only “best practice” you should be using all the time is “Use Your Brain”.

      Too many people jumping on too many bandwagons and trying to force methods, patterns, frameworks, etc. onto things that don’t warrant them. Just because something is new, or because someone respected has an opinion, doesn’t mean it fits all.

    by Steven Robbins

  4. Most comments in code are in fact a pernicious form of code duplication.

      We spend most of our time maintaining code written by others (or ourselves), and poor, incorrect, outdated, misleading comments must be near the top of the list of the most annoying artifacts in code. I think eventually many people just blank them out, especially those flowerbox monstrosities. Much better to concentrate on making the code readable, refactoring as necessary, and minimising idioms and quirkiness. On the other hand, many courses teach that comments are very nearly more important than the code itself, leading to the “this next line adds one to invoiceTotal” style of commenting.

    by Ed Guiness

  5. “Googling it” is okay!

      Yes, I know it offends some people out there that their years of intense memorization and/or glorious stacks of programming books are starting to fall by the wayside to a resource that anyone can access within seconds, but you shouldn’t hold that against people that use it. Too often I hear googling answers to problems being criticized, and it really makes no sense. First of all, it must be conceded that everyone needs materials to reference. You don’t know everything and you will need to look things up. Conceding that, does it really matter where you got the information? Does it matter if you looked it up in a book, looked it up on Google, or heard it from a talking frog that you hallucinated? No. A right answer is a right answer. What is important is that you understand the material, use it as the means to an end of a successful programming solution, and that the client/your employer is happy with the results.

    by PhoenixRedeemer

  6. Not all programmers are created equal.

      Quite often managers think that DeveloperA == DeveloperB simply because they have same level of experience and so on. In actual fact, the performance of one developer can be 10x or even 100x that of another. It’s politically risky to talk about it, but sometimes I feel like pointing out that, even though several team members may appear to be of equal skill, it’s not always the case. I have even seen cases where lead developers were ‘beyond hope’ and junior devs did all the actual work – I made sure they got the credit, though.

    by Dmitri Nesteruk

  7. I fail to understand why people think that Java is absolutely the best “first” programming language to be taught in universities.

      For one, I believe that the first programming language should be one that highlights the need to learn control flow and variables, not objects and syntax. For another, I believe that people who have not had the experience of debugging memory leaks in C/C++ cannot fully appreciate what Java brings to the table. Also, the natural progression should be from “how can I do this” to “how can I find the library which does that”, and not the other way round.

    by Learning

  8. If you only know one language, no matter how well you know it, you’re not a great programmer.

      There seems to be an attitude that says once you’re really good at C# or Java or whatever other language you started out learning, then that’s all you need. I don’t believe it – every language I have ever learned has taught me something new about programming that I have been able to bring back into my work with all the others. I think that anyone who restricts themselves to one language will never be as good as they could be. It also indicates to me a certain lack of inquisitiveness and willingness to experiment that doesn’t necessarily tally with the qualities I would expect to find in a really good programmer.

    by glenatron

  9. It’s OK to write garbage code once in a while.

      Sometimes a quick and dirty piece of garbage code is all that is needed to fulfill a particular task. Patterns, ORMs, SRP, whatever… Throw up a console or web application, write some inline SQL (feels good), and blast out the requirement.

    by jfar

  10. Print statements are a valid way to debug code.

      I believe it is perfectly fine to debug your code by littering it with System.out.println (or whatever print statement works for your language). Often, this can be quicker than stepping through in a debugger, and you can compare printed outputs against other runs of the app. Just make sure to remove the print statements when you go to production (or better, turn them into logging statements).

    by David

  11. Your job is to put yourself out of work.

      When you’re writing software for your employer, any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort. It is well designed, clearly and consistently written, formatted cleanly, documented where it needs to be, builds daily as expected, checked into the repository, and appropriately versioned. If you get hit by a bus, laid off, fired, or walk off the job, your employer should be able to replace you on a moment’s notice, and the next guy could step into your role, pick up your code and be up and running within a week tops. If he or she can’t do that, then you’ve failed miserably. Interestingly, I’ve found that having that goal has made me more valuable to my employers. The more I strive to be disposable, the more valuable I become to them.

    by Mike Hofer

  12. Getters and Setters are highly overused.

      I’ve seen millions of people claiming that public fields are evil, so they make them private and provide getters and setters for all of them. I believe this is almost identical to making the fields public, maybe a bit different if you’re using threads (but generally that is not the case) or if your accessors have business/presentation logic (something “strange”, at least). I’m not in favor of public fields, but I am against making a getter/setter (or Property) for every one of them, and then claiming that doing so is encapsulation or information hiding… ha!

    by Pablo Fernandez

  13. SQL is code. Treat it as such.

      That is, just like your C#, Java, or other favorite object/procedural language, develop a formatting style that is readable and maintainable. I hate it when I see sloppy free-formatted SQL code. If you scream when you see both styles of curly braces on a page, why don’t you scream when you see free-formatted SQL, or SQL that obscures or obfuscates the JOIN condition?

    by MustStayAnonymous

  14. UML diagrams are highly overrated.

      Of course there are useful diagrams, e.g. the class diagram for the Composite pattern, but many UML diagrams have absolutely no value.

    by Ludwig Wensauer

  15. Readability is the most important aspect of your code.

      Even more so than correctness. If it’s readable, it’s easy to fix. It’s also easy to optimize, easy to change, easy to understand. And hopefully other developers can learn something from it too.

    by Craig P. Motlin

  16. XML is highly overrated.

      I think too many people jump onto the XML bandwagon before using their brains… XML for web stuff is great, as it’s designed for it. Otherwise I think some problem definition and design thought should precede any decision to use it.

    by Over Rated

  17. Software development is just a job.

      I enjoy software development a lot. I’ve written a blog for the last few years on the subject. I’ve spent enough time on here to have >5000 reputation points. And I work in a start-up doing typically 60 hour weeks for much less money than I could get as a contractor because the team is fantastic and the work is interesting. But in the grand scheme of things, it is just a job. It ranks in importance below many things such as family, my girlfriend, friends, happiness etc., and below other things I’d rather be doing if I had an unlimited supply of cash such as riding motorbikes, sailing yachts, or snowboarding. I think sometimes a lot of developers forget that developing is just something that allows us to have the more important things in life (and to have them by doing something we enjoy) rather than being the end goal in itself.

    by Greg Beech

  18. If you’re a developer, you should be able to write code.

      I did quite a bit of interviewing last year, and for my part of the interview I was supposed to test the way people thought, and how they implemented simple-to-moderate algorithms on a white board. I’d initially started out with questions like:
    Given that Pi can be estimated using the function 4 * (1 – 1/3 + 1/5 – 1/7 + …) with more terms giving greater accuracy, write a function that calculates Pi to an accuracy of 5 decimal places.
      It’s a problem that should make you think, but shouldn’t be out of reach to a seasoned developer (it can be answered in about 10 lines of C#). However, many of our (supposedly pre-screened by the agency) candidates couldn’t even begin to answer it, or even explain how they might go about answering it. So after a while I started asking simpler questions like:
    Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.
      Amazingly, more than half the candidates couldn’t write this function in any language (I can read most popular languages so I let them use any language of their choice, including pseudo-code). We had “C# developers” who could not write this function in C#. I was surprised by this. I had always thought that developers should be able to write code. It seems that, nowadays, this is a controversial opinion. Certainly it is amongst interview candidates!

    by Greg Beech

  19. Design patterns are hurting good design more than they’re helping it.

      Software design, especially good software design is far too varied to be meaningfully captured in patterns, especially in the small number of patterns people can actually remember – and they’re far too abstract for people to really remember more than a handful. So they’re not helping much. And on the other hand, far too many people become enamoured with the concept and try to apply patterns everywhere – usually, in the resulting code you can’t find the actual design between all the (completely meaningless) Singletons and Abstract Factories.

    by Michael Borgwardt

  20. Less code is better than more!

      If the users say “that’s it?”, and your work remains invisible, it’s done right. Glory can be found elsewhere.

    by Jas Panesar

What do you think? And more importantly, what’s your most controversial programming opinion?


Filed under Deleted Questions

109 Comments


  • pip010 says:

    I see nothing controversial. I agree with almost all of the opinions and can testify, justify and certify their validity from personal experience and case studies! 🙂 Only: 9 – only if you are hacking and playing on your own, or if you are planning on refactoring shortly after! That is a two-take approach to code writing!

    10 – NO. Although there are situations where you cannot easily debug – then you simply have no other option!

    17 – Not the case for me. I code because I love to code. Otherwise, you might as well become a lawyer, right?

    • Telanis says:

      “NO” isn’t much of an argument. What’s wrong with prints? You can produce a log that is basically equivalent to stepping through in a debugger and watching the important variables, except it’s produced as fast as the code can run, and it’s easier to read if you format it well. As noted in the answer, logs are also easily compared.

    • Tracker1 says:

      I find it funny that #1 and #17 are pretty much opposites. I also disagree with your take on #10… sometimes having a log/print statement is easier than stopping and debugging. I’ll do this a lot in JS, in the browser… simply monkey-patching a method and re-running the logic with logging in place. Since you can’t easily adjust a method, load the state in place, and then debug in the browser, it’s often the path of least resistance. I find this to be the case with scripted environments more than static/compiled environments, though.

    • voretaq7 says:

      I’m with Telanis and Tracker1 — Debugging prints are a perfectly valid way to debug code. In fact, given the choice between those or having to rebuild with breakpoints spaced through my flow control I would take the debug print every time.

      (Proper) Debug prints go to STDERR or its equivalent. In the case of most of the (web-based) code I work with they get dumped into the server log where I can review them, but users never know they’re happening. If I need to instrument something in production because it’s misbehaving that’s how I’ll do it. A hard break that requires me to attach a debugger and continue every request manually as code passes the breakpoints quickly becomes obvious to my users (and their displeasure obvious to my boss :-/)
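
      For the curious, the pattern voretaq7 describes takes only a few lines in, say, Python – a sketch of ours rather than anything from the thread, with arbitrary names:

      import logging
      import sys

      # Debug chatter goes to STDERR (or the server's error log), never to
      # the output users see; raise the level to INFO in production to
      # silence it without touching any call sites.
      logging.basicConfig(stream=sys.stderr, level=logging.DEBUG)
      log = logging.getLogger("app")

      def handle_request(params):
          log.debug("params=%r", params)  # invisible to users, visible in the log
          return "OK"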

    • Tomis says:

      @voretaq7 – debugging with print statements is a perfectly valid way to debug code. Having said that, it’s generally a retarded way to debug.

      If you are not limited in what your development tools are, then you have no excuse to avoid using a 21st-century IDE (such as Visual Studio, for instance). If you are using a good IDE and you are still debugging with print statements, then either something is wrong with your IDE or something is wrong with you, because placing a breakpoint and hitting F5 is certainly easier than writing print("blablabla").

      Also, debugging in gdb (console mode) is just as retarded. I don’t want to get into the dirty details, suffice to say I’ve done all types of debugging for long enough to know that there’s no substitute for a good IDE.

      Steering your car with your feet is a perfectly valid way to drive; however, there are probably better ways.

    • Di-0xide says:

      9 – Or* playing (working) on your own.

      10 – Agreed, to a degree. Keep in mind exceptions cannot (directly) output values at runtime, and many IDEs do not support watching in-scope variables during a break, so print-esque functions would be required. However, there is nothing more annoying than System.out.println("Some sort of logging message here"); littered all over code.

      The moral to print statements: use them only when necessary. Don’t write more print statements than there is code.

      17 – Not the case at all. I agree with #1 moreover than #17.

  • user1267362 says:

    I’ve been playing around with Python at my day job for less than a year (hoping to get into development). I found that pi one pretty easy to whip up in about 10 minutes (with the help of some coffee).

    import math

    def pi_finder(n):
        """ calculate pi to n decimal places """
        denominator = 3.0
        toggle = 1
        count = 1
        accumulator = 0
        while (count <= n):
            accumulator = accumulator + (toggle * (1/denominator))
            denominator += 2
            toggle = toggle * -1
            count = count + 1
            print("in loop accumulator =>", accumulator,
                  "toggle => ", toggle,
                  "count => ", count)
        print("accumulator: ", accumulator)
        pi = 4.0*(1-accumulator)
        return pi

    It takes extraordinarily big values of n to get 5 accurate decimals of precision, though. Am I doing it wrong?

    • user1267362 says:

      Apparently formatting doesn’t hold, so I pasted it here:

      http://pastebin.com/FGqJX7HT

    • Manbeardo says:

      And there you have encountered the biggest problem with that question. Unless you prove how and why that approximation works, the only way to know for certain that you’ve estimated to 5 digits is to compare it against a known value of Pi. That sort of defeats the point.

    • Zach Denton says:

      How about:

      print 4 * sum((-1)**(i) * (1.0 / (2*i + 1)) for i in range(500000))

    • Zach Denton says:

      Or if you want to emphasize the math:

      from itertools import *
      print 4 * sum((-1)**i * (1.0 / (2*i + 1)) for i in takewhile(lambda i: i == 0 or (2*i)**(-1) > 0.000001, count(0)))

    • Deadsy says:

      That particular algorithm is simple but not efficient for calculating pi. The comment in your code is incorrect: it calculates n terms of that series, not n decimal places. From the algorithm you can see that the error at any term is around 1/n, so to get to 5 decimal places you need to be down to twiddling the 6th decimal place, i.e. 1/1000000, or n = 1000000. In general your code looks like newbie code 🙂 i.e. correct, but a ponderous and overly verbose way of doing something that is quite simple. E.g.:

      def calc_pi(n):
          pi = 0.0
          for i in xrange(1, n, 4):
              pi += 4.0/float(i)
              pi -= 4.0/(float(i)+2.0)
          return pi

    • Tracker1 says:

      Personally, I think that exercise is probably best replaced by a constant. You can use recursion to calculate it fairly well, but Math.PI is as accurate as you will get for the most part. From there: Func<double, double> area = r => Math.PI * Math.Pow(r, 2);

    • Francesco says:

      The series given above is known to converge very slowly.

      In order to verify that the solution is correct up to 5 decimal digits you can:

      • take the difference between two consecutive approximations and check if it is < 0.1e-5
      • use a theoretical error bound like the one given by the Leibniz theorem (http://en.wikipedia.org/wiki/Alternating_series); the latter says that the error you make by truncating the series is, in the worst case, as big as the first rejected term (in absolute value).

    • Peter Jones says:

      # finds pi to n decimal places
      def pi_finder(n):
          denom = 1
          estimate = 0.0
          # while target accuracy not yet achieved...
          while(4.0/abs(denom) > 0.5 / (10**n)):
              # update estimate and...
              estimate += 4.0/denom
              # increase denominator by 2 and invert.
              denom = (denom + (2*abs(denom)/denom)) * -1
          return round(estimate, n)

    • Tom says:

      I don’t understand all the purported answers to this. What is wrong with:

      double GetPi() {
          return 3.14159;
      }
      

      ?

    • nickels says:

      @Tom hahahahh That’s the best answer by far. I hate idiotic little programming brain teasers. They demonstrate only how insecure and eager to prove himself a person is, nothing more.

    • Ubuntaur says:

      Here’s my Python solution. It’s not the shortest, but I believe it’s quite readable (although it doesn’t check the approximation against the known value for pi).

      http://pastie.org/4617982

    • rkulla says:

      The PI coding opinion doesn’t say what he’d do if someone technically “coded” it but did it in a really ugly, brute-forced, stupid (like hard-coding the answer) and extremely slow way, such as the following PHP. Would you still hire after this monstrosity?

       
      function calcPI() {
        $s='';
        for ($i=3; $i<1000000; $i += 2)  {
            $s .=  "1/$i " . (($i % 4 != 1) ? '+' : '-');
            $answer = eval("return 4 * (1 - $s 0);");
            if (substr($answer, 0, 7) == '3.14159') {
              echo "$answer\n";
              break; // wow kind enough to break even though this probably took 5 days to get this far. What an efficient programmer!
            }
         }
      }
      

    • tz says:

      The return 3.14159 might be the best, but what if you need to do other constants from other infinite series? I can’t think of a way to do this with a set of C macros so it wouldn’t take any CPU at run time (and test if the compiler breaks), but that would be the goal.

      The idea of the question is that you have to think – if you are sampling some data, how do you ensure the error is less than a given number?

      My quick-elimination question (before the interview) is/was to ask for a simple sort routine. No one got the correct answer, which is a question – “What do you want sorted?” – since the best algorithm depends on the input. I got number sorting, but what if I had asked for strings?

    • Dale King says:

      One thing that all of these solutions overlook is that you are using a computer with limits on its precision. To get the most accuracy when doing the approximation, you need to do the computation the other way around: start with the smallest values (the largest denominators) and work back towards 1. In mathematics (i.e. with infinite precision) it makes no difference, but it does make a difference on real computers that have limited precision.

      The hard part about doing it that way is knowing if that last term is a + or -. An easy way around that is to just choose one of the two and take the absolute value at the end to reverse it if you got it wrong.
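
      Dale’s technique is easy to demonstrate. Here is a sketch of ours (not his – the term count is arbitrary) that sums the same Leibniz terms in both orders; with doubles the difference is tiny, and spelling the sign as (-1)**k sidesteps the trailing-sign problem he mentions:

      # Same series, two summation orders: smallest magnitudes first
      # accumulates less rounding error.
      terms = [(-1)**k * 4.0 / (2*k + 1) for k in range(200000)]
      print(sum(terms))            # largest magnitudes first
      print(sum(reversed(terms)))  # smallest magnitudes first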

    • ww says:

      I’m basically insecure so I also wanted to provide a solution:

      import operator
      def mypi(places=0.000001):
          lrh, rh = 1.0, 0.0
          n = 1
          funs = [operator.add, operator.sub]
          while abs(rh - lrh) > places:
              lrh = rh
              rh = funs[n & 2 == 2](rh, 1/float(n))
              n+=2
          return 4 * rh

    • Turkey says:

      C for the win

      #include <math.h>   /* fabs */

      double mypi(double precision){
          int n = 1;
          double lrh, rh = 0;
          do{
              lrh = rh;
              rh += (((-2 * (n & 2)) >> 1) + 1) * 1/(float)n;
              n += 2;
          }while (fabs(rh - lrh) > precision);
          return 4 * rh;
      }
      

  • gotofritz says:

    I like unit testing and TDD – it’s the best documentation you can write. This article is just linkbait, IMHO.

    • Steve314 says:

      I like unit tests a lot myself, and sometimes write them before the code, but very far from always.

      One thing is this article, which questions a common so-called proof that TDD is effective. And if TDD really is that great, why do the advocates need to misrepresent it?

      But specifically on unit tests as documentation – I find my unit tests are too unreadable to use that way. For example, if I have data to describe a digraph which a function is going to act on, that’s a couple of lists of vertex and edge descriptions – not a diagram. If the code being tested is an Is_Deterministic method, I need to analyse that description to be sure of whether the digraph it represents is deterministic or, if not, why not. Comments, yes, but as another point here states, comments are often out of date. Besides, the comment isn’t the test – it’s additional documentation.

      On the point made here about not knowing the edge cases, I find that relevant too, but far from an absolute. Testing to get code coverage is one thing (which I sometimes do, but sometimes don’t – it depends how confident I am in the code). But depending on how you implement the functionality, some important decisions can get obscured, so code coverage isn’t the same as requirement coverage. Testing for requirements is more important than testing for code coverage. That being the case, knowing the edge cases in the implementation (rather than the requirements) can even be a distraction.

      I think the most important thing, though, is that there is no magic bullet – you budget your time to get the job done as best you can, and don’t obsess about unit tests or any other one thing.

      For example, I recently had a bug that took me a couple of days to trace. It turned out that the complex code that I had already unit tested (but still suspected) was fine, and the other complex code that I added unit tests for was also fine. I started suspecting memory corruption, though I hadn’t had a case of that in many years – but again, that wasn’t it.

      The problem was in a single-line function that consisted of a single method call. Because it was so simple, it was so obviously correct that I was blind to the possibility it was wrong. But I was calling the wrong method.

      The difference between the two methods – the one I was calling and the one I should have been calling – was quite subtle, but in this case extremely important. The names were different to make clear the difference, but you still have to be thinking about that subtlety to know you’ve picked the wrong one.

      Even if that function had had a unit test, I would obviously have tested for the wrong call, for the same reason that I coded the wrong call. And of course the top-level calling function didn’t test for all the cases, because it was relying on that lowest-level function to handle those cases, and that was unit tested. Actually, both were – the one I was wrongly calling and the one that I wrongly wasn’t calling.

      What’s particularly annoying about this is that it isn’t the first time, by a long way. I’m one of the people who strongly argue that having lots of small functions (moving complexity into the call graph) doesn’t eliminate that complexity, yet this kind of thing happens quite regularly for me. So what’s it like for the people who have even more, even smaller functions?

      There’s a well-known quote (CAR Hoare) that “There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies.”. Well, my experience is that the first way doesn’t exist – a certain level of complexity is inherent in the requirements. A single-line function containing a single function call can be “so complicated that there are no obvious deficiencies”, because the complexity of the call graph is relevant complexity too.

    • Steve314, perhaps the do/it BDD style would help in that situation. It’s worked for me. So you’d have a test that tests how the two classes work together, described by the requirement it is solving. Mock anything that makes it run slow and you’re good.

  • biffsocko says:

    10 –

    boolean DEBUG = true;

    if (DEBUG) { System.out.println("DEBUG: some statement"); }

    This way you don’t actually have to remove the print statements. You can even make it a command line switch to turn them off/on

    • John says:

      Just a note: be wary of turning this into:

      void debug(String foo) { if (this.DEBUG) { System.out.println(foo); } }

      If you have to, void debug(Object foo) { …

      I have been burned by costly code in .toString() (XML serialization) being run when passing in an XML object. You only want to compute the string representation of the object if you’re sure you’re going to be in DEBUG mode.

  • Heckman says:

    If you have access to compile time constants, it’s better to use those so your prints don’t even get compiled. Some people are just so convinced that the whole world is going to decompile their code the minute they release it.

    #define DEBUG

    #ifdef DEBUG
    print("Debug Stuff");
    #endif

    • AntoineG says:

      I think most compilers will not compile an unreachable statement. At least, this is true in Java. Thus,

      static final boolean DEBUG = false;
      if (DEBUG) { System.out.println("blablabla"); }

      will not even appear in the generated bytecode. So you don’t need to use a preprocessor or anything like that.
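
      Python has a directly analogous trick, for what it’s worth (our aside, not AntoineG’s): CPython treats __debug__ as a compile-time constant, so the guarded block below is stripped from the bytecode entirely under python -O.

      # `python script.py`    -> prints the diagnostic
      # `python -O script.py` -> the block is not even compiled in
      if __debug__:
          print("expensive diagnostic output")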

  • J. B. Rainsberger says:

    “…good developers will keep cohesion low…”. Uh… surely you either mean “cohesion high” or “coupling low”. The rest of that comment is absolutely laughable. It’s not even an opinion, but pure nonsense. “Good programmers will do the right thing.” Yup. They sure will. How exactly did they become good programmers?

  • winjer says:

    These are uncontroversial opinions. I imagine pretty much everyone in our office would agree with most or all of them.

  • someoneOverHere says:

    RE: point 4 about comments. Yes, poorly written comments are painful to read, beaten only by NO comments. You can make the code as readable as you like, but if it’s of a decent length then it NEEDS comments. End of.

    • Mike Adler says:

      @someoneOverHere: I think the point is that while logically cohesive blocks (which can range from a single line to a single class, I suppose) or at least the interesting parts of your code should be commented, most of what you write is instantly clear from the code anyway, so the comments actually impede readability.

    • AndrewF says:

      I strongly disagree that any comments are better than no comments. (At least, I think that’s what you’re saying.) A bad or misleading comment can confuse or delay understanding of the code.

      For me, the requirement of a comment is a hint that the code is not crystal clear. (Not including API documentation comments of course, which you need, because typically the readers don’t actually have the source code.)

  • Re: #18: it took quite a bit more thought than I expected! When I tried to work it out, I wasn’t getting numbers that looked much like pi even after 300+ iterations of +/-(1/(denominator+=2)).

    See https://en.wikipedia.org/wiki/Pi, under “Rate of convergence”:

    “It converges quite slowly, though – after 500,000 terms, it produces only five correct decimal digits of π.”

    Well that’s good to know! 🙂

  • Urist says:

    RE: Point 10.

    I use Visual Studio, and you can right-click on a breakpoint and turn it into a “When Hit, Print this” that logs to the output window. It is usually my second or third step in a long line of techniques for debugging a particularly long-running or intricate problem.

    But I agree: sometimes you just have to add manual logging.

  • carlos says:

    I honestly don’t think that writing a complex math function makes you a good programmer. Who cares about math functions? They are all already done in the language you use. I remember doing that back in college, but I’ve never needed anything like it on any web project.

    • Matthew says:

      I think the point is not that it’s relevant, but that doing it is a basic litmus test of coding ability. You should be able to generate the solution in a few minutes.

      Spolsky and Atwood have complained in the past that many applicants have little actual programming ability, and can’t pass even a simple FizzBuzz test, so something like this is a perfect first hurdle to demonstrate that you haven’t bluffed your way into the interview.

    • Ben says:

      I agree, Carlos. It’s a ridiculous question to ask a person who writes code for a living rather than solving math problems. A better question would be to ask the interviewee to model the problem accurately. The only valid reason to ask a domain-specific question (like a math question) is if it’s directly involved in the field you’re applying for. Example: if you’re writing encryption algorithms or audio codecs, you should expect to see math questions during the interview. However, if you are given math questions during an interview for a web developer job, your interviewer might be stroking his/her ego and is way off base. I have interviewed countless programmers and would have missed many talented, driven candidates if I had asked such a silly math question.

  • snake5 says:

    Controversial? That’s funny ’cause I agree with every one of them. And there are few opinions in the world I could say that about.

    Well, if you need a truly controversial opinion, here’s one from me:

    Private/protected access modifiers and similar features (exceptions, fully automatic garbage collection) decrease code quality by letting programmers forget their code and their mistakes.

    Let’s tackle this one by one:

    • Private/protected – when was the last time it really prevented someone else from doing a bad thing? When did it tell the other developer how the interface works? From my experience, never. That’s your job – communicate, explain your interfaces if necessary. You’ll gain a lot of good feedback about your design that way too. Get closer to your code instead of keeping a growing distance from it.

    • Exceptions – they’re simply a mess; a leap of faith that is supposed to handle problems somewhere else in the code. You don’t know where, and you probably don’t care either. I wouldn’t either – exceptions make it hard to care about what handles them.

    • Fully automatic garbage collection is one of those things that cleans up circular references for you (as opposed to a simple reference-counting system). I could say that you need to learn manual memory management, but I guess that’d be too controversial and not that useful. Circular references appear extremely rarely, if ever, in good code. But because of that tiny chance, they must be handled, and that happens at a rather big cost. Even a transition to manual garbage collection (you’d have to call a function to do it) would be a big step forward, since you’d care and know where such dangerous constructs can appear.
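
    For concreteness, the kind of cycle snake5 means looks like this in Python (our illustration; CPython pairs reference counting with a cycle collector precisely for this case):

    import gc

    a = {}
    b = {"partner": a}
    a["partner"] = b   # a and b now reference each other
    del a, b           # reference counts never reach zero on their own...
    print(gc.collect() > 0)  # ...so the cycle collector reclaims them: True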

  • Zoidberg says:

    I find number 1 to be completely wrong. To stipulate that you must spend free time on spare projects to keep your coding skills honed is just completely false. I think a well-adjusted employee with plenty of other interests is a far more productive one, capable of a broader range of thoughts and ideas. It also makes it easier for non-programmers/developers to talk to and relate to these people. I think that it is very closed-minded to think that development is all about coding. It is much, much more, and I feel that spending your free time doing something OTHER than what you do for 40 hours (or more) per week is what is needed to make someone an all-around better professional.

    That being said, I couldn’t agree more with number 17. You don’t live to work, you work to live.

    • Owain Jones says:

      You replied to that as though you didn’t actually read number 17 before commenting. I spend a little bit of my spare time writing code for projects I like; does that make me less well-rounded? Ignoring the time I spend playing music or going for long walks or getting drunk and arguing philosophy or… uh… skydiving?

      Usually I don’t feel like working on my own projects after a long day at work; you’re right, you need to find a good work/life balance – but your comment comes across sounding like “Yo brah, you should never code in your free time, that’s what NERDS DO”.

    • katiec says:

      I think coding in your free time demonstrates your commitment to the craft, and if nothing else it helps indicate how well you keep up with current technologies. It’s about whether or not you start projects at home that help expand you as a developer, not how many times you can recreate the same software. I think solving interesting puzzles, no matter the context, expands you as a person and your ability to think about things from different perspectives.

      There’s definitely a work-life balance that needs to be struck, but at the same time there are a lot of hours in the day, and personal projects don’t need to take much time. For example, I don’t like to do much when I get home from work, so I spend that time on private coding projects before heading to bed with a book. Weekends, however, I have more energy and do “normal” types of hobbies that have nothing to do with computers. I don’t think that’s all that crazy.

  • Former mathematician says:

    No wonder you didn’t get many good answers to your question about calculating Pi. You can’t answer this without a bit of math. Every comment on this subject was wrong. I hope mine isn’t.

    Someone said here that you need to know the constant Pi to check how many decimals you computed right. That’s wrong, of course. Yet it points to where the difficulty lies.

    Suppose you want 1 digit after the point, and your code computed Pi ~ 3.14. Is this precise enough? It is if your uncertainty (e) is lower than 0.04 (i.e. 4*10^-2), because then 3.10 < Pi < 3.18. So the required uncertainty depends on the digits computed. If you want 4 digits, then the computed value 3.141593 will be right only if the uncertainty is lower than 0.000007 (i.e. 7*10^-6).

    So we have to know the convergence rate of the sequence S_k = 4*Sum((-1)^k/(2*k+1)). If I’m not mistaken (too lazy to take pen and check it), |Pi - S_k| < 2/k. That’s all I need to begin coding the required function.

    Did the interviewer really expect this kind of answer?

    • Haakon Løtveit says:

      But remember that we’re talking about IEEE 754 doubles here. They are inherently inaccurate.

      So you might have to consider the inherent inaccuracy in the numerical representation as well.

      That is, testing floats for == is not a smart thing, for instance.

    • Erik says:

      No, he hoped you would note that the current error is smaller than the absolute value of the next term.
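
      That bound is what makes the question answerable without knowing Pi in advance. A minimal Python sketch of the stopping rule Erik describes (ours, not from the thread):

      def leibniz_pi(places=5):
          # For an alternating series with shrinking terms, the truncation
          # error is at most the magnitude of the first omitted term.
          tolerance = 0.5 * 10**-places
          total, k = 0.0, 0
          while 4.0 / (2*k + 1) > tolerance:
              total += (-1)**k * 4.0 / (2*k + 1)
              k += 1
          return total

      print(round(leibniz_pi(), 5))  # 3.14159, after roughly 400,000 terms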

  • John says:

    Comments:

    It’s interesting that 4 (no comments) is right below 3 (use your brain).

    Basically, comments should describe WHY the code is written this way, not what it does. If the code “needs” to be written in a non-obvious way because of a bug, or an edge case that wasn’t obvious until some spectacular crash (or, worse, a middling intermittent fault), then put that in the code! Not doing this means people will be reluctant to refactor; you’re basically creating little land mines for the refactoring engineer.

    • Greg says:

      Refactoring Engineer! Where do you get one of these? Or do you just mean the guy who works with your code next and decides to re-arrange it?

    • Yannis Rizos says:

      @Neal: The question was migrated to ProgSE, closed and deleted (it was also deleted on SO). I collected the top 20 answers, and then Shog went and undeleted the question on SO, and historically locked it. I took the blog post out of our publication queue, but then we talked about it, and we decided it was ok to post it after all.

      Check the blog’s chat room for more…

  • Charlie says:

    Re: #18

    In Haskell, you can just write:

    4 * (foldl1 subtract $ map (1.0 /) [1,3..300001])

    Or, more neatly:

    foldl1 subtract $ map (4.0 /) [1,3..300001]

    300001 is some reasonably large number.

    • uh huh says:

      Gee, that is so… intuitive.

    • jaredc says:

      This is incorrect: the partial sums with even length are negative. It also doesn’t stop at five decimal places.

      foo = let xs = map (4/) $ zipWith (*) (cycle [1,-1]) [1,3..] in Data.List.find (\x -> 3.14159 <= x && x < 3.1416) $ scanl (+) 0 xs

    • woelling says:

      There is a little problem with your LEFT fold – try 300002 terms instead of 300001. I would prefer:

      4 * (foldr1 (-) $ map (1.0 /) [1,3..1000000])

  • jcubic says:

    I agree with all of them except 17 – it’s just not true. For great programmers it’s more than just a job; they will do this for free. That’s why open source is so successful.

    And those are not controversial (at least not to me), just true wisdom from the best.

    PS: why is there no voting on these?

  • 1 – I usually agree with this. Not because you CAN’T master something without doing it in your spare time, but because most people WON’T get good at something unless they really love it. I really love code, and I write a lot of it in my spare time, and I consider myself pretty good at it, but I wouldn’t be any good if I didn’t like it at all and just saw it as a job.

    I guess it is possible to get really good at something though, even if you don’t like it. Some things just come naturally to some people, but I wouldn’t want to work with someone who didn’t enjoy it.

  • Ryan Ternier says:

    Many would argue #1. Sure, learning your craft and practicing it in your spare time will increase your skill. However, that’s just working one side of the brain. Playing piano (or other instruments), doing art, physical exercise, and many other things would also increase one’s brain capacity.

    • David J says:

      I think you inadvertently answered the whole #1 vs #17 conversation

    • rkulla says:

      I think too many programmers take the word “code” to mean writing “secret symbols”. I’d much rather write “code” that doesn’t even look like code, but instead reads like English. Something that someone can skim and understand quickly because all they see are assertions and equations.

      When I was a noob, I used to love writing obfuscated concise code that looked like it was encrypted. The more mature and professional I became, the less this became true. And now I write most of my programs for people, not computers. It’s a challenge in and of itself.

      Anyway, more to your point, I think that learning to improve your English writing skills and your math skills is even more important than writing more code in your spare time. Also, getting enough sleep and exercise is more important than cranking out yet another side project. That said, I always allocate at least some spare time for pure coding – especially katas. Oh yes, and reading blogs like this is very helpful.

  • @Manbeardo: You’re completely right. Anyhow, this is my take on the question using itertools: http://pastebin.com/Keb6c0zX

  • Alex says:

    You really need a vote button next to each opinion. It would be really interesting to see how many votes each gets.

  • Sean says:

    Programming since 1983, now coding in Java and Ruby. I can confirm: All of the above is true.

  • Kostyantyn Kovalskyy says:

    Two lines of Haskell code:

    let b = zip [3.0,5.0..1000000] [1..500000]

    4.0 * (1 - sum [if even x then -1.0/z else 1.0/z | (z,x) <- b])

    3.141594653585744

    (more precision than needed ;))

    It took me about 30 lines of C++, and the limitation on precision was much greater!

  • I’m going to take issue with 4 and 19.

    | Most comments in code are in fact a pernicious form of code duplication.

    I understand the argument behind this: mismatched code and comments can lead to confusion when revisited. However, I believe this “duplication” provides benefits beyond the obvious value of clarifying code to later developers – as long as the originating developer takes care to keep his comments in sync with what his code actually does. When setting out to write a new method or class, I’ll often start by writing the documentation. This forces me to think in concrete terms about what I want the method or class to do, what inputs it should accept and what it should return, and overall how this new code will fit in with the rest of my application. Oftentimes, I encounter and solve unexpected issues while writing my comments, saving hours of trial and error during integration testing.

    I’m trying to get into the habit of returning, once I’ve finished my actual code, to each of the comments I’ve written, asking myself “Are the assumptions this comment makes still valid?” If they are, and the code doesn’t match those assumptions, I’ve forgotten something. If the assumptions are no longer valid, I take five minutes and rewrite the comment. So yes, comments which do not reflect their respective code blocks are bad, but that doesn’t mean that all systems of commenting are bad. It just means we need to take our comments more seriously.

    | Design patterns are hurting good design more than they’re helping it

    I strongly disagree. I believe that the true value of design patterns lies in the fact that they are essentially mental exercises for programmers which encourage us to think about our code in solution-specific ways. In my experience, I’ve encountered much more unmanageable, ugly code caused by rigid adherence to more “straightforward” patterns such as procedural code (or worse, procedural code sinisterly disguised as OO), even in the face of a problem that would be so elegantly solved by an observer or visitor solution. Even if the eventual successful solution ends up being mostly procedural in design, if the programmer is simply aware of these alternative patterns it will positively influence his minute design choices, perhaps affecting which chunks of code he wraps in functions and how they interact with each other, for instance.

    Rather than an indoctrination into various schools of rigid thought that are dogmatic in their application, exposure to alternative design patterns represents a psychotropic experience which opens our minds to the ways in which we can write our code. Maybe we’ll never try the singleton pattern again, but even simply being aware of that possibility, its benefits and its drawbacks, improves how we approach problems and investigate solutions, which invariably makes us better programmers.

  • Pulu says:

    Re: point 1.

    I stopped coding in my spare time and learned to play the banjo. These days, I usually play music for maybe an hour or so, spend time with my family and get a good night’s sleep.

    I may not be such a good programmer any more, but at least I don’t feel like my heart attack is just around the corner 🙂

  • scorciatoia says:

    Re #17: (whatever it is you do for a living) is just a job

    Then why didn’t you become a lawyer instead?

  • blah says:

    I find it amusing when a programming website fails to be readable in a popular browser like Opera.

  • Christoph says:

    One opinion I hold that is rejected by most of my fellow programmers is:

    Programming languages that have uninitialized variables are better than programming languages where all variables always have defined values. At least in this respect.

    Often variables have to be declared before they are first used and before one has a meaningful assignment for them (for example, class members that are initialized in constructors). It is better to have them uninitialized than to give them a default value like 0.

    Initializing the values can hide errors in the program logic and cause more harm than help.

    • Rob G says:

      How can having a potentially random value in a variable be an improvement? I can see efficiency benefits, where that’s a concern, but leaving a variable with a potentially invalid value seems dangerous.

      Consider a pointer: if it’s initialised to null, it clearly isn’t a valid address. Otherwise, a garbage value could look valid at least 1 in 4 times (assuming 32-bit aligned values).

      Most languages allow variables to be declared at point of first use nowadays. Where that cannot be applied, initialise as close as possible beforehand. For example:

      int value;
      if (x > 0)
          value = 1;
      else
          value = 0;

      That is acceptable; if the variable is further away than that, it’s probably safer to initialise it to a default.

    • Christoph says:

      Rob, accessing uninitialized values is a bug that is easily found by valgrind. There is no need to be afraid of them.

      For some variables all possible values are meaningful. For example, a NULL pointer might mean that one is not interested in a specific calculation.

      Consider the following situation that happens quite often: You have a pointer ptr that should be set to NULL or to the address of an object in a function. Due to a logical bug in your code the function is not called.

      If you then have code that acts upon the result

      if (ptr != NULL) …

      then it will be very hard to find the bug. If, on the other hand, you do not initialize the pointer at all, a crash or valgrind will tell you that you accessed uninitialized memory.

      It is always better to have bugs that crash than bugs that silently change program behaviour.

      I tried to give an example in my second answer to: https://plus.google.com/u/0/104591613207462212978/posts/8XyT9pcsh95
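
      Christoph’s preference translates even into dynamic languages. A contrived Python sketch (ours, not his) of “crash loudly rather than default silently”:

      class Lookup:
          def __init__(self):
              # Deliberately no `self.result = None` default: if a logic bug
              # means compute() never ran, reading .result raises
              # AttributeError at the access site instead of silently acting
              # as though the answer were 'nothing found'.
              pass

          def compute(self):
              self.result = 42

      job = Lookup()
      job.compute()      # comment this out to simulate the forgotten call...
      print(job.result)  # ...and this line crashes loudly instead of lying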

  • Debiprasad Ghosh says:

    8.”If you only know one language, no matter how well you know it, you’re not a great programmer.”

    Languages are classified as 1) imperative, 2) object-oriented, 3) functional, etc. It is better to learn at least one language from each class.

  • CAS says:

    Surely you just write a loop that adds one term at a time and continues until the first 6 decimal places no longer change (6 to avoid rounding errors). At each step, compare the revised estimate with the previous estimate (to obtain the number of most significant digits that match). I’m pretty sure I could do that in a few minutes in Java, C++ or C#, but then I’m just an unemployed amateur programmer, so what do I know? The trouble is, candidates find it hard to think when they are nervous. The hardest part is reading the scrawled, inaudible and illegible challenges one has to copy to post on this blog!

  • On #5: Reportedly, Einstein never memorized his own phone number. When asked why, he said “Why should I? It’s in the phone book.”

  • TS says:
    • Writing good code is just a matter of proper thinking, but writing proper log messages is an infinite learning process

    • If it looks complicated (not to be mistaken for ‘complex’), it is usually done wrong

    • Writing specifications is theory; writing documentation tells you whether it works well in practice

  • ChadF says:

    @Tom – That was my first thought too, given the requirement as stated, and not as “write a function that calculates Pi to an accuracy of N decimal places, using N = 5 as a test case”. True, there is not much calculation, but the CPU (at compile time or run time) does some work creating that number.

  • jmhl says:

    Re #18: using that formula to compute pi is, to a mathematician, a sin at least equivalent to solving a sorting problem with bubble sort, or worse. If I were interviewing someone with Calculus 101 on their resume and they suggested that method to compute pi, I’d reject them immediately.

  • JKirchartz says:

    18: 1st question: too much math – speaking of which, Math.PI usually does the trick. 2nd question: if you can’t write a function to perform pi*r^2, there might be a problem.

    I once had an interview where they asked me to convert between decimal, octal, and hex in my head. I just don’t think that’s a necessary skill; I can write code to do that for me.

  • Bob says:

    I think I disagree with #4, because his comment on comments was not very enlightening. If he is talking about comments on every line of code, then I agree – not necessary, and it also hurts maintainability.

    However, a “block comment” at the start of a subroutine/function describing its purpose, inputs, outputs and/or any other relevant factors should be included. As long as the code is NOT convoluted and is easy to follow, there should be no need to comment every line in the subroutine/function (an exception may be some line that has non-obvious implications, but that should be a rarity). A one- or two-line function that is obvious in what it does need not have a “block comment” (again, as long as there are no surprises).

    In other words, a judicious use of “block comments” will help in understanding what a program should be doing during its execution. As long as the intent of each subroutine/function remains the same, there should be no need to “maintain” the comments should a fix be required.

    Obviously, the program itself should have an “intro block comment”.

    • rkulla says:

      In practice, the only truly critical comments are the ones for obscure bug fixes. For example, if you’re writing some CSS statement that forces a specific browser to render something properly because said browser didn’t adhere to a standard and it’s the only way to make it work, then there should definitely be a comment explaining why you did this since it will be totally unobvious to most everyone else who sees it.

      But most comments other than those types are indeed useless, and become so out of date or misleading that they’re not to be trusted and do nothing but clutter up the code. And if your code needs that many comments to explain what it’s doing, then it’s probably terribly written code.

  • Windoze says:

    For #18: actually, you cannot write a “function” in C#; there is no such thing in that language, you can only write a “method”. That’s why C# and Java can never be “functional” – they can, however, be something like “delegational”?

  • S-K' says:

    18: What a complete idiot. Converting mathematical equations into a programming language is still MATH. Unless the job involves hardcore math, ASK LOGIC QUESTIONS.

    • rkulla says:

      Uh, math questions are totally relevant to programming. Ever done ANY web development? Try writing a modern web app that doesn’t require understanding ratios, such as aspect ratios for resizing the browser and keeping images in proportion, and percentages for progress bars, etc.

      You never know what you’ll be tasked with, so being able to turn things into code is the programmer’s JOB. If your boss tells you that you need to write a free-transform tool to rotate images, you bet your butt you’re going to need to learn some trig to pull off the task. Or if you’re working at all with HTML5 canvas or with SVG, you’re going to need to understand some matrix math and certainly basic geometry.

      Further, how will I have confidence that you can write efficient code if you don’t care at all about math? It means you won’t find the most efficient and elegant algorithms, and will instead write SLOW and verbose brute forced garbage.

      As with not learning to unit test, the whole “I don’t need to know math” is yet another common excuse that lazy and poor programmers give. It’s time to get real.

  • rkulla says:

    Number 2 is absolutely incorrect. Writing tests first does help you write better code. People always claim that they’re writing decoupled and modular code, and then when you go to put tests around it you see just how untestable that code really is. This is especially true when it comes to writing mocks.

    Most programmers think that object-oriented programming means they’re simply using “classes”, yet they often have methods that are doing far more than one thing, or doing too many extra things to get their one job done. Unit tests keep you focused on the minimum required to get the task done. Plus, you end up with a nice spec of what the program is expected to do, which is much more honest than what untested code says, because untested code often lies.

    I understand that it’s not realistic for most people to write most of their code as tests first, but they should at least try to with APIs and other critical pieces, especially backend code. The only people I’ve seen who diss TDD are the ones who never learned it. It’s just like how people like to diss programming languages they never learned to use.

    • rkulla says:

      Number 2 also totally contradicts number 11, because without unit tests new programmers will be far more afraid to touch the inherited code: the only way they’ll know if things broke is after they have committed and QA (or worse, their users) find all the bugs it caused. Then the new programmer has to rush to fix the bugs, which naturally causes even more bugs, and so on – until all their “new tasks” consist of is fixing bugs, not being productive.

      As discussed in the book "The Clean Coder", readable code is indeed important, but it's not more important than tests. You could give me the ugliest codebase in the world and, as long as it has unit tests, I won't be afraid to clean it up and otherwise modify it. Conversely, the cleanest code in the world will still be sketchy to touch if it doesn't have any tests.
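      To make the test-first idea concrete, here is a minimal sketch built around an invented `slugify` function, using a Jest-style test API (the exact framework doesn't matter):

          // Tests written first: they pin down behavior and edge cases
          // before the implementation exists.
          import { test, expect } from '@jest/globals';
          import { slugify } from './slugify';

          test('collapses whitespace and lowercases', () => {
            expect(slugify('Hello   World')).toBe('hello-world');
          });

          test('handles the empty string', () => {
            expect(slugify('')).toBe('');
          });

          // slugify.ts, written after (and shaped by) the tests above:
          export function slugify(input: string): string {
            return input.trim().toLowerCase().split(/\s+/).filter(Boolean).join('-');
          }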

  • Peter Turner says:

    I had a pretty controversial post on programmers.SE; the gist of it was "Relational databases will be the next thing considered harmful". I think I garnered a -17 from it – but a net +100 rep from pity.

    2 years later, truer than ever.

  • rkulla says:

    I’d love to see Number 6 “Not all programmers are created equal” become a spin-off blog article about “20 controversial managerial opinions”, preferably written by programmers for managers.

    One item on that list could be about the myth that programmers are in such high demand that managers should never fire anyone, no matter how bad they are.

    And I don't just mean skill-wise, but also attitude-wise. All of my peers, as well as myself, have witnessed a programmer whose attitude is so poor, rude, distracting, and otherwise negative that it hurts everyone else on the team. Yet this type of person never gets fired, and rarely even gets talked to about it. I'm sorry, but programmers aren't that irreplaceable. Yes, it can be hard to go through the interview process and find someone talented, but it's worth it compared to making all the rest of your programmers miserable. I've seen good programmers quit because they don't like being around that type of person, and they will usually blame management for not doing anything about it.

  • Chuck Jolley says:

    The first five times you talk to a user about business rules, treat everything they say as a lie.

  • Somebody says:

    Setter functions enable you to easily watch values change (and identify the culprit) when something is getting trashed, especially without a sophisticated debugger.

    I doubt very many programmers would recognize a code segment that computes the cross product of a set of vectors; that is why there are comments. One problem, illustrated in the Pi answers above, is that many programmers like to cram as much into one statement as possible, not realizing that compilers have to store intermediate values whether the programmer gave those values names or not (byte-code languages possibly excepted).
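    As a minimal sketch of that watch-the-setter idea (TypeScript, with invented names):

        class Account {
          private _balance = 0;

          get balance(): number {
            return this._balance;
          }

          // Every write funnels through here, so a single log line or
          // breakpoint catches whoever is trashing the value, no
          // sophisticated debugger required.
          set balance(value: number) {
            if (value < 0) {
              console.trace(`balance set to ${value}; suspicious caller above`);
            }
            this._balance = value;
          }
        }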

  • BoobJob says:

    What a bunch of losers.

  • James says:

    10 – Completely disagree. Prints are only really useful if your app is single-threaded. If you have a problem such as a race condition, the very act of printing can reschedule threads so that the problem is hidden or not reproducible. They should only be used as a heuristic to narrow down the problem.

  • Arran says:

    The first point is the only one I disagree with.

    Life is too short. If you like making projects outside of work, great, but this is not a good measure of how good a programmer you are.

    How good a programmer you are should be based on other things besides how much time you spend on your own side projects.

  • Olivier says:

    I agree with everything except #17. As most people have said, #1 and #17 are opposites. Even when I'm retired I'll continue to code; it's part of my life.

  • tz says:

    I'm amazed that this list contains most of my pet peeves. People come out of some college program having learned bad habits: liturgy instead of theology. Object-oriented paradigms are great for SOME things. Qt is one example; IOKit from Apple is another (SCSI applies to USB and FireWire as well as to the disk and CD/DVD-ROM writer, so superclassing and inheritance make for a really elegant solution).

    Instead, everything is fragmented into a deep stack of one-line methods: instead of saying A=B, it becomes setValueOfA(getValueOfB()), and you have to open half a dozen header and cpp files and do a lot of searching to realize that all it does is A=B. Setters and accessors can serve a purpose, e.g. is the A/D value stale? Will setting this cause some kind of functional overflow? But then, if you throw an exception, do you really write all the test code to validate all that exception handling?

    Design patterns are another abused tool. Sometimes code is a 500-line singleton, a series of non-looped, related sequential steps with overlapping state, and there is no reason to split it into separate routines (where you would have to pass context and state information across). The topology of the code either corresponds to the topology of the problem and design or it doesn't. When it doesn't, it isn't clear no matter what the book says, but the temptation is to force it into your pet methodology or technique even when the opposite is called for. Worse is the belief that the methodology substitutes for documentation.

    For my part, I always at least try to document those 500 lines in detail, taking several pages to do it (about one page per 50 lines). But when someone turns them into 3000 lines across many routines and files, they never write similarly detailed documentation, which would now have to cover not only the function of the 500 actual lines but also the dozens of interfaces and the state and context data (wherever they live now, since they have to cross those boundaries), and would run to hundreds of pages at the same level of detail. Keeping the code in one place means there is no interface, and so no interface specification, test fixture, or detailed interface doc to write.

    Even worse is the wonderful technology. Everyone uses (often implicit) malloc/free and wants to create threads. I will spend hours if not days removing every possible dynamic entity that makes the code run non-repeatably if not actually chaotically (things that fail once a month but that you can never get to repeat). The thread that isn't there can't collide or deadlock. The malloc that isn't there can't cause a memory leak.

    Zlib 1.1.x (and probably the latest) uses malloc/free, but the allocations are predictable and happen in a LIFO fashion, so you can just statically allocate a buffer and move a pointer (see the sketch after this comment).

    Simplifying an apparently complex design and writing simple, straightforward code is the art of programming, and it is a much higher art than being able to write complex code. That is the only question that matters: will this paradigm, method, refactoring suggestion, etc., simplify the code? Can you reach what the "Programmer's Stone" calls the "Quality Plateau"? Or are you going through the technique, method, or whatever for the sake of the activity itself and not because it will objectively make the code simpler?

    The buzzwords merely allow bad or mediocre programmers to write bad code that limps through the tests, and we end up with crashes, security holes, or bloatware. But you can't fool physics: on a mobile device with finite battery capacity, using 1000 cycles instead of 10 will show up. Green technology? Rewrite the stupid bloatware!
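    As a minimal sketch of that static-buffer idea (in TypeScript for consistency with the other examples here; the real zlib trick would be done in C through its zalloc/zfree hooks):

        // A fixed-size bump allocator: every "allocation" comes out of one
        // preallocated buffer, and frees must happen in LIFO order, so
        // nothing can leak and nothing can fragment.
        class BumpArena {
          private readonly buffer: ArrayBuffer;
          private top = 0;

          constructor(size: number) {
            this.buffer = new ArrayBuffer(size);
          }

          alloc(bytes: number): Uint8Array {
            if (this.top + bytes > this.buffer.byteLength) {
              throw new Error('arena exhausted');
            }
            const view = new Uint8Array(this.buffer, this.top, bytes);
            this.top += bytes;
            return view;
          }

          // LIFO free: just move the pointer back.
          free(bytes: number): void {
            this.top -= bytes;
          }
        }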

  • memnon666 says:

    Nice ideas/arguments, thx 🙂

  • nomen says:

    I think in some cases unit testing is the most important part of software development. Testing code first and writing it after is bullshit. But think! You create an entity that should work in a million circumstances. How do you know it's working without testing it? Do you manually test all the possibilities every time you modify something in the code? How do you know it still works for all the cases? It's bullshit to write tests for getters/setters or some little functions. But to be confident that your code will work in any case, you need automated tests; you can't test everything by hand each time. And every time you find a bug, you can add a test for it, and then you know all the previous cases still work.

    • nomen says:

      I often see programmers writing complicated functions that work for only one case: whatever is needed in that case, manually tested, and the result is OK. That's fine if we never want to reuse the function, or if we always call it with the same parameters. But later we need a second case, and a third case, and soon we have hundreds of functions with many cases. Buggy, unstable software is born this way.
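      nomen's add-a-test-for-every-bug point might look like this minimal sketch (Jest-style API; the `parsePrice` function and the bug are hypothetical):

          import { test, expect } from '@jest/globals';
          import { parsePrice } from './parsePrice';

          // Regression test added after a real bug: the function worked for
          // '19.99' (the one case it was written against) but broke on input
          // with a thousands separator.
          test('parsePrice accepts thousands separators', () => {
            expect(parsePrice('1,299.99')).toBeCloseTo(1299.99);
          });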

  • Anonymous says:

    If "controversial" means there is no reasonable consensus on an opinion, the most controversial programming opinions are the ones near the middle of the voting (to either side). If it means most people disagreeing, the most controversial are counted from the bottom up. My two cents: http://programming-motherfucker.com !

  • Dhanasekar says:

    Almost all the opinions are acceptable, but the opinion about design patterns isn't quite right: applying a pattern where it is unnecessary is called over-designing, or simply bad design. There is nothing the design pattern itself can do about that.

    But 5/5 to all other opinions 🙂

  • Diego says:

    I could add that "Developer and Designer" is an oxymoron. Unless one is a real genius, one cannot be an excellent developer and an excellent designer. The two require almost opposite mindsets, and a skill set so large that it would take too long to master both. One could be good at one and below average at the other (or below average at both), but declaring oneself an expert in both is just ridiculous.

  • “Most comments in code are in fact a pernicious form of code duplication.”

    Disagree — this is not a controversial opinion, just a symptom of bad comments.

    Good comments explain who wrote this code, why they wrote it (what is the business reason for a programmer spending time writing it), their chosen approach (and why this approach was chosen), and their rejected approaches (and why those approaches were rejected). None of these are "code duplication".
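    What that kind of comment looks like in practice, as a small TypeScript sketch (the author, system, and details are invented for illustration):

        // Added by J. Doe, 2012-08. Invoice IDs from the legacy AS/400 export
        // arrive with trailing spaces, which silently broke matching in billing.
        // Chosen approach: normalize on ingest, so every consumer sees clean IDs.
        // Rejected: trimming at each call site (too easy to miss one) and fixing
        // the export itself (owned by another team, long lead time).
        export function normalizeInvoiceId(raw: string): string {
          return raw.trim().toUpperCase();
        }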

  • Nate says:

    The pi question is solved quite easily in J:

     pi=.verb def '4*-/%>:+:i.>:y'
       pi 5
     3.14159

    If you tried this in a job interview, though, your interviewer would probably think you’re just BSing.
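    For readers who don't speak J: that one-liner appears to sum the Leibniz series 4*(1 - 1/3 + 1/5 - 1/7 + ...). A rough TypeScript equivalent, assuming that reading of the J code:

        // Approximate pi with the Leibniz series: 4 * (1 - 1/3 + 1/5 - ...).
        // The series converges slowly, so five-decimal accuracy needs many terms.
        function pi(terms: number): number {
          let sum = 0;
          for (let i = 0; i < terms; i++) {
            sum += (i % 2 === 0 ? 1 : -1) / (2 * i + 1);
          }
          return 4 * sum;
        }

        console.log(pi(1_000_000)); // ≈ 3.14159...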

  • ajswssa says:

    These are uncontroversial opinions. I imagine pretty much everyone in our office would agree with most or all of them.

