Pros and Cons of VB, C#, and "concept encapsulation"

mskeel

Senior Contributor
Joined
Oct 30, 2003
Messages
913
Prompted by the differences between how Visual Studio 2005 handles designer-generated code in Visual Basic and C# in this post, a conversation began about what is hidden in VB versus C# versus IL versus assembly, followed by a debate over whether it is a good thing that so much information is hidden from VB developers. This thread is a continuation of that conversation and debate.
******

Marble_eater, technically, at least in the C++/assembly world, everything reduces down to do-while loops. Loop performance is going to be affected by a number of factors, including but not limited to function calls within a loop (as you said), but also paging/memory use and how you access your data. As I understand it, the .Net compiler does a pretty good job of optimizing loops through techniques such as loop unrolling and inlining some methods, but as your experiment shows, there is still a significant hit in the overhead of declaring and using an iterator. Is this really a deal breaker? I guess the answer is that it depends. All I have to say is: Moore's law, how often do you really deal with 100,000,000 entries in an ArrayList, and I would hope a person in a situation needing that much speed would know a thing or two about programming and how a computer works.
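Marble_eater's original experiment isn't reproduced in the thread, but the general shape of such a measurement can be sketched. Here is a hypothetical Java stand-in (Java being a JIT-compiled managed language close in spirit to the .Net languages under discussion); the class and method names are mine, and actual timings will vary by runtime and JIT.

```java
import java.util.ArrayList;
import java.util.List;

public class LoopOverhead {
    // Sum with an explicit index: no iterator object is involved.
    static long sumIndexed(List<Integer> list) {
        long total = 0;
        for (int i = 0; i < list.size(); i++) {
            total += list.get(i);
        }
        return total;
    }

    // Sum with the enhanced for loop: the compiler creates an Iterator
    // behind the scenes, which is the kind of hidden overhead being debated.
    static long sumIterated(List<Integer> list) {
        long total = 0;
        for (int value : list) {
            total += value;
        }
        return total;
    }

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            list.add(i);
        }

        long t0 = System.nanoTime();
        long a = sumIndexed(list);
        long t1 = System.nanoTime();
        long b = sumIterated(list);
        long t2 = System.nanoTime();

        System.out.println("indexed:  " + (t1 - t0) / 1_000_000.0 + " ms");
        System.out.println("iterator: " + (t2 - t1) / 1_000_000.0 + " ms");
        System.out.println("sums equal: " + (a == b));
    }
}
```

Both loops compute the same sum; any difference in the printed times is the per-element cost of the iterator machinery (and, in practice, whatever the JIT does or doesn't optimize away).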

For the initial target audience of VB, I think it is 100% unnecessary to know this information, and it would just get in the way of the purpose of VB -- to make it easy to quickly crank out great software. Delegates and event binding get in the way. Designer code gets in the way. Lack of foreach gets in the way. For that matter, pointers and memory management get in the way.

Just remember that every tool in the tool box has a purpose (otherwise I hope it wouldn't be in your toolbox!), but every craftsman always has their favorites.
 
I am just going to reiterate--the loop example is just that: an example. There can be other kinds of benefits from knowing what is going on behind the scenes.

Maybe that kind of attention to detail is what separates a good programmer from a diligent programmer. Either that, or a normal person from a person who has OCD. I like to have a more thorough understanding of what is going on when I write a program (or do anything, for that matter). A richer understanding of the details can help you see things in a different light and show you more possibilities.
 
marble_eater said:
Maybe that kind of attention to detail is what separates a good programmer from a diligent programmer.
My opinion is that it separates the mediocre programmers from the great ones. Which is why, even though VB seems somewhat limited and not as powerful, if you know what you are doing you can still make some really amazing programs. And in far less time and with far less stress than working with C++, considered by many to be the most powerful language of them all.

marble_eater said:
Either that or a normal person from a person who has OCD.
:D That too!
 
As a highly versed x86 assembly language programmer...

The problem with "hidden" information is that often that information is useful knowledge for making (or avoiding!) algorithmic changes that would alter genuine real world performance.

As most decent programmers know, the big optimisations are algorithmic changes rather than code reordering/inlining/whatever tweaks.

Is that list iterator on that class in the class library thrashing the L1 cache or not? (Basically, how local is the auxiliary data needed for iteration management.. can't know.. it's hidden from me.. a derived class might also behave differently.)

That's important info that could easily help me avoid trying multiple strategies (i.e., "one item at a time, many operations" vs. "one operation at a time, many items" vs. "neither is acceptable, must get back to rethinking the overall plan").


Of course.. none of this means anything unless performance is an issue.. but I think that, more often than not, most GOOD programmers are slamming their heads against the performance wall frequently (because that sort of experience is PRECISELY what made them good to begin with!)
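The cache-locality point above can be made concrete with a small sketch. This is an illustrative Java example (standing in for the managed languages under discussion, with names of my choosing): the two methods below compute exactly the same sum, but traversal order alone changes how well each fetched cache line is used, which is precisely the kind of "hidden" detail being argued over.

```java
public class CacheOrder {
    // Row-major traversal: consecutive accesses touch adjacent memory,
    // so each cache line that gets fetched is fully used before moving on.
    static long sumRowMajor(int[][] m) {
        long total = 0;
        for (int r = 0; r < m.length; r++)
            for (int c = 0; c < m[r].length; c++)
                total += m[r][c];
        return total;
    }

    // Column-major traversal: each access jumps a whole row ahead,
    // touching a different cache line every time on large matrices.
    static long sumColMajor(int[][] m) {
        long total = 0;
        for (int c = 0; c < m[0].length; c++)
            for (int r = 0; r < m.length; r++)
                total += m[r][c];
        return total;
    }

    public static void main(String[] args) {
        final int n = 2048;
        int[][] m = new int[n][n];
        for (int r = 0; r < n; r++)
            for (int c = 0; c < n; c++)
                m[r][c] = 1;

        long t0 = System.nanoTime();
        long a = sumRowMajor(m);
        long t1 = System.nanoTime();
        long b = sumColMajor(m);
        long t2 = System.nanoTime();

        System.out.println("row-major:    " + (t1 - t0) / 1_000_000.0 + " ms, sum=" + a);
        System.out.println("column-major: " + (t2 - t1) / 1_000_000.0 + " ms, sum=" + b);
    }
}
```

Same result either way; the column-major version is typically slower on a matrix this size, and nothing in the source code of either loop tells you that -- you have to know how the memory underneath is laid out.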
 
As a former C/C++ programmer, I can appreciate where you are coming from, but I think it's time to move on from this kind of thinking. One of the caveats we all accepted when we started using higher-level languages was that we were going to lose some control and that we'd have to trust the compiler to take care of our code in such a way that it won't hurt our program. Without that trust, we're back to hand-rolling assembly to get the "best" loop performance possible.

Does an iterator thrash the L1 cache or not? As a .Net developer, that isn't my concern. That's the concern of the person building the latest and greatest optimizing compiler for me. That's the concern of that sorry sucker still stuck hacking C code in some basement someplace -- wishing he didn't have to declare his variables at the top of his functions and longing for an iterator to make cycling through his malloc'd arrays easier to read. As a user of a modern computer, it isn't my concern either. If you are experiencing a noticeable slowdown in computations due to L1 thrashing, then you either need to get a better computer or not write your program in .Net.

Trusting that the compiler is taking care of reorganizing my elegant, human readable code into efficient, machine readable code allows me to use my brain power to tackle much more difficult and important issues -- like not introducing bugs into my system and making my system usable by people. It lets me concentrate on the algorithms I'm writing, and not the tricks I can use to make my algorithms work for a specific architecture or machine configuration.

Losing sleep over all of this stuff in the .Net world is a waste of time. If you are that concerned with performance you need to use another language.
 
mskeel said:
Without this trust, we're back to hand rolling assembly to get the "best" loop performance possible.

"Best" isn't a stated requirement.. but such issues can have a very large effect on performance (I'm not talking about small performance increases here, I'm talking about large ones... still not approaching "best")

mskeel said:
As a user of a modern computer, that also isn't my concern. If you are experiencing a noticeable slow down in computations due to L1 thrashing then you either need to get a better computer or not write your program in .Net.

You know.. I've heard this sort of "reasoning" for several decades.. it wasn't true in 1986, it wasn't true in 1996, it isn't true in 2006, and it won't be true in 2016.

If your programs have no time-critical code then good for you.

Some of us bang our head against the performance wall regardless of how fast our processor is... because we desire to bang our head against it! A faster processor simply means that we can do more, and we fully plan to do so.

mskeel said:
Losing sleep over all of this stuff in the .Net world is a waste of time. If you are that concerned with performance you need to use another language.

I've been hearing this for decades too.. it used to be said about C, then about C++, and always (incorrectly) about the various Basic and Pascal compilers, and so forth...

BTW: .Net isn't a language
 
Rockoon said:
"Best" isnt a stated requirement..
But what's best for your program is. I put "best" in quotes to highlight the tradeoff involved. In this case you could be sacrificing maintainability for performance. Is your program better or worse for it? I'd hope you always produce the best work you can at the time. Personally, I'm of the opinion that maintainability is more important than performance, so I'm willing to make certain sacrifices. I think this is just where we differ in opinion.

When I'm writing code in .Net, I'm usually writing business-style applications. When I'm writing code for embedded real-time systems, I'm writing in C++. Why? Because I know that no matter how hard I try, any .Net language I use is going to run 5 to 7 times slower than the same code compiled natively (Win32, gcc, etc.). That's just a given. So if performance is *that* important, you've already got a handicap with .Net, so what are you really trying to accomplish?

Don't get me wrong. I'm not advocating that code be written with reckless abandon. If anything, the experiment that marble_eater ran earlier showed that it is important to carefully consider the decisions you make in your code. I'm just of the opinion that it's silly to get bent out of shape trying to squeeze extra performance out of pure .Net code just for the sake of running a few milliseconds faster when, in the grand scheme, you could have leaps-and-bounds improvements by simply choosing another language. I'm a fan of concept encapsulation and making things as easy as possible for people to write code. Why bog a manager down with curly braces and semicolons, or arrows, dots, and stars, if all he wants to do is write a quickie program to help him sort his email or something? Why make anyone worry about memory management when it clearly isn't needed?

Rockoon said:
BTW: .Net isnt a language
.Net isn't a language per se, it's a platform. But seeing as how VB.Net, C#, managed C++, Python, Delphi, and many other languages can all be compiled to the same IL, which is then compiled just-in-time by the .Net CLR for execution on the .Net framework, you can make certain generalizations about the various .Net languages because they all boil down to the same intermediate language. Plus, when you're using the various classes offered by the .Net framework, it's really just a matter of syntax:
Visual Basic:
Private Sub DoSomething()
   Dim listy As New ArrayList
   For i As Integer = 0 to 50
      listy.Add(i)
   Next
End Sub

C#:
private void DoSomething()
{
   ArrayList listy = new ArrayList();
   for (int i = 0; i <= 50; i++)
   {
      listy.Add(i);
   }
}

Managed C++:
private: void DoSomething()
{
   ArrayList^ listy = gcnew ArrayList();
   for (int i = 0; i <= 50; i++)
   {
      listy->Add(i);
   }
}

Library/class support is vastly improved over previous languages such as C++ (if you've used Java, .Net should be very familiar). Because the same framework is used for VB, C#, etc., you should be able to pick up, read, and understand any of the languages despite the syntactic differences. The end result is a much easier learning curve, better collaboration (because you can write in the language you are most comfortable with and they interop for free), and more maintainable code (because you aren't constantly reinventing the wheel or relying on your personal set of home-grown tools/libraries -- every C++ programmer has their own little tool kit).

That's sort of where this conversation originated. The fact that there are differences in the syntaxes of these languages, yet they still compile to the same IL, rubs some people the wrong way. The languages are different and are meant for different people, but they all still use the same CLR, and that binds them together and makes them similar to a certain degree -- at least to the point where we can make relatively broad or general statements about writing code in .Net. That's what this whole forum is centered around. It's kind of a new concept, but it's pretty cool in my opinion.
 
mskeel said:
When I'm writing code in .Net, I'm usually writing business-style applications.
I totally agree with that. When I write programs for my company, my boss doesn't care that I can process a 300k CSV file in 2.5 seconds instead of 3. He wants the job done. Even if the file took 10 seconds, it would be okay. So why should I try to shave those milliseconds? There is no reason, unless the task is more critical.

Of course, speed is important. My boss is really happy that we can generate PDF invoices in 3 hours instead of 2 days with the prior system. But we are talking about a huge improvement to code that was written by a really poor VB6 programmer.

I'm not interested in gaining 23 ms on a task that lasts 2 seconds. I'm not interested in gaining 1 minute on a task that lasts 3 hours.

But I am interested in cutting those times by 50%. So speed is relative. Since .NET won't let me make those kinds of gains (and neither would C++), I'm better off building a program in .NET in 1 day than building C++ software in more than a day (including debugging, verifying memory leaks, etc.).

When you talk about speed gains, you always have to ask yourself "how much time will it take me?" and "is my boss ready to pay for that?". Because most development is paid work, and companies don't care about a process running 0.001% faster (unless it's a time-critical process, of course).
 
Concept encapsulation doesn't have to mean hidden information..

Programming languages represent abstract machines.. this includes Ada, Basic, C, Delphi, Forth, Java, Lisp, Pascal, Smalltalk, etc.. they have all represented concept encapsulation without hidden information in the past.



A C++ compiler will produce code "5 to 7 times faster"???

Maybe in very specific examples with worst-case code vs. best-case code.. most probably due to *hidden information* not indicating that you were in fact jamming one of the worst cases through the language/compiler/framework that you were using..

The big micro-optimisations these days are typically:

(1) memory-cache related, or
(2) those cases where ASM offers abilities not represented by an abstract language.. operations like rotate-through-carry, which simply can't be coerced out of any high-level language that I am aware of, or
(3) cases where the compiler consistently makes a bad choice, such as GCC's habitual conversion of constant multiplications to a series of shifts and adds (small gains vs. big losses).. but this is a special case like everything else -- compiler-specific, not language-specific.

.. so no, C++ isn't going to give you 5 to 7 times more bang for your buck over the long haul.. more like 1.2 times at the most... these are all abstract languages, and if it's being done with one up-to-date compiler to good effect, it's also being done with the other up-to-date ones (less the special cases)
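The shift-and-add conversion mentioned in (3) is easy to see in source form. Here is an illustrative sketch (in Java, standing in for the languages discussed; the class and method names are mine): multiplying by the constant 10 is equivalent to the shift-and-add sequence a compiler might emit in its place. Whether the substitution wins or loses depends on the target CPU's multiplier latency, but the identity itself always holds.

```java
public class ShiftAdd {
    // A plain constant multiplication by 10...
    static int timesTenMul(int x) {
        return x * 10;
    }

    // ...and the shift-and-add sequence a compiler might emit instead:
    // x*10 == x*8 + x*2 == (x << 3) + (x << 1)
    static int timesTenShift(int x) {
        return (x << 3) + (x << 1);
    }

    public static void main(String[] args) {
        // Spot-check the identity across a range of values, negatives included.
        for (int x = -1000; x <= 1000; x++) {
            if (timesTenMul(x) != timesTenShift(x)) {
                throw new AssertionError("mismatch at " + x);
            }
        }
        System.out.println("x*10 == (x<<3) + (x<<1) held for all tested x");
    }
}
```

The point of the debate is that both functions look completely different in assembly output, and only the hidden compiler choice decides which one you actually get.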
 
C++ compiler will produce code "5 to 7 times faster" ???
This refers specifically to the hit caused by the JIT compilation of the IL, versus the already-compiled and ready-to-roll machine code from gcc/g++ or Win32 VC++. At the same time, subsequent executions of the same .Net application will be faster because of caching -- but that's only when you run the same app several times in a row. I guess I should have said something like "generally" or "for the most part" .Net will run 5 to 7 times slower than similar code compiled with g++. The bottom line, though, is that it takes time for the CLR to crank up to speed before it can even start work on the application you want to run.

I will concede that my stats might be out of date with regards to newer versions of .Net, Java, and gcc/g++. But that definitely was true at some point in the past year or two. Everything changes so quickly it's just hard to keep up sometimes. ;)


Rockoon said:
...they all have represented concept encapsulation without hidden information in the past.
And I've heard this argument made over and over again with each new iteration of higher-level languages. Let's take a step back for a moment, though. Really, when you take a language to a higher level of abstraction, you're going to be hiding choice pieces of information that will, at least in the opinion of the language's author, make the language better in some way. It seems that a person's perspective on how much or how little is hidden (and how positive or negative that is) is determined by that person's first language. This is one of the reasons I think it's a rotten idea to teach Java to students as their first language. So much is hidden from them that they don't even know it exists. I'd make the same argument for not teaching C#/VB .Net as a first language.

In order to make things easier for the programmer, you have to hide something. Otherwise concept encapsulation can't exist, and the very idea of abstraction doesn't make sense.
 