Memory management is handled by the CLR, but it is still very important. I personally dispose of all my objects manually (via using blocks) in my C# code to minimize garbage collection pressure at the CLR level...
Btw, what you say above is right, but for the wrong reasons. You should be using 'using' blocks, and you should be ensuring that objects with Dispose methods have Dispose (aka Close, etc.) called as soon as possible within your code path.
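To illustrate the point about 'using' blocks: the compiler turns a using block into a try/finally, so Dispose runs even if an exception is thrown. A minimal self-contained sketch (using MemoryStream just as a convenient IDisposable):

```csharp
using System;
using System.IO;

class Program
{
    static void Main()
    {
        // The using block compiles down to try/finally with Dispose()
        // in the finally, so the stream is released even on exceptions.
        using (var stream = new MemoryStream())
        {
            stream.WriteByte(42);
            Console.WriteLine(stream.Length); // prints 1
        }
        // stream.Dispose() has already run by this point.
    }
}
```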
Every MS API (not almost every) that I know of calls Dispose() from within Close(). In fact, MS asks you to call Dispose from within Close. As long as you keep track of this, calling Dispose not only closes/frees the resource but also puts less burden on the GC. Here is an example:
http://msdn2.microsoft.com/en-...lconnection.close.aspx. Look for the line that states "Close() is equivalent to Dispose()." By calling Dispose directly, you're skipping the call to Close in most cases. Of course this is not a major thing, but it does *minutely* speed things up in an enterprise application. If third-party APIs don't follow this convention, it is not MS's fault.
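A sketch of the convention being described, with a hypothetical ManagedConnection class (not a BCL type) standing in for SqlConnection: Close() simply delegates to Dispose(), so both paths release the resource exactly once.

```csharp
using System;

// Hypothetical type illustrating the "Close() is equivalent to Dispose()"
// convention; real classes like SqlConnection follow the same shape.
class ManagedConnection : IDisposable
{
    public bool IsOpen { get; private set; } = true;

    public void Close() => Dispose();   // Close just delegates to Dispose

    public void Dispose()
    {
        if (IsOpen)
        {
            IsOpen = false;             // release the underlying resource
            Console.WriteLine("resource released");
        }
    }
}

class Program
{
    static void Main()
    {
        var conn = new ManagedConnection();
        conn.Close();    // releases the resource
        conn.Dispose();  // no-op: already released, so nothing is printed
    }
}
```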
That is completely wrong. GC.SuppressFinalize tells the garbage collector not to call the finalizer when it DOES garbage collect the object. The garbage collector still needs to release the memory used by the object; it just knows not to call Finalize first (since you've told it you've released all other object references and it doesn't need to bother with the call). The code/CPU time is the same: either you're going to finalize in your Dispose, or the garbage collector is going to call Finalize when it reclaims the memory. But where you got the notion that it tells the CLR not to garbage collect the object, I have no idea.
I did state it incorrectly. I meant it tells the GC not to finalize the object. Again, my point stands: the managed environment doesn't know anything about how to clean up unmanaged resources. *You*, by calling Dispose, are freeing memory and resources. Also, your statement "either you're going to finalize in your Dispose, or the garbage collector is going to call Finalize when it reclaims the memory" is erroneous. You actually call your Dispose method from within the finalizer (not the other way round, as you stated). And you do this because the Framework doesn't guarantee that Dispose will be called... but it does call the finalizers. Look here:
http://msdn.microsoft.com/msdn...es/07/07/CLRInsideOut/
Here is an excerpt from the article: "Specifically, the garbage collector will never call your Dispose method for you. The only cleanup method that the garbage collector will call is the object's finalizer; if code is not written to explicitly call the Dispose method, then the method will never be called."
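This is exactly what the standard dispose pattern encodes: the finalizer calls into the same cleanup routine as Dispose, acting as a safety net for callers who forget to dispose. A sketch (UnmanagedHolder is a made-up example class, not from the article):

```csharp
using System;

class UnmanagedHolder : IDisposable
{
    private bool _disposed;

    // Safety net: runs only if the caller never called Dispose().
    // Note the direction: the finalizer calls Dispose, not vice versa.
    ~UnmanagedHolder()
    {
        Dispose(false);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this); // cleanup done; finalizer not needed
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        if (disposing)
        {
            // dispose other managed IDisposable fields here
        }
        // release unmanaged resources (handles, native memory) here
        _disposed = true;
    }
}

class Program
{
    static void Main()
    {
        using (var holder = new UnmanagedHolder())
        {
            // use the resource
        }
        Console.WriteLine("disposed deterministically");
    }
}
```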
Internally, SuppressFinalize is a linked list of objects that have already been finalized; for those objects, the GC knows it can free their memory or rebuild contiguous memory regions depending on its needs. Either way, the address space the object occupied still needs to be garbage collected.
Again, it's a wash, since you spent those cycles in Dispose. The only difference from the system's view is that you now control when unmanaged items are released (which is a good thing).
I am thinking this was a typo, but I don't know what you mean when you say "SuppressFinalize is a linked list of objects that have already been finalized." SuppressFinalize is a method that takes a single object parameter, and that object is removed from the set of objects maintained by the GC that require finalization. Even Jeffrey Richter's CLR via C# asserts the same thing: calling SuppressFinalize() (assuming you implement both a finalizer and the two Dispose() methods) puts less burden on the GC. I should have stated in my previous post that, as far as the process is concerned, it puts less burden on memory... somehow I digressed into discussing CPU cycles. *But* even in terms of CPU cycles, you're helping the GC by not forcing it to resurrect the full object graph for finalization. Look here,
http://msdn.microsoft.com/msdnmag/issues/1100/GCI/, under the topic "Forcing an Object to Clean Up." Here is an excerpt from the article: "When you call SuppressFinalize, it turns on a bit flag associated with the object. When this flag is on, the runtime knows not to move this object's pointer to the freachable queue, preventing the object's Finalize method from being called." Also, refer to the example Jeffrey provides to demonstrate this. Admittedly, this is a dated article, and here is a more recent explanation:
http://msdn2.microsoft.com/en-...m.object.finalize.aspx
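The effect described in that excerpt can be observed directly: after SuppressFinalize, the object's finalizer is skipped at collection time, while an undisposed sibling's finalizer still runs. A small demo (Finalizable is a hypothetical class; the allocation happens in a helper method so both objects are unreachable by the time the GC runs):

```csharp
using System;

class Finalizable
{
    private readonly string _name;
    public Finalizable(string name) { _name = name; }

    ~Finalizable() => Console.WriteLine($"finalized: {_name}");

    public void Dispose()
    {
        // Cleanup was done eagerly, so tell the GC to skip the
        // freachable queue for this object: Finalize never runs.
        GC.SuppressFinalize(this);
    }
}

class Program
{
    static void Create()
    {
        var a = new Finalizable("disposed");
        a.Dispose();                          // finalizer suppressed
        var b = new Finalizable("collected"); // finalizer will run
    }

    static void Main()
    {
        Create(); // both objects are unreachable after this returns
        GC.Collect();
        GC.WaitForPendingFinalizers();
        // Only the undisposed object's finalizer message appears.
    }
}
```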
I argued against your initial post because it simply asserted (too naively) that you, as a developer, don't need to worry about memory management in the managed world. This sounded too easy, and it also becomes ammunition for .NET-bashers... to them, life is very simple in the .NET world.
I'll take your word that you were on the external design review committee. I am just curious as to which one: .NET 1.0, .NET 1.1, .NET 2.0, or .NET 3.5? Almost every iteration of the Framework undergoes immense changes, and it is hard for me to accept assertions that may date back to .NET 1.0 days (no offense). The same applies to my assertions throughout this post, but I am always learning something new, and I'll openly accept my mistakes. I may have typos (I get excited by .NET discussions), but I'll try to back up my assertions with reputable articles/books in most cases.