Wednesday, August 22, 2007

SecureString in C#



Represents text that should be kept confidential. The text is encrypted for privacy when being used, and deleted from computer memory when no longer needed. This class cannot be inherited.

Storing sensitive data such as passwords in a standard System.String is a potential threat to that data for the following reasons:

>> It is stored on the managed heap and is not pinned in memory, so the garbage collector can move it around at will, leaving several copies behind. The code will not know that this has happened, and even if it could figure out that the string was moved, there is no way to clear out the other copies. Instead we have to wait for the CLR to allocate another object over the sensitive data so that the memory gets overwritten.

>> It's not encrypted, so anyone who can read the process's memory will be able to see the value of the string easily. Also, if the process gets swapped out to disk, the unencrypted contents of the string will be written to the swap file.

>> It's immutable, so whenever it is "modified", both the old version and the new version end up in memory.

>> Since it's immutable, there's no effective way to clear it out when you're done using it.
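The immutability problem is easy to demonstrate. "Modifying" a System.String actually creates a new instance, leaving the old contents in memory until the garbage collector reclaims (but does not erase) them. A minimal sketch:

```csharp
// Demonstrates that System.String is immutable: "modifying" it
// creates a new object while the old value stays behind on the heap.
using System;

class StringImmutabilityDemo
{
    static void Main()
    {
        string password = "s3cret";
        string masked = password.Replace('e', '*');

        // The original string is unchanged; both values now live in memory.
        Console.WriteLine(password); // s3cret
        Console.WriteLine(masked);   // s3cr*t
    }
}
```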

Hence, .NET 2.0 introduced a new class in the System.Security namespace called SecureString, which can be used in place of standard strings to store sensitive values.

Using SecureString eliminates the above-mentioned issues:

>> A SecureString's character data is held in unmanaged memory rather than on the managed heap, so the garbage collector never relocates it and it is not replicated to multiple locations in memory.

>> SecureStrings are stored in an encrypted form and only need to be decrypted while they are actually used; this window of decryption can be kept as small as possible. So even if the process is swapped out to disk while the string is encrypted, the plaintext will not end up in the swap file.

>> The keys used to encrypt the string are tied to the user, logon session, and process. This means that any minidumps taken of the process will contain secure strings which are not decryptable.

>> SecureStrings are securely zeroed out when they're disposed of, while System.Strings are immutable and cannot be cleared when you've finished with the sensitive data.

To create a SecureString, you append one character at a time:

System.Security.SecureString secString = new System.Security.SecureString();
secString.AppendChar('p'); // repeat for each character of the secret

When the string contains the data you want, you can make it immutable and uncopyable by calling the MakeReadOnly method:

secString.MakeReadOnly();
To read the secure value, use the SecureStringToBSTR() method, and zero and free the unmanaged copy with ZeroFreeBSTR() as soon as you are done with it:

IntPtr ptr = System.Runtime.InteropServices.Marshal.SecureStringToBSTR(secString);
string sDecrypString = System.Runtime.InteropServices.Marshal.PtrToStringUni(ptr);
System.Runtime.InteropServices.Marshal.ZeroFreeBSTR(ptr); // zeroes and frees the BSTR

The garbage collector will collect a SecureString when it is no longer referenced, but you can (and should) dispose of it deterministically by calling the Dispose() method:

secString.Dispose();


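Putting the pieces together, here is a minimal end-to-end sketch. The literal "p@ss" is for illustration only; in real code you would append characters from a secure source such as Console.ReadKey rather than from a string constant:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Security;

class SecureStringDemo
{
    static void Main()
    {
        // Build the SecureString one character at a time.
        using (SecureString secString = new SecureString())
        {
            foreach (char c in "p@ss")   // illustration only; avoid literals in real code
                secString.AppendChar(c);

            secString.MakeReadOnly();    // no further modification allowed

            // Decrypt only for the shortest possible window.
            IntPtr ptr = Marshal.SecureStringToBSTR(secString);
            try
            {
                string plain = Marshal.PtrToStringUni(ptr);
                Console.WriteLine(plain.Length);
            }
            finally
            {
                Marshal.ZeroFreeBSTR(ptr);  // zero and free the unmanaged copy
            }
        }   // Dispose() zeroes the encrypted buffer here
    }
}
```

The try/finally around the BSTR is the important part: it guarantees the plaintext copy in unmanaged memory is zeroed even if an exception is thrown while the value is in use.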
Thursday, August 16, 2007

An ATOM feed ticker (scrolling one)

During a short break from work, I took a look at my blog, found it quite ugly, and decided to beautify it and, in the process, learn something new.

The best part of it was creating an ATOM feed reader for my blog. I finally succeeded in creating one using the idea from
Dynamic Drive.

I was able to create an ATOM feed scroller that shows all the posts on the blog, pausing at each post with a link to the original post on my blog.

A sample can be seen on
THIS SITE where I have hosted it (this is a trial and hence will only be available to me until September 12, 2007) as well as at the top of this blog.

In the meantime I will be looking to modify it so that it requires only client-side code, with no server-side coding involved.

Currently it uses an .aspx page to display the posts, as there is a bit of server-side code involved. I will try to eliminate that ASAP.

Once done, I will make this a portable widget that can display any ATOM feed, given its URL.


Friday, August 10, 2007

SelectSingleNode not selecting the node.

Recently I was working on creating an ATOM feed reader. I obtained the JS from Dynamic Drive and coded the control to take a URL and return the posts from it.

It required reading XML and playing around with the nodes. Strangely enough, although it looked easy to do, I had a hard time getting hold of the node I needed to display things.
Below is the format that an ATOM XML feed uses:

<?xml version='1.0'?>
<feed xmlns='http://www.w3.org/2005/Atom'>
  <title type='text'>Ashutosh Vyas's ...</title>
  <generator version='7.00'>...</generator>
  <entry>
    <title type='text'>Asynchronous Page Concept in ASP.NET</title>
    <content type='html'>...</content>
    <link rel='replies' href='...'/>
    <link rel='self' href='...'/>
  </entry>
</feed>

Now all I needed was to find out the root node and traverse to the Node "feed/title" to find out the title of the blog to display on the top of the scroller.

To my knowledge, it was as easy as

XmlNode titleNode = rssDoc.SelectSingleNode("feed/title");

But that did not happen to be the case. It always returned null.
I tried grabbing the root node (feed) using

XmlNode feedNode = rssDoc.SelectSingleNode("feed");

but this would again return the same null.
Strangely, reading rssDoc.DocumentElement would most certainly return the required feed node.
After a bit of help from MSDN and other groups, I discovered something I did not know until now, and I suspect many people don't, for lack of use:
You require an XmlNamespaceManager to get those nodes out.
Because the feed element declares a default namespace, every node in the document belongs to that namespace, and namespace-less XPath queries like "feed/title" match nothing. So to dig the title out of the feed shown above, we need the following code.

XmlNode feedNode = rssDoc.DocumentElement;

XmlNamespaceManager nsMgr = new XmlNamespaceManager(rssDoc.NameTable);
nsMgr.AddNamespace("prefix", "http://www.w3.org/2005/Atom");

String feedTitle = feedNode.SelectSingleNode("prefix:title", nsMgr).InnerText;
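A complete, self-contained sketch of the problem and the fix, using a tiny in-memory Atom document standing in for a downloaded feed:

```csharp
using System;
using System.Xml;

class AtomTitleReader
{
    static void Main()
    {
        // A tiny in-memory Atom document standing in for a real feed.
        string atom =
            "<feed xmlns='http://www.w3.org/2005/Atom'>" +
            "<title type='text'>My Blog</title>" +
            "</feed>";

        XmlDocument rssDoc = new XmlDocument();
        rssDoc.LoadXml(atom);

        // Without a namespace manager, the query matches nothing:
        Console.WriteLine(rssDoc.SelectSingleNode("feed/title") == null);

        // Map a prefix to the feed's default namespace and query with it.
        XmlNamespaceManager nsMgr = new XmlNamespaceManager(rssDoc.NameTable);
        nsMgr.AddNamespace("atom", "http://www.w3.org/2005/Atom");

        XmlNode title = rssDoc.DocumentElement.SelectSingleNode("atom:title", nsMgr);
        Console.WriteLine(title.InnerText);
    }
}
```

Note that the prefix you register ("atom" here, "prefix" above) is arbitrary; what matters is that it is mapped to the exact namespace URI declared in the document.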

-- Ashutosh

Thursday, August 2, 2007

Asynchronous Page Concept in ASP.NET

Server Unavailable.

This is an error most of us have faced without a clue as to what causes it. Here's the reason:

ASP.NET uses threads from a common language runtime (CLR) thread pool to process requests. As long as there are threads available in the thread pool, ASP.NET has no trouble dispatching incoming requests. But once the thread pool becomes saturated, i.e. all the threads inside it are busy processing requests and no free threads remain, new requests have to wait for threads to become free. If the logjam becomes severe enough and the queue fills to capacity, ASP.NET throws this error stating that Server is Unavailable.

So what's the solution? Well, the easiest way is to increase the maximum size of the thread pool, allowing more threads to be created. That's the course developers often take when repeated "Server Unavailable" errors are reported. Another common course of action is adding more servers to the Web farm. But increasing the thread count (or the server count) doesn't solve the issue; it just provides temporary relief.

One solution to this implemented in ASP.NET 2.0 is the use of ASYNCHRONOUS PAGES.

When ASP.NET receives a request for a page, it grabs a thread from a thread pool and assigns that request to the thread. A normal, or synchronous, page holds onto the thread for the duration of the request, preventing the thread from being used to process other requests. If a synchronous request becomes I/O bound—for example, if it calls out to a remote Web service or queries a remote database and waits for the call to come back—then the thread assigned to the request is stuck doing nothing until the call returns. That impedes scalability because the thread pool has a finite number of threads available. If all request-processing threads are blocked waiting for I/O operations to complete, additional requests get queued up waiting for threads to be free. At best, throughput decreases because requests wait longer to be processed. At worst, the queue fills up and ASP.NET fails subsequent requests with 503 "Server Unavailable" errors.

Asynchronous pages offer a neat solution to the problems caused by I/O-bound requests. Page processing begins on a thread-pool thread, but that thread is returned to the thread pool once an asynchronous I/O operation begins in response to a signal from ASP.NET. When the operation completes, ASP.NET grabs another thread from the thread pool and finishes processing the request. Scalability increases because thread-pool threads are used more efficiently. Threads that would otherwise be stuck waiting for I/O to complete can now be used to service other requests. The direct beneficiaries are requests that don't perform lengthy I/O operations and can therefore get in and out of the pipeline quickly. Long waits to get into the pipeline have a disproportionately negative impact on the performance of such requests.

The concept of asynchronous pages is built in only as of ASP.NET 2.0, but it can be implemented in ASP.NET 1.x as well, in the way outlined in the link below.

The trick here is to implement IHttpAsyncHandler in a page's codebehind class, prompting ASP.NET to process requests not by calling the page's IHttpHandler.ProcessRequest method, but by calling IHttpAsyncHandler.BeginProcessRequest instead.

ASP.NET 2.0 vastly simplifies the way you build asynchronous pages. You begin by including an Async="true" attribute in the page's @ Page directive, like so:

<%@ Page Async="true" ... %>

Setting this attribute to true tells ASP.NET that the page should implement IHttpAsyncHandler. You then register the Begin and End methods of your asynchronous operation with Page.AddOnPreRenderCompleteAsync:

// Register async methods
AddOnPreRenderCompleteAsync(
    new BeginEventHandler(BeginAsyncOperation),
    new EndEventHandler(EndAsyncOperation));

With this in place, the page starts its normal life cycle and proceeds up to the end of the OnPreRender event invocation. At this point ASP.NET calls the Begin method that we registered earlier and the operation begins (calling the database, etc.); meanwhile, the thread that had been assigned to the request goes back to the thread pool. The Begin method returns an IAsyncResult to ASP.NET, which lets it determine when the operation has completed; at that point a new thread is pulled from the thread pool and the End method (the one we registered earlier, remember?) is called to finish the request.
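A minimal code-behind sketch of the pattern. This is compile-only illustration (it needs the ASP.NET runtime to actually run), and DoSlowWork is a hypothetical stand-in for the real I/O-bound call such as a remote web service or database query:

```csharp
using System;
using System.Web.UI;   // requires the ASP.NET runtime; compile-only sketch

public partial class AsyncDemo : Page
{
    private delegate string SlowWork();
    private SlowWork slowWork;

    protected void Page_Load(object sender, EventArgs e)
    {
        slowWork = new SlowWork(DoSlowWork);

        // Register the Begin/End pair; requires Async="true" in the @ Page directive.
        AddOnPreRenderCompleteAsync(
            new BeginEventHandler(BeginAsyncOperation),
            new EndEventHandler(EndAsyncOperation));
    }

    // Hypothetical stand-in for the real I/O-bound operation.
    private string DoSlowWork()
    {
        return "done";
    }

    private IAsyncResult BeginAsyncOperation(object sender, EventArgs e,
                                             AsyncCallback cb, object state)
    {
        // Kick off the operation; the request thread returns to the pool.
        return slowWork.BeginInvoke(cb, state);
    }

    private void EndAsyncOperation(IAsyncResult ar)
    {
        // A (possibly different) pool thread completes the request here.
        string result = slowWork.EndInvoke(ar);
    }
}
```

The delegate's BeginInvoke/EndInvoke pair is used here purely for illustration; in practice the Begin/End methods would wrap whatever asynchronous API your I/O source exposes.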

Jeff Prosise explains it all in

-- Ashutosh