This blog is about .NET: all types of code and news about .NET, including ASP.NET, VB.NET, and C#; programming in ASP.NET, VB.NET, C#, and AJAX; tech support for .NET; and discussion of new .NET technology.


Friday, October 31, 2008

Creating a print preview page dynamically in ASP.NET (source code)





Download source files - 1 KB

Download demo project - 18.9 KB

Introduction
If you want to show a print preview before printing a page, you normally have to build a second page that looks like the one currently being displayed.

Or, if you want to print only a particular section of the page, such as a DataGrid, an HTML table, or any other region, and you also need to preview it in a separate page before printing, you again have to create a dedicated print preview page, which is even more work.

I have introduced a technique to avoid this problem. You do not need to create a separate print preview page; the JavaScript code in Script.js creates one dynamically. It takes less time to implement and so is faster. Hopefully, it will be helpful for you.

Background
I was developing a report module in an existing project. The report contents are generated dynamically from user input (filtered by status, date range, etc.), and there is a print button. My client wanted a print preview page before printing, but we had already completed the module by then, and building a separate print preview page for every report would have been really hard for my developers. I got this idea in that situation.

Using the code
Just add the Script.js file to your project. The following code has been written in that file.

The getPrint(print_area) function takes the DIV ID of the section you want to print. Then, it creates a new page object and writes the necessary HTML tags, and then adds Print and Close buttons, and finally, it writes the print_area content and the closing tag.

Call the following from your ASPX page. Here, getPrint('print_area') has been added to print the print_area DIV section. print_area is the DIV ID of the DataGrid, and the other two calls work the same way for the other DIVs. Whatever areas you want to print must be wrapped in DIV tags. Also include the Script.js file in the ASPX page.

Download the source code to get the getPrint() function.
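The source download has the real implementation; the sketch below is my own reconstruction of the idea (everything except the getPrint and PrintStyle.css names is a guess), split into a pure HTML-building helper and a browser-only wrapper:

```javascript
// Build the HTML for the print preview page from the content of a DIV.
// Pure function, so it can be exercised outside the browser.
function buildPrintHtml(title, innerHtml) {
  return '<html><head><title>' + title + '</title>' +
         '<link rel="stylesheet" href="PrintStyle.css" media="print" />' +
         '</head><body>' +
         '<input id="PRINT" type="button" value="Print" onclick="window.print()" />' +
         '<input id="CLOSE" type="button" value="Close" onclick="window.close()" />' +
         innerHtml +
         '</body></html>';
}

// Browser-only wrapper: grab the DIV by id and write the preview page
// into a freshly opened window.
function getPrint(printAreaId) {
  var content = document.getElementById(printAreaId).innerHTML;
  var preview = window.open('', 'printPreview');
  preview.document.open();
  preview.document.write(buildPrintHtml('Print Preview', content));
  preview.document.close();
}
```

Because the preview page links PrintStyle.css with media="print", the Print and Close buttons disappear on paper but stay visible on screen.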

I have used the following code in the demo project to generate a sample DataGrid:

Private Sub PopulateDataGrid()
    'Creating a sample DataTable
    Dim dt As New System.Data.DataTable("table1")
    dt.Columns.Add("UserID")
    dt.Columns.Add("UserName")
    dt.Columns.Add("Phone")

    Dim dr As Data.DataRow
    dr = dt.NewRow
    dr("UserID") = "1"
    dr("UserName") = "Ferdous"
    dr("Phone") = "+880 2 8125690"
    dt.Rows.Add(dr)

    dr = dt.NewRow
    dr("UserID") = "2"
    dr("UserName") = "Dorin"
    dr("Phone") = "+880 2 9115690"
    dt.Rows.Add(dr)

    dr = dt.NewRow
    dr("UserID") = "3"
    dr("UserName") = "Sazzad"
    dr("Phone") = "+880 2 8115690"
    dt.Rows.Add(dr)

    dr = dt.NewRow
    dr("UserID") = "4"
    dr("UserName") = "Faruk"
    dr("Phone") = "+880 2 8015690"
    dt.Rows.Add(dr)

    DataGrid1.DataSource = dt
    DataGrid1.DataBind()
End Sub


Use the following code in a separate style sheet page. See PrintStyle.css if you want to hide the Print and Close buttons during printing.

#PRINT, #CLOSE
{
    visibility: hidden;
}

Using ZIP content for delivery over HTTP (ASP.NET Source Code)

Download source code - 48.1 KB

Introduction
Static content such as HTML and text files, style sheets, and client scripts can be compressed to reduce network usage. This article shows how to use already-compressed content for transmission over the HTTP protocol.

Background
HTTP Request and Response
An HTTP session consists of pairs of requests and responses. Every request and response has a header block that contains metadata about the content. The headers can specify the content type, the encoding, and the cache parameters of the transmitted information. We are interested in the Accept-Encoding header of an HTTP request and the Content-Encoding header of an HTTP response. Most web servers do not set the Content-Encoding header, and the HTTP communication happens as shown in Figure 1. The requested content is transmitted as is.




To save network bandwidth, a web server can be configured to compress (encode) the requested content. Most web browsers and search engine bots support the DEFLATE compression encoding [1]. You may notice that your browser passes Accept-Encoding: "gzip,deflate" to the web server. We are interested in the DEFLATE encoding.

The compression comes with a price: an initial response delay, and more computation power required from the web server, which decreases the number of concurrent users the server can handle at the same time. The problem can be solved by adding a compressed content cache (see Figure 2) or by using pre-compressed data.




The second way looks more attractive: it does not require any processing power from the web server. But it requires more work upfront, such as compressing the content, which is an additional headache for web designers and content authors when they publish their content.

ZIP File Format
ZIP is one of the most widely used compression formats. A ZIP file contains multiple files compressed with various methods; the most common is DEFLATE. The file structure can be seen as two parts: the compressed data and a directory. [2] The compressed data consists of pairs of local file headers and compressed file data; the directory contains additional file attributes and references back to the local file headers.
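To make the layout concrete, here is a small sketch (in Node.js, which the article does not use; the field offsets follow the published ZIP format) that builds one local file header by hand and parses it back:

```javascript
// Minimal sketch of a ZIP local file header (PKZIP appnote layout).
// This is an illustration of the structure, not a full ZIP reader.
function parseLocalFileHeader(buf) {
  if (buf.readUInt32LE(0) !== 0x04034b50) {       // "PK\x03\x04"
    throw new Error('not a local file header');
  }
  const nameLen = buf.readUInt16LE(26);
  return {
    method: buf.readUInt16LE(8),                  // 0 = stored, 8 = DEFLATE
    compressedSize: buf.readUInt32LE(18),
    uncompressedSize: buf.readUInt32LE(22),
    fileName: buf.toString('ascii', 30, 30 + nameLen),
  };
}

// Hand-built header for a DEFLATE-compressed entry named "a.txt";
// unused fields (version, flags, timestamps, CRC) are left zero.
const header = Buffer.alloc(30 + 5);
header.writeUInt32LE(0x04034b50, 0);  // signature
header.writeUInt16LE(8, 8);           // compression method: DEFLATE
header.writeUInt32LE(11, 18);         // compressed size
header.writeUInt32LE(20, 22);         // uncompressed size
header.writeUInt16LE(5, 26);          // file name length
header.write('a.txt', 30, 'ascii');

console.log(parseLocalFileHeader(header));
// → { method: 8, compressedSize: 11, uncompressedSize: 20, fileName: 'a.txt' }
```

The directory at the end of the archive is what points a reader at each of these headers.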



We can use data that was stored with the DEFLATE or no-compression methods. The DEFLATE'd data can be sent over HTTP without additional re-encoding, since it is already compressed. (See Figure 4.) The Content-Encoding header of the HTTP response has to be set to "deflate" to tell the web browser that the content is encoded.



Using the Code
The solution has two parts: a utility library and a web application. The utility library contains a configuration section, web cache, path rewrite module, and ZIP reader classes. The cache classes and path rewrite module can be used only within the web application context.

The web application contains baseline implementations of the HTTP handler (httpzip.ashx) that lists and delivers contents of the registered zip folder. The handler accepts three query string parameters:

name – refers to the registered ZIP archive;
action – list or get;
file – path of the file in the ZIP archive.
To get the rfc1951.txt file from the archive that is registered as deflate-rfcs, use the following URL:

http://servername/path/httpzip.ashx?name=deflate-rfcs&action=get&file=rfc1951.txt
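For instance, the three parameters compose into that query string like this (a trivial sketch; servername/path is the article's own placeholder host):

```javascript
// Build the httpzip.ashx URL from the handler's three query parameters.
const url = new URL('http://servername/path/httpzip.ashx');
url.searchParams.set('name', 'deflate-rfcs');   // registered ZIP archive
url.searchParams.set('action', 'get');          // list or get
url.searchParams.set('file', 'rfc1951.txt');    // path inside the archive

console.log(url.toString());
// → http://servername/path/httpzip.ashx?name=deflate-rfcs&action=get&file=rfc1951.txt
```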

The ZIP files can be registered in the web.config file:

For convenience, the path rewrite HTTP module is included in the utility library. It can be registered in the web.config file:


To get the rfc1951.txt file from the archive with a prefix rfcs, use the following URL:

http://servername/path/rfcs/rfc1951.txt





Download source code - 48.1 KB

Thursday, October 30, 2008

Troubleshooting IIS Access Problems - IIS Tutorials

Introduction:
Spending countless hours developing a Web site only to discover that no one can access it is frustrating. This article guides you through the process of troubleshooting Web-site access problems.

Possible Causes of Failed Access
Some of the more common causes of access troubles include broken network links, incorrect firewall settings, and IIS permission problems. The general networking-type problems tend to be easy to figure out. For example, if no traffic is able to flow in or out of your network, then there's a good chance that there's a broken network link somewhere. Likewise, if inbound traffic is flowing, but no one can access your Web site, some simple port sniffing can tell you if TCP port 80 is blocked on your firewall.
Depending on the responses that I receive to this article, I may write a full-fledged article on connection troubleshooting in the future. For now though, I'll focus this article on IIS access problems that are related to permission problems.

Setting Up the Security Log:
The first step in troubleshooting IIS connection problems is to have a clear understanding of what's really going on. A big part of doing so involves reading your event logs. However, without some tweaking on your part, the event logs may not display information that is helpful to you.

Since we're talking about IIS access problems that are related to permissions, we'll be working predominantly with the Security Log. Reconfiguring the Security Log involves telling IIS which information to log, stopping the IIS services, clearing any existing Security Log entries, and finally, restarting the IIS services. In case you're wondering, the reason for stopping the IIS services is because sometimes IIS caches security log information. Unless you stop and restart the services, it's possible for cached security information to show up in the Security Log even after you've cleared the existing log contents. Obviously this cached information can be misleading since it appears to be current. Therefore, I strongly recommend stopping and restarting the IIS services as a part of the Security Log configuration process.

Begin the configuration process by selecting the Computer Management command from the Programs > Administrative Tools menu. Next, navigate through the Computer Management console tree to Services and Applications > Internet Information Services. Expand the Internet Information Services container to reveal the Web sites beneath it. Right click on the Web site that you're having trouble with and select the Properties command from the resulting context menu. When you do, you'll see the Web site's properties sheet. Now, select the properties sheet's Web Site tab and select the Enable Logging check box. When you do, you'll have a choice of various log file formats. I recommend using the W3C Extended Log File Format. Click the Properties button to reveal the Extended Logging Properties sheet.

By default, the properties sheet's General Properties tab will be selected. This tab allows you to control how often a new log file will be created. How often you build a new log file is really a matter of personal preference, so whatever you want to choose is fine. More important is the Extended Properties tab. This tab allows you to select which pieces of information will be included in your log file entries. You may select whichever elements you want, but at a minimum the log entries should include the following elements:

Date, Time, Client IP Address, User Name, Method, HTTP Status, and Win32 Status.

When you've made your selections, click OK twice to return to the main Computer Management console screen.

Now that you've configured the Web site's logging options, it's time to clear the cache and clear any existing log entries. The first step in doing so is to stop the various IIS services. To do so, open a Command Prompt window by selecting the Command Prompt command from the Programs > Accessories menu. Next, enter the following command:

NET STOP IISADMIN /Y

This single command will stop all of the IIS services. Once the services have stopped, leave the Command Prompt window open and open the Event Viewer by selecting the Event Viewer command from the Programs > Administrative Tools menu. When the Event Viewer opens, right click on the Security Log and select the Clear All Events command from the resulting context menu. Now that you've cleared the cache and the Security Log, it's time to restart IIS. Return to the Command Prompt window and enter the following commands:

NET START W3SVC
NET START MSFTPSVC
NET START NNTPSVC
NET START SMTPSVC

Keep in mind that not all of these commands will apply to all servers. For example, if you aren't running the FTP service, then you can ignore the command that deals with FTP.

Checking the Security Log
Now that you've configured the Security Log, it's time to begin creating some log entries. To do so, try to access the Web site that's having problems. I recommend attempting to access the Web site from both inside and outside of the organization, and from a variety of computers, if possible. Doing so should give you some very useful log entries that you can compare against each other to determine the true nature of the problem. For example, you may discover that the Web site works correctly when accessed from inside the organization, but not when accessed from the outside. Another possibility is that the site may work fine for authenticated users, but not for anonymous users.

As you compile Security Log entries, the first thing that I recommend doing is scanning the log entries for 401 and 403 errors. There are a variety of 401 and 403 error codes, but knowing the exact error codes that are being generated can provide you with some excellent clues about the cause of the problem. Below I've listed the various 401 and 403 error codes and what these codes mean:

401;1 Unauthorized access because the logon has failed
401;2 Unauthorized access because the logon has failed due to the server configuration
401;3 Unauthorized access because of an Access Control List (ACL) entry
401;4 Unauthorized access because an IIS filter is blocking access
401;5 Unauthorized access because of an ISAPI or CGI application
403;1 Forbidden because execute access isn't allowed
403;2 Forbidden because read access isn't allowed
403;3 Forbidden because write access isn't allowed
403;4 Forbidden because SSL use is required
403;5 Forbidden because 128-bit SSL use is required
403;6 Forbidden because the IP address was rejected
403;7 Forbidden because a client certificate is required
403;8 Forbidden because access to the site is denied
403;9 Forbidden because too many users are presently attached to the site
403;10 Forbidden because of an invalid configuration
403;11 Forbidden because of an invalid password
403;12 Forbidden because the Web site requires a valid client certificate
403;13 Forbidden because the client certificate was revoked
403;14 Forbidden because the directory listing is denied
403;15 Forbidden because the client access license count was exceeded
403;16 Forbidden because the client access certificate is invalid or untrusted
403;17 Forbidden because the client access certificate is expired or is not yet valid

With luck, checking your Security Log for 401 and 403 errors and comparing any errors that you might find against my list of error codes has helped you to narrow down the cause of your problems. If you still need some help, however, check out the sections below. They deal with specific types of permissions issues and how to fix them.
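If you scan logs often, the table above is easy to turn into a small lookup; a sketch (the keys and messages come straight from the list, everything else is hypothetical):

```javascript
// A few entries from the table above, keyed as "status;substatus".
const iisErrors = {
  '401;1': 'Unauthorized access because the logon has failed',
  '401;3': 'Unauthorized access because of an Access Control List (ACL) entry',
  '403;4': 'Forbidden because SSL use is required',
  '403;14': 'Forbidden because the directory listing is denied',
};

// Look up the message for an HTTP status / substatus pair, the two
// fields this article recommended including in the log entries.
function describeIisError(status, substatus) {
  const key = status + ';' + substatus;
  return iisErrors[key] || 'Unknown error ' + key;
}

console.log(describeIisError(403, 14));
// → Forbidden because the directory listing is denied
```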

Conclusion

As you can see, IIS permissions-related problems can be a bit tricky. However, by taking a logical approach to these problems, you can easily solve them.

Transaction Processing in ADO.NET 2.0 - (Dotnet Tutorials)

Introduction:
It seems like just yesterday that Microsoft introduced a brand new data access technology that brought a ton of power as well as a decent-sized learning curve. When ADO 2.x turned into ADO.NET, things changed dramatically. It took some getting used to before the whole 'disconnected' model felt comfortable, but the "cool" factor made it all worthwhile. When .NET Framework 1.1 came out, very little changed in regard to what you needed to learn or what you could do with it. Well, we're turning another corner, and right off in the distance is ADO.NET 2.0. The differences between ADO.NET 2.0 and earlier versions are pretty profound, and you'll definitely have to spend some time learning new features if you want to take advantage of its power. I think you'll find that it's well worth the effort.
So you're probably thinking, "Oh wow, big deal, new features available in a new release of the product, I never would have guessed." Well, there are a lot of cool new features all over the place, too many to discuss in a single article, but one area that really stands out is transaction processing. To say that there's a lot of bang for the buck would be a real understatement, and if you've had to work extensively with transactions, local and/or distributed, in the past, I think you'll really be impressed with what Microsoft has done.

Transactions :
One of the more significant areas of improvement is in transaction processing. It's still early in beta, so nothing is written in stone, but by and large things got a LOT easier. In the original versions of ADO.NET, you could implement transactions a few different ways. If your implementation context was a single database, you could instantiate one of the IDbTransaction objects, attach it to your connection, process what you wanted, and either commit or roll back depending on the results. By virtue of the fact that this was done client side, many people found that it wasn't all they were hoping for. A similar method would entail rolling your transaction processing into a stored procedure and simply invoking the procedure. On the whole, I think this produced somewhat more reliable results, but it had problems too, namely that it was highly coupled with the specific database implementation you were using. So if you needed to, for instance, move a file, send a success message to an MSMQ message queue, and then update a SQL Server database, you were going to have to do a fair amount of work. This process has been simplified so much it's hard to believe it actually works. Anyway, I'll dive into an example in a second, but let me make sure that the distinction I'm about to draw is clear: now, just as before, you have two choices with regard to transactions, local and distributed. Distributed transactions span multiple resources, whereas local transactions typically span just one. Either way, you can take advantage of the TransactionScope object to simplify your life.

Simple Transaction Under ADO.NET 2.0:
bool IsConsistent = false;
using (System.Transactions.TransactionScope ts = new System.Transactions.TransactionScope())
{
    SqlConnection cn = new SqlConnection(CONNECTION_STRING);
    string sql = "DELETE Categories";
    SqlCommand cmd = new SqlCommand(sql, cn);
    cn.Open();
    cmd.ExecuteNonQuery();
    cn.Close();
    // Based on this property, the transaction will commit if
    // successful. If it fails, however, this property will
    // not be set and the transaction will not commit.
    ts.Consistent = IsConsistent;
}
Basically, I created a query which whacked an entire table, wrapped it in a transaction, and ensured that it wouldn't commit. In doing so, the table remains fully intact, just as it was before calling ExecuteNonQuery. Now, what's so different about this? Well, notice that the connection itself is confined within the scope, so it automatically participates in the transaction. All that is required to commit or roll back the transaction is specifying True or False for Consistent. A more realistic example can be illustrated by making a few minor changes:

A Slightly Improved Implementation:
bool IsConsistent = false;
using (System.Transactions.TransactionScope ts = new System.Transactions.TransactionScope())
{
    SqlConnection cn = new SqlConnection(CONNECTION_STRING);
    string sql = "DELETE Categories";
    SqlCommand cmd = new SqlCommand(sql, cn);
    cn.Open();
    try
    {
        cmd.ExecuteNonQuery();
        IsConsistent = true;
    }
    catch (SqlException ex)
    {
        // You can specify additional error handling here
    }
    cn.Close();
    // Again, since this was set to false originally, it will only
    // commit if it worked.
    ts.Consistent = IsConsistent;
}
This example is more in line with the earlier version of ADO.NET's transaction processing, namely, if everything works then commit, else rollback. This is hardly climactic in any practical sense because even though it's a lot more concise than previous versions, you're not really talking about any dramatic reduction in complexity of code. To see the elegance and power of this object you really need to examine a distributed scenario. Say that you have some really complex situation where you have a table in a Yukon database that you want to clear, and then you have a corresponding table in a separate database that needs to be cleared as well. Furthermore, assume that this is an all or nothing deal and there has to be complete success or complete failure.
bool IsConsistent = false;
using (TransactionScope ts = new TransactionScope())
{
    using (SqlConnection cn = new SqlConnection(YUKON_CONNECTION_STRING))
    {
        string sql = "DELETE Products";
        SqlCommand cmd = new SqlCommand(sql, cn);
        cn.Open();
        try
        {
            cmd.ExecuteNonQuery();
            using (SqlConnection cnn = new SqlConnection(CONNECTION_STRING))
            {
                string sql_2 = "DELETE Categories";
                SqlCommand cmd2 = new SqlCommand(sql_2, cnn);
                cnn.Open();
                cmd2.ExecuteNonQuery();
                cnn.Close();
            }
            IsConsistent = true;
        }
        catch (SqlException ex)
        {
            // You can specify additional error handling here
        }
        cn.Close();
    }
    ts.Consistent = IsConsistent;
}
Now, what I'm about to discuss is pretty amazing, and I can't in good conscience take credit for it. Angel Saenz-Badillos was the first one to tip me off to how all of this works, and he worked with me through a few examples. It's laughable at the moment, but the first time I heard of this, my initial response was something like, "OK, that'll save me 3 lines of code - great." I couldn't believe that it could possibly live up to the hype, and it took working with it a few times before my little brain could process it.
So here's the deal stated simply. Wrap everything in a TransactionScope object, and it takes care of everything else for you. What does that mean? Well, it will determine if you need a local or a distributed transaction, and it will react accordingly. It will enlist where necessary and process locally otherwise. Notice that the first connection string points to a Yukon (SQL Server 2005) database. As such, you can take advantage of "Delegation". This is a fancy way of saying "We don't need no stinking distributed transaction, we're using Yukon" and thereafter not using it unless it becomes necessary. Now, if you cut out the inner statements where you fire the query pointing to ANOTHER database, everything would be done under the purview of a local transaction. However, as soon as we try to hit another database, we're back in distributed transaction mode. Now, the natural assumption is that they are run under two different contexts, right? After all, you need to promote to DT mode once you try to hit the second database, but prior to that you were running locally. Actually, the answer is NO, you don't need to do squat. That's what's so amazing about it. As soon as the code gets to a point where it won't be running locally, everything is promoted accordingly. And you don't just have support for SQL Server here - Oracle and MSMQ are both currently supported, and there's a REALLY strong probability that File System support will be included in the final release.
So, does the same principle apply here if you were connecting to Oracle or MSMQ instead of SQL Server 2000? Yes, and for all intents and purposes the transactional component here would behave identically. If you've used COM+ before, then you no doubt realize how much easier this is. If you haven't, just type "Distributed Transaction COM+" into Google and read up on it, and you'll quickly see how much simpler this makes things. Even if you aren't familiar with either of those scenarios, just look at the unstable nature of client-side transaction processing in earlier versions of ADO.NET and you'll quickly see this is pretty darned impressive.
As cool as this is, there's no doubt some folks out there won't be impressed. Well, fine. You aren't precluded from doing anything you otherwise would by employing the TransactionScope; heck, you don't even have to use it. If you like writing tons of code, and you get a sense of security by doing unnecessary tasks, knock yourself out. Or even if you're not that much of a hard-core Luddite, but you want to do things manually, here's how you do it:

Client Side Transaction under 1.x Framework:
private bool OldSchool()
{
    bool IsConsistent = false;
    ICommittableTransaction oldSchoolTrans = Transaction.Create();
    using (SqlConnection cn = new SqlConnection(CONNECTION_STRING))
    {
        string sql = "DELETE Categories";
        SqlCommand cmd = new SqlCommand(sql, cn);
        cn.Open();
        cn.EnlistTransaction((ITransaction)oldSchoolTrans);
        try
        {
            cmd.ExecuteNonQuery();
            IsConsistent = true;
            return true;
        }
        catch (SqlException ex)
        {
            // You can specify additional error handling here
            // This is where you'd roll back your transaction
            return (ex.ToString().Length < 1);
        }
        finally
        {
            cn.Close();
        }
    }
}


Conclusion :
Anyway, as you can see, transactions got a lot different in ADO.NET 2.0, and by different I mean unequivocally better. Ten years ago it wasn't uncommon to work in a small company that didn't have a very sophisticated network, if it had one at all. Flat files and/or isolated data stores were pretty common. Message queues? As the landscape evolved, so did the sophistication requirements associated with data manipulation, and all of a sudden smaller, resource-limited companies started having features available to them that were previously only in the realm of the larger companies. And with the advent of features like the TransactionScope, its current support for Oracle, Microsoft SQL Server, and MSMQ (and, if all goes well, File System support under Windows), sophisticated transaction processing will become accessible to a lot more people.

How to make a mail-enabled contact in C# (C-Sharp .NET code)

Introduction:
I spent quite a while finding and converting code-snippets to make an e-mail enabled contact. It seemed to me like it should be a relatively easy task to carry out, but I was missing the intricacies of CDO vs. ADSI users. There is a tech-note on Microsoft which is quite clear, but you have to have a bit of CDO experience to decode what it means. The tech-note is available here.

The key pieces of information are:

1. You cannot use ADSI or an ADSI object to create a Mail Enabled anything.
2. You must use CDO and CDOEXM and ADODB.
3. You can only develop this on an Exchange server.
4. You can only deploy it on an Exchange server.
The tech-note states that you can develop it on a Win200x machine with the Admin tools installed. This does not work. The reason is that the ADODB lib from the Admin tools is not compatible with ADODB on .NET. You get a nasty error when trying to add a reference to CDOEXM with Admin Tools only. On a real Exchange server, this does not seem to be a problem.

Eventually I downloaded the Exchange 2003 SDK and found the VB code for making a Mail Enabled Person. I then converted that into C#. That code can be found at MSDN.

A fairly serious fault I found with the original code one day later: you must make sure that there will be no duplicate addresses in your domain. People were getting NDR reports because of duplicate e-mail addresses.

LDAP can be used to populate so many things, and there are so many attributes that contain e-mail addresses, that I was not sure how to test for "Exchange enabled". Searching proxyAddresses seemed to be a good way, because I read on tech boards that proxyAddresses is what Exchange looks up when delivering email. I found a related tech-note that seemed to support this idea.

The way I chose to do it was to search all objects (this really takes a lot of processor and time). Because you can have mail enabled folders, contacts, persons, etc... I thought this the best way, as you might have "mail enabled" toasters ending up in Active Directory sometime in the future: sendto:popup@mytoaster.com, you never know.

As far as function naming goes, I always like to ask questions in the positive so the code reads easier (in my opinion). I think it is more understandable to write:

if (! EmailAddressExistInLDAP("bsmith@mycompany", strRootDSE))

than to write:

if ( NotInAddressBook( ... ))

When you want to write the opposite, the logic can be difficult to follow.

if ( ! NotInAddressBook( ... ))

However, in this case it does mean the function returns true when the contact should not be added, and that might be confusing when reading EmailAddressExistInLDAP for the first time.
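The naming point is language-independent; a tiny sketch (hypothetical names and data) shows why the positive form reads better:

```javascript
// Positive predicate: call sites read like plain English in both the
// asserted and the negated form. (Names and data are hypothetical.)
function emailAddressExistsInDirectory(directory, address) {
  return directory.includes(address);
}

const directory = ['bsmith@mycompany.com'];

// Reads as: "if the address does not exist in the directory, add it".
if (!emailAddressExistsInDirectory(directory, 'jdoe@mycompany.com')) {
  directory.push('jdoe@mycompany.com');
}

// Compare the negative form, which forces a double negative at the
// call site: if (!notInAddressBook(directory, address)) { ... }

console.log(directory.length); // → 2
```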

This code takes about 5 minutes to run with the 600 contacts I am transferring, so the *emailaddress* wildcard search is an "expensive" way to go about it. If anyone knows something faster and better, I would certainly be interested.

Obviously, as with all other freeware code in the world, this is provided as is, and I recommend that you test it thoroughly in a development environment before releasing it into the real world.

You will also need to have created an ADSI contact before this code can mail-enable it. There are lots of code articles around about creating a contact.

Here is the code snippet:
using CDOEXM;

// strLDAPcn is an LDAP URL: "LDAP://cn=Bill Smith,
// ou=Connecticut,ou=Partners,DC=mynetwork,DC=com"
// strMailAddress is a valid Exchange
// address: "SMTP:bsmith@mypartnercompany.com"
// strRootDSE is the Root DSE, example: "DC=mynetwork,DC=com"


private bool MailEnablePerson(string strLDAPcn,
string strMailAddress, string strRootDSE)
{
// Create a CDO Person and a Recipient Object

CDO.Person cdoPerson = new CDO.PersonClass();
CDOEXM.IMailRecipient cdoRecipient;
bool bRetVal = false;

// set the parameters and fetch the user. The contact must already exist.

cdoPerson.DataSource.Open(strLDAPcn,null,
ADODB.ConnectModeEnum.adModeReadWrite,
ADODB.RecordCreateOptionsEnum.adFailIfNotExists,
ADODB.RecordOpenOptionsEnum.adOpenSource,"","");

// Cast the person onto the recipient

cdoRecipient = (IMailRecipient) cdoPerson;

// If the user already has an SMTP mail property,
// don't mail-enable them.
// "SMTPEmail" is normally set by Exchange, whereas the
// "email" property is the one in use from a contact.


if (cdoRecipient.SMTPEmail == "")
{

// If the user does not have an SMTP address in LDAP then MailEnable them.
// The contact must already exist as a basic LDAP entry.

if (! EmailAddressExistInLDAP(strMailAddress,strRootDSE) )
{
// The key call to CDO, and then Save it
// I am not sure why CDO.Person is saved but it was in the
// Microsoft code, so I left it.

cdoRecipient.MailEnable(strMailAddress);
cdoPerson.DataSource.Save() ;
bRetVal = true;
}
}
return bRetVal ;
}

// EmailAddressExistInLDAP. As per the tech-note,
// proxyAddresses = Exchange Email Addresses
// This takes two parameters

// strMailAddress the Email address example: bsmith@mypartnercompany.com
// strRootDSE the Root DSE example: DC=mycompany,DC=com

private bool EmailAddressExistInLDAP(string strMailAddress,
string strRootDSE)
{
// Set the return value to True
// True means that the Active Directory would NOT be updated and
// I prefer the default to be "don't do"

bool bRetVal = true;

// Set a objSearch starting at RootDSE, and a place to return it.

System.DirectoryServices.DirectorySearcher objSearch
= new DirectorySearcher(strRootDSE);
System.DirectoryServices.SearchResult objResult;

// Filter only on the proxyAddress

objSearch.Filter = "(& ( proxyAddresses=*"+strMailAddress+"*))";

// If we even find one, we can't add another.
// This is a slow way to look, but
// it is better than having two Exchange proxy
// addresses and getting NDRs.

objResult = objSearch.FindOne();

if (objResult == null)
bRetVal = false;
return bRetVal;
}

How To Get The IP Address Of A Machine - C-Sharp (.NET code)

Introduction
This article is not a technical overview or large discussion; it is more a collection of tips on how to get the IP address or host name of a machine. In the Win32 API this could be accomplished using the network API, and this is still true in the .NET Framework. The only difference is finding and understanding which namespace and class to use to accomplish the task. In the .NET Framework, the networking API lives in the System.Net namespace. The Dns class in System.Net can be used to get the host name of a machine, or to get the IP address if the host name is already known. The Dns class provides simple domain name resolution functionality and gives access to information from the Internet Domain Name System (DNS). The information returned includes multiple IP addresses and aliases if the specified host has more than one entry in the DNS database. The list is returned as a collection or an array of IPAddress objects. The following section shows code that obtains the IP addresses for a given host name.

DNSUtility Code
namespace NKUtilities
{
    using System;
    using System.Net;

    public class DNSUtility
    {
        public static int Main (string[] args)
        {
            string strHostName;
            if (args.Length == 0)
            {
                // Getting the IP address of the local machine...
                // First get the host name of the local machine.
                strHostName = Dns.GetHostName();
                Console.WriteLine("Local Machine's Host Name: " + strHostName);
            }
            else
            {
                strHostName = args[0];
            }

            // Then, using the host name, get the IP address list.
            // (Dns.GetHostByName is obsolete; Dns.GetHostEntry replaces it.)
            IPHostEntry ipEntry = Dns.GetHostEntry(strHostName);
            IPAddress[] addr = ipEntry.AddressList;

            for (int i = 0; i < addr.Length; i++)
            {
                Console.WriteLine("IP Address {0}: {1}", i, addr[i]);
            }
            return 0;
        }
    }
}
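Note that Dns.GetHostEntry can return both IPv6 and IPv4 entries in AddressList. If you specifically need an IPv4 address, filter on AddressFamily; a minimal sketch (the helper is my own illustration, not part of the article's code):

```csharp
using System;
using System.Linq;
using System.Net;
using System.Net.Sockets;

static class AddressHelper
{
    // Return only the IPv4 addresses from an address list.
    // InterNetwork is the AddressFamily value for IPv4;
    // InterNetworkV6 would select IPv6 instead.
    public static IPAddress[] IPv4Only(IPAddress[] addresses)
    {
        return addresses
            .Where(a => a.AddressFamily == AddressFamily.InterNetwork)
            .ToArray();
    }
}
```

For example, AddressHelper.IPv4Only(ipEntry.AddressList) would drop any IPv6 entries before the loop that prints them.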

How to Create Birthday Reminders Using Microsoft Outlook, in C#(C-Sharp)

Introduction
I guess it isn't rocket science to put birthday reminders into Outlook, but nevertheless, doing it in C# code cost me more effort than it should have. So without further ado, here is the code.

Steps
1. Ensure you reference Microsoft Outlook, then create a new application.
Outlook._Application olApp =
(Outlook._Application) new Outlook.Application();

2. Log on. (I think Outlook needs to be running.)
Outlook.NameSpace mapiNS = olApp.GetNamespace("MAPI");
string profile = "";
mapiNS.Logon(profile, null, null, null);


3. Create the appointment:
CreateYearlyAppointment(olApp, "Birthday",
    "Kim", new DateTime(2004, 03, 08, 7, 0, 0));
and repeat the line for your wife, kids, etc.!

The Code
static void CreateYearlyAppointment(Outlook._Application olApp,
string reminderComment, string person, DateTime dt)
{
// Use the Outlook application object to create an appointment
Outlook._AppointmentItem apt = (Outlook._AppointmentItem)
olApp.CreateItem(Outlook.OlItemType.olAppointmentItem);

// set some properties
apt.Subject = person + " : " + reminderComment;
apt.Body = reminderComment;

apt.Start = dt;
apt.End = dt.AddHours(1);

apt.ReminderMinutesBeforeStart = 24*60*7 * 1; // One week reminder

// Makes it appear bold in the calendar - which I like!
apt.BusyStatus = Outlook.OlBusyStatus.olTentative;

apt.AllDayEvent = false;
apt.Location = "";

Outlook.RecurrencePattern myPattern = apt.GetRecurrencePattern();
myPattern.RecurrenceType = Outlook.OlRecurrenceType.olRecursYearly;
myPattern.Interval = 1;
apt.Save();
}
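A side note on the 24*60*7 in the code above: ReminderMinutesBeforeStart takes whole minutes, so one week is 10080 minutes. A TimeSpan makes that arithmetic harder to get wrong; this tiny helper is my own sketch, not part of the article's code:

```csharp
using System;

static class ReminderHelper
{
    // Convert a lead time into the whole-minute value that
    // ReminderMinutesBeforeStart expects. E.g. one week = 24*60*7 = 10080.
    public static int MinutesBefore(TimeSpan lead)
    {
        return (int)lead.TotalMinutes;
    }
}
```

With it, the assignment reads apt.ReminderMinutesBeforeStart = ReminderHelper.MinutesBefore(TimeSpan.FromDays(7)).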

CCRing - C-Sharp(C# Code)

Introduction
I often revisit applications I've written to improve areas of the code with ideas and lessons I pick up over time. There always seems to be one primary goal: improving performance. When it comes to improving performance there are many things you can do, but in the end you'll always look to multithreading. It's theoretically the simplest concept to suggest, but not always the easiest to implement. If you've ever run into resource-sharing problems you'll know what I mean; although there are many articles on how to do it, the advice doesn't always mesh with every solution.

Some time ago I came across something called the CCR, it was like magic code created by two wizards Jeff Richter and George Chrysanthakopoulos. Part of the magic was to properly roll the syllables in Chrysanthakopoulos neatly off your tongue in one breath and when you get past that you'll see the light at the end of the multithreaded hallway of horrors. This managed DLL is packed with oodles of multithreaded fun and provides many levels of simplicity to common threading complexities. In other words, if you want to improve performance of your applications by implementing a multithreaded layer then you need to live and breathe the CCR. For some great background and fun grab some popcorn and visit Jeff and George at this link.

Background
After watching the video cast you should come away with some confidence and revelation, along with some courage to start using the CCR. So you'll open up your latest project and... where do you start? Well, one place you can start is by creating a simple asynchronous logger. Most applications I design have varying levels of logging for production diagnosis, but if you don't use a threaded model in your logger class then you've created blocking code and obvious room for improvement. So to get you started, I'll show you how to implement a CCR'd logger class that writes to a local file. There are many ways to log data, but for this demo I'm using simple local logging. You will most likely also be interested in this article; it explains the many faces of the CCR.

Using the Code
The following code can be dropped into your application and be utilized right away and although basic it can act as a replacement for any logging methods you currently implement.

The first thing we need to do is to new up something called a Dispatcher; think of this as the thread "pool". Notice the "1": it means we only want one thread handling these calls, so all "posts" to the class will execute asynchronously but sequentially. If you're writing to a SQL database you can try increasing this number, but be aware that data may not arrive sequentially! When utilizing a dispatcher for other non-sequential tasks, try increasing this number.

//Create Dispatcher
private static readonly Dispatcher _logDispatcher = new Dispatcher(1, "LogPool");
Secondly, you'll want a DispatcherQueue. The DQ manages the list of delegates to the methods you need to execute.

//Create Dispatch Queue
private static DispatcherQueue _logDispatcherQueue;
Next you need a Port; ports are like input queues. You'll "post" to ports to invoke your registered methods. Since we post strings, the port is typed accordingly.

//Message Port
private static readonly Port<string> _logPort = new Port<string>();
Now for the class; don't forget to include the CCR in the using directives!

using System;
using System.IO;
using System.Threading;
using Microsoft.Ccr.Core;

namespace CCRing_Demo
{
public static class DataLogger
{
//Create Dispatcher
private static readonly Dispatcher _logDispatcher = new Dispatcher(1,
ThreadPriority.Normal, false, "LogPool");

//Create Dispatch Queue
private static DispatcherQueue _logDispatcherQueue;

//Message Port
private static readonly Port<string> _logPort = new Port<string>();

//Fields
private static string _logFileName;

private static void Init()
{
_logDispatcherQueue = new DispatcherQueue("LogDispatcherQueue",
_logDispatcher);
Arbiter.Activate(_logDispatcherQueue, Arbiter.Receive(true, _logPort,
WriteMessage));

_logFileName = "DMT_Message_Log_" + String.Format("{0:yyMMddHHmmss}",
DateTime.Now) + ".log";
}

private static void WriteMessage(string messageString)
{
using (var sw = new StreamWriter(_logFileName, true))
{
sw.WriteLine("[" + String.Format("{0:HH:mm:ss tt}", DateTime.Now) + "]" +
messageString);
}
}


public static void Log(string messageString)
{
if (String.IsNullOrEmpty(_logFileName))
Init();
_logPort.Post(messageString);
}

//Any thread tasks still running?
private static bool PendingJobs
{
get
{
return _logDispatcher.PendingTaskCount > 0;
}
}

//Since we are not using background threading we need to add this method to
//dispose the DQ for application end
public static void StopLogger()
{
while (PendingJobs){Thread.Sleep(100);}
_logFileName = null;
_logDispatcherQueue.Dispose();
}
}
}
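If you want the same single-consumer, post-and-forget shape without the CCR assembly, it can be approximated with the BCL's BlockingCollection. This is my own analogy, not CCR code; it trades the CCR's arbiters for one plain worker thread, but keeps the "posts execute asynchronously yet sequentially" behavior of a Dispatcher created with "1":

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

public sealed class SimpleAsyncLogger : IDisposable
{
    private readonly BlockingCollection<string> _queue =
        new BlockingCollection<string>();
    private readonly Thread _worker;
    private readonly List<string> _sink = new List<string>();

    public SimpleAsyncLogger()
    {
        // One consumer thread: posts are handled in order, one at a time.
        _worker = new Thread(() =>
        {
            foreach (string msg in _queue.GetConsumingEnumerable())
                _sink.Add(msg);   // replace with a file write in real code
        });
        _worker.Start();
    }

    // Non-blocking for the caller, like posting to a CCR port.
    public void Log(string message) { _queue.Add(message); }

    // Like StopLogger above: wait for pending work, then shut down.
    public IList<string> Drain()
    {
        _queue.CompleteAdding();
        _worker.Join();
        return _sink;
    }

    public void Dispose() { _queue.Dispose(); }
}
```

The point of the comparison is that the CCR gives you this pattern (plus much more, such as interleaves and multi-port arbiters) without hand-rolling the thread management.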


SOURCE CODE:

CLICK HERE TO DOWNLOAD CCRing_Demo - 97.73 KB

How To Manually Create A Typed DataTable in C-Sharp(C#)

Introduction
I am writing this article because there is not a lot of information on the web about using the DataTable.GetRowType() method, and the code examples that I found were plain wrong or incomplete. Furthermore, there don't appear to be any automated tools for creating just a typed DataTable; instead, there are tools for creating a typed DataSet. In the end, I ended up creating a typed DataSet simply to figure out what I was doing wrong with my manually created typed DataTable. So this is a beginner article on what I learned, and the purpose is to provide an example and correct information as a resource for others. I don't provide a tool for creating a typed DataTable; that might be for a future article.

What Is A Typed DataTable?

A typed DataTable lets you create a specific DataTable, already initialized with the required columns, constraints, and so forth. A typed DataTable typically also uses a typed DataRow, which lets you access fields through their property names. So, instead of:

DataTable personTable=new DataTable();
personTable.Columns.Add(new DataColumn("LastName"));
personTable.Columns.Add(new DataColumn("FirstName"));

DataRow row=personTable.NewRow();
row["LastName"]="Clifton";
row["FirstName"]="Marc";
Using a typed DataTable would look something like this:

PersonTable personTable=new PersonTable();
PersonRow row=personTable.GetNewRow();
row.LastName="Clifton";
row.FirstName="Marc";
The advantage of a typed DataTable is the same as with a typed DataSet: you have a strongly typed DataTable and DataRow, and you are using properties instead of strings to set/get values in a row. Furthermore, by using a typed DataRow, the field value, which in a DataRow is an object, can instead be cast to the correct type in the property getter. This improves code readability and eliminates the chances of improper construction and typos in the field names.

Creating The Typed DataTable
To create a typed DataTable, create your own class derived from DataTable. For example:

public class PersonTable : DataTable
{
}
There are two methods that you need to override: GetRowType() and NewRowFromBuilder(). The point of this article is really that it took me about four hours to find out that I needed to override the second method.

protected override Type GetRowType()
{
return typeof(PersonRow);
}

protected override DataRow NewRowFromBuilder(DataRowBuilder builder)
{
return new PersonRow(builder);
}
That second method is vital. If you don't provide it, you will get an exception concerning "array type mismatch" when attempting to create a new row. It took me hours to figure that out!

Creating The Typed DataRow
Next, you need a typed DataRow to define the PersonRow type referenced above.

public class PersonRow : DataRow
{
}
Constructor
The constructor parameter, given the NewRowFromBuilder call above, is obvious, but what is less obvious is that the constructor must be marked protected or internal, because the DataRow constructor is marked internal.

public class PersonRow : DataRow
{
internal PersonRow(DataRowBuilder builder) : base(builder)
{
}
}
Filling In The Details
Next, I'll show the basics for both the typed DataTable and DataRow. The purpose of these methods and properties is to utilize the typed DataRow to avoid casting in the code that requires the DataTable.

PersonTable Methods
Constructor
In the constructor, we can add the columns and constraints that define the table.

public class PersonTable : DataTable
{
public PersonTable()
{
Columns.Add(new DataColumn("LastName", typeof(string)));
Columns.Add(new DataColumn("FirstName", typeof(string)));
}
}
The above is a trivial example, which doesn't illustrate creating a primary key, setting constraints on the fields, and so forth.

Indexer
You can implement an indexer that returns the typed DataRow:

public PersonRow this[int idx]
{
get { return (PersonRow)Rows[idx]; }
}
The indexer is implemented on the typed DataTable because we can't override the indexer on the Rows property. Bounds checking can be left to the .NET framework's Rows property. The typical usage for a non-typed DataRow would look like this:

DataRow row=someTable.Rows[n];
whereas the indexer for the typed DataRow would look like this:

PersonRow row=personTable[n];
Not ideal, as it looks like I'm indexing an array of tables. An alternative would be to implement a property perhaps named PersonRows, however this would require implementing a PersonRowsCollection and copying the Rows collection to the typed collection, which would most likely be a significant performance hit every time we index the Rows collection. This is even less ideal!

Add
The Add method should accept the typed DataRow. This protects us from adding the wrong kind of row to the table. With a non-typed DataTable you would only find out at runtime; the advantage of a typed Add method is that you get a compiler error instead.

public void Add(PersonRow row)
{
Rows.Add(row);
}
Remove
A typed Remove method has the same advantages of the typed Add method above:

public void Remove(PersonRow row)
{
Rows.Remove(row);
}
GetNewRow
Here we end up with a conflict if we try to use the DataTable.NewRow() method, because the only thing that differs is the return type, not the method signature (the parameters). So, we could write:

public new PersonRow NewRow()
{
PersonRow row = (PersonRow)base.NewRow();

return row;
}
However, I am personally against using the "new" keyword to override the behavior of a base class, so I prefer a different method name altogether:

public PersonRow GetNewRow()
{
PersonRow row = (PersonRow)NewRow();

return row;
}
PersonRow Properties
The typed DataRow should include properties for the columns defined in the PersonTable constructor:

public string LastName
{
get {return (string)base["LastName"];}
set {base["LastName"]=value;}
}

public string FirstName
{
get {return (string)base["FirstName"];}
set {base["FirstName"]=value;}
}
The advantage here is that we have property names (any typo results in a compiler error), we can utilize Intellisense, and we can convert the object type here instead of in the application. Furthermore, we could add validation and property changed events if we wanted to. This might also be a good place to deal with DBNull to/from null conversions, and if we use nullable types, we can add further intelligence to the property getters/setters.
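As a sketch of that DBNull idea (the Age column and wrapper are hypothetical, not part of the PersonTable above): a nullable column can be surfaced as a nullable CLR type so callers never see DBNull at all:

```csharp
using System;
using System.Data;

public class AgeRow
{
    // Illustrative wrapper only: maps a nullable "Age" column to int?,
    // converting DBNull to null in the getter and back in the setter.
    private readonly DataRow row;

    public AgeRow(DataRow row) { this.row = row; }

    public int? Age
    {
        get { return row.IsNull("Age") ? (int?)null : (int)row["Age"]; }
        set { row["Age"] = (object)value ?? DBNull.Value; }
    }
}
```

In a real typed DataRow you would put such a property directly on the derived row class rather than in a wrapper.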

PersonRow Constructor
You may want to initialize the fields in the constructor:

public class PersonRow : DataRow
{
internal PersonRow(DataRowBuilder builder) : base(builder)
{
LastName=String.Empty;
FirstName=String.Empty;
}
}
Row Events
If necessary, you may want to implement typed row events. The typical row events are:

ColumnChanged
ColumnChanging
RowChanged
RowChanging
RowDeleted
RowDeleting
I'll look at one of these events, RowChanged, to illustrate a typed event.

Defining The Delegate
First, we need a delegate of the appropriate type:

public delegate void PersonRowChangedDlgt(PersonTable sender, PersonRowChangedEventArgs args);
Note that this delegate defines typed parameters.

The Event
We can now add the event to the PersonTable class:

public event PersonRowChangedDlgt PersonRowChanged;
Defining The Event Argument Class
We also need a typed event argument class because we want to use our typed PersonRow:

public class PersonRowChangedEventArgs
{
protected DataRowAction action;
protected PersonRow row;

public DataRowAction Action
{
get { return action; }
}

public PersonRow Row
{
get { return row; }
}

public PersonRowChangedEventArgs(DataRowAction action, PersonRow row)
{
this.action = action;
this.row = row;
}
}
Overriding The OnRowChanged Method
Rather than add a RowChanged event handler, we can override the OnRowChanged method and create a similar pattern with a new method, OnPersonRowChanged. Note that we still call the base DataTable implementation for RowChanged. These methods are added to the PersonTable class.

protected override void OnRowChanged(DataRowChangeEventArgs e)
{
base.OnRowChanged(e);
PersonRowChangedEventArgs args = new PersonRowChangedEventArgs(e.Action, (PersonRow)e.Row);
OnPersonRowChanged(args);
}

protected virtual void OnPersonRowChanged(PersonRowChangedEventArgs args)
{
if (PersonRowChanged != null)
{
PersonRowChanged(this, args);
}
}
Note that the above method is virtual, as this is the pattern for how events are raised in the .NET framework, and it's good to be consistent with this pattern.

Now, that's a lot of work to add just one typed event, so you can see that having a code generator would be really helpful.

Using The Event
Here's a silly example to illustrate using the typed DataTable and the event:

class Program
{
static void Main(string[] args)
{
PersonTable table = new PersonTable();
table.PersonRowChanged += new PersonRowChangedDlgt(OnPersonRowChanged);
PersonRow row = table.GetNewRow();
table.Add(row);
}

static void OnPersonRowChanged(PersonTable sender, PersonRowChangedEventArgs args)
{
// This is silly example only for the purposes of illustrating using typed events.
// Do not do this in real applications, because you would never use this Changed event
// to validate row fields!
if (args.Row.LastName != String.Empty)
{
throw new ApplicationException("The row did not initialize to an empty string for the LastName field.");
}
}
}

This, however, illustrates the beauty of a typed DataTable and typed DataRow: readability and compiler checking of proper usage.

Conclusion
Hopefully this article clearly illustrates how to create a typed DataTable manually. The "discovery" that I made (that I couldn't find anywhere else on the Internet) is that, when you override GetRowType(), you also need to override NewRowFromBuilder().
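Pulling the pieces together, here is a minimal, compilable version of the whole example (same names as above; constraints and the event plumbing are omitted for brevity):

```csharp
using System;
using System.Data;

public class PersonRow : DataRow
{
    // Must be internal/protected: the DataRow constructor is internal.
    internal PersonRow(DataRowBuilder builder) : base(builder)
    {
        LastName = String.Empty;
        FirstName = String.Empty;
    }

    public string LastName
    {
        get { return (string)base["LastName"]; }
        set { base["LastName"] = value; }
    }

    public string FirstName
    {
        get { return (string)base["FirstName"]; }
        set { base["FirstName"] = value; }
    }
}

public class PersonTable : DataTable
{
    public PersonTable()
    {
        Columns.Add(new DataColumn("LastName", typeof(string)));
        Columns.Add(new DataColumn("FirstName", typeof(string)));
    }

    // Both overrides are required; omitting NewRowFromBuilder
    // causes the "array type mismatch" exception described above.
    protected override Type GetRowType() { return typeof(PersonRow); }

    protected override DataRow NewRowFromBuilder(DataRowBuilder builder)
    {
        return new PersonRow(builder);
    }

    public PersonRow this[int idx]
    {
        get { return (PersonRow)Rows[idx]; }
    }

    public PersonRow GetNewRow() { return (PersonRow)NewRow(); }

    public void Add(PersonRow row) { Rows.Add(row); }
}
```

Usage is then exactly as promised in the introduction: table.GetNewRow(), set row.LastName and row.FirstName, table.Add(row).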
