All Differences in Dot Net
An abstract class is a special kind of class that cannot be instantiated. So why do we need a class that cannot be instantiated? An abstract class is meant only to be subclassed (inherited from). In other words, it allows other classes to inherit from it but cannot be instantiated itself. The advantage is that it enforces certain hierarchies for all of its subclasses. In simple words, it is a kind of contract that forces all the subclasses to carry on the same hierarchies or standards.
What is an Interface?
When we create an interface, we are basically creating a set of methods without any implementation that must be overridden by the implementing classes. The advantage is that it provides a way for a class to belong to two hierarchies: one from its inheritance (base class) hierarchy and one from the interface.
When we create an abstract class, we are creating a base class that might have one or more completed methods, but at least one method is left uncompleted and declared abstract. If all the methods of an abstract class are uncompleted, then it is the same as an interface. The purpose of an abstract class is to provide a base class definition for how a set of derived classes will work, and then allow the programmers to fill in the implementation in the derived classes.
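As a minimal sketch of the two ideas in C# (the ILogger, Shape and Circle types are made up for illustration):

using System;

// An interface: method signatures only, no implementation.
public interface ILogger
{
    void Log(string message);
}

// An abstract class: a mix of completed and abstract members.
public abstract class Shape
{
    public abstract double Area();        // must be implemented by subclasses

    public void Describe()                // completed method shared by all subclasses
    {
        Console.WriteLine("Area = " + Area());
    }
}

// A derived class inherits from one base class and can implement the interface as well.
public class Circle : Shape, ILogger
{
    private readonly double radius;
    public Circle(double radius) { this.radius = radius; }

    public override double Area() { return Math.PI * radius * radius; }
    public void Log(string message) { Console.WriteLine(message); }
}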
Both Together
There are some similarities and differences between an interface and an abstract class that I have arranged in a table for easier comparison:
What are the basic differences between user controls and custom controls?
Now that you have a basic idea of what user controls and custom controls are
and how to create them, let's take a quick look at the differences between
the two.
Factors: User control vs Custom control
Deployment
User control: designed for single-application scenarios; deployed in source form (.ascx) along with the source code of the application.
Custom control: designed so that it can be used by more than one application; deployed in compiled form, either in the application's Bin directory or in the global assembly cache.
Performance (session state modes)
InProc: fastest, but the more session data there is, the more memory is consumed on the web server, and that can affect performance.
StateServer: when storing data of basic types (e.g. string, integer, etc.), in one test environment it is about 15% slower than InProc. However, the cost of serialization/deserialization can affect performance if you're storing lots of objects. You have to do performance testing for your own scenario.
SQLServer: when storing data of basic types (e.g. string, integer, etc.), in one test environment it is about 25% slower than InProc. The same warning about serialization applies as for StateServer.
ArrayList:
==============================
The ArrayList is a collection class that models a dynamic array, whose size increases
as required when new objects are added to the array.
Namespace : System.Collections
Implementing Interfaces : IList, ICollection, IEnumerable, ICloneable
1. ArrayList is a list.
2. In this we can only add items to the list (there is no key).
3. Here we can add a value of any data type; every item in an ArrayList is treated as an object.
Characteristics:
==============================
i) To store dynamically sized data, use the ArrayList class; its memory usage is compact.
ii) The search for an item in an ArrayList is performed sequentially, so it is slow.
iii) Type of access is by index.
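A minimal sketch of ArrayList usage (variable names and values are illustrative):

using System;
using System.Collections;

class ArrayListDemo
{
    static void Main()
    {
        ArrayList list = new ArrayList();
        list.Add("hello");   // any type can be added; each item is stored as object
        list.Add(42);
        list.Add(3.14);

        // Indexed access; a cast is needed to get the original type back.
        string first = (string)list[0];
        Console.WriteLine(first + ", count = " + list.Count);
    }
}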
HashTable:
==============================
A Hashtable is a collection of key-value pairs implemented using a hash-table algorithm.
Namespace : System.Collections
Implementing Interfaces : IDictionary, ICollection, IEnumerable, ICloneable
1. Hashtable is a map (a dictionary of key-value pairs).
2. Here we add data together with a key.
3. Retrieving an item by key from a Hashtable is faster than searching an ArrayList.
In a hash table we can store objects of different types (even structures), but every entry has two fields: a hash key and a value.
Characteristics:
==============================
i) Search is very fast.
ii) Hashtables are big and fast.
iii) High memory usage.
iv) Type of access is by the hash of a key value.
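A small sketch of Hashtable usage (the keys and values are made up):

using System;
using System.Collections;

class HashtableDemo
{
    static void Main()
    {
        Hashtable table = new Hashtable();
        table["one"] = 1;            // each entry is a key/value pair
        table["pi"] = 3.14;          // keys and values can be of different types
        table["greeting"] = "hello";

        // Retrieval is by key, using the hash of the key.
        Console.WriteLine(table["pi"]);
        Console.WriteLine(table.ContainsKey("one"));   // True
    }
}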
1) A hash table stores data as name/value pairs, while an array stores only values.
2) To access a value in a hash table, you pass the name (key), while in an array you pass the index number.
3) You can store different types of data in a hash table (say int, string, etc.), while in an array you can store only data of a single type.
Array - represents an old-school memory array - kind of like an alias for a normal type[] array. Can enumerate. Can't grow automatically. I would assume very fast insertion and retrieval speed.
ArrayList - automatically growing array. Adds more overhead. Can enumerate, probably slower than a normal array but still pretty fast. These are used a lot in .NET.
List - one of my favs - can be used with generics, so you can have a strongly typed array, e.g. List<string>. Other than that, acts very much like ArrayList.
Hashtable - plain old hashtable. O(1) on average, O(n) in the worst case. Can enumerate the Values and Keys properties, and do key/value pairs.
Dictionary - same as above, only strongly typed via generics, such as Dictionary<string, string>.
SortedList - a sorted generic list. Slower on insertion since it has to figure out where to put things. Can enumerate; probably the same on retrieval since it doesn't have to re-sort, but deletion will be slower than a plain old list.
I tend to use List and Dictionary all the time - once you start using them strongly typed with generics, it's really hard to go back to the standard non-generic ones.
There are lots of other data structures too - there's KeyValuePair, which you can use to do some interesting things, and there's a SortedDictionary which can be useful as well.
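A short sketch contrasting some of these collections (all names and values are illustrative):

using System;
using System.Collections;
using System.Collections.Generic;

class CollectionsDemo
{
    static void Main()
    {
        int[] array = new int[3] { 1, 2, 3 };             // fixed size, strongly typed

        ArrayList arrayList = new ArrayList();             // grows automatically, stores object
        arrayList.Add(1);
        arrayList.Add("two");                              // mixed types allowed

        List<int> list = new List<int> { 1, 2, 3 };        // grows automatically, strongly typed

        Dictionary<string, int> ages = new Dictionary<string, int>();
        ages["alice"] = 30;                                // strongly typed key/value pairs

        SortedList<string, int> sorted = new SortedList<string, int>();
        sorted["b"] = 2;
        sorted["a"] = 1;                                   // kept sorted by key on insertion

        foreach (KeyValuePair<string, int> pair in sorted)
            Console.WriteLine(pair.Key + " = " + pair.Value);
    }
}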
ARRAY vs ARRAYLIST
1. Declaration: char[] vowel = new char[5]; vs ArrayList a_list = new ArrayList();
2. Array is in the System namespace; ArrayList is in the System.Collections namespace.
3. The capacity of an Array is fixed; an ArrayList can increase and decrease in size dynamically.
4. An Array is a collection of similar items; an ArrayList can hold items of different types.
5. An Array can have multiple dimensions; an ArrayList always has exactly one dimension.
1) An array can be of any data type but contains only that one data type, while an array list can contain any data type in the form of object.
2) With an array you cannot dynamically increase or decrease the size; you must define the size of the array. (You can change the size with a ReDim statement in VB, but you still have to define the type.) With an array list you can make a list of any size; when an element or object is added, the array list automatically increases its capacity as needed.
3) Once you delete an item in an array, its slot is kept as empty (e.g. a[2] = ""), but in an ArrayList the item at index a[3] moves down to occupy the position of a[2]. Likewise, if you try to insert a value at a[2] when the array already has items in every slot, the array will throw an error, whereas an ArrayList inserts the new item at a[2] and the item that was there shifts up to become a[3].
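A quick sketch of this behavioural difference (the values are illustrative):

using System;
using System.Collections;

class ResizeDemo
{
    static void Main()
    {
        int[] fixedArray = new int[3];        // size is fixed at 3
        fixedArray[0] = 10;
        // fixedArray[3] = 40;                // would throw IndexOutOfRangeException

        ArrayList growing = new ArrayList();  // capacity grows as items are added
        growing.Add(10);
        growing.Add(20);
        growing.Add(30);
        growing.RemoveAt(1);                  // remaining items shift down; no empty slot is left
        Console.WriteLine(growing[1]);        // prints 30
    }
}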
Value Types:-
Value types contain the integral types, meaning the signed and unsigned integers; the floating-point and decimal types also come under value types.
Reference Types:-
Reference types are user-defined types. They include classes, interfaces, delegates and arrays, as well as the object and string types.
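A minimal sketch of the difference in assignment behaviour (the PointValue and PointRef types are made up for illustration):

using System;

struct PointValue { public int X; }   // value type (struct)
class PointRef { public int X; }      // reference type (class)

class TypeDemo
{
    static void Main()
    {
        PointValue v1 = new PointValue();
        v1.X = 1;
        PointValue v2 = v1;            // the value is copied
        v2.X = 99;
        Console.WriteLine(v1.X);       // 1 - the original is unchanged

        PointRef r1 = new PointRef();
        r1.X = 1;
        PointRef r2 = r1;              // only the reference is copied
        r2.X = 99;
        Console.WriteLine(r1.X);       // 99 - both variables point to the same object
    }
}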
Finalize: is a destructor, called by the Garbage Collector before the object's memory is reclaimed. Implement it when you have unmanaged resources in your code and want to make sure that these resources are freed when garbage collection happens.
The .NET Garbage Collector does almost all clean-up activity for your objects. But unmanaged resources (e.g. objects created through Windows APIs, file handles, database connection objects, COM objects, etc.) are outside the scope of the .NET Framework, so we have to clean up such resources explicitly. For these types of objects the .NET Framework provides the Object.Finalize method, which can be overridden so that clean-up code for unmanaged resources can be put in it.
1> The CLR uses the Dispose and Finalize methods for performing garbage collection of runtime objects of .NET applications.
2> The CLR has a Garbage Collector (GC) which periodically checks for unused and unreferenced objects on the heap. It calls the Finalize() method to free the memory used by such objects.
3> The Dispose method can be invoked only on classes that implement the IDisposable interface.
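A compact sketch of the usual Dispose/Finalize pattern (ResourceHolder and its handle are imaginary):

using System;

class ResourceHolder : IDisposable
{
    private IntPtr unmanagedHandle;   // imaginary unmanaged resource
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);    // the GC no longer needs to call Finalize
    }

    protected virtual void Dispose(bool disposing)
    {
        if (!disposed)
        {
            // Release unmanaged resources here (and managed ones when disposing is true).
            unmanagedHandle = IntPtr.Zero;
            disposed = true;
        }
    }

    ~ResourceHolder()                 // compiled into an override of Object.Finalize
    {
        Dispose(false);
    }
}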
Data structures are classes that are used to organize data and provide various operations upon that data.
The best way to compare data structures is to look at how each data structure's performance changes as the amount of data stored increases.
The sort of analysis described here is called asymptotic analysis, as it examines how the efficiency of a data structure changes as the data structure's size approaches infinity. The notation commonly used in asymptotic analysis is called big-Oh notation.
The big-Oh notation to describe the performance of searching an unsorted array would
be denoted as O(n). The large script O is where the terminology big-Oh notation comes
from, and the n indicates that the number of steps required to search an array grows
linearly as the size of the array grows.
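For example, a linear search of an unsorted array is O(n) because, in the worst case, every element must be examined; a minimal sketch:

using System;

class SearchDemo
{
    // O(n): the number of comparisons grows linearly with the array's size.
    static int LinearSearch(int[] data, int target)
    {
        for (int i = 0; i < data.Length; i++)
            if (data[i] == target)
                return i;
        return -1;   // not found
    }

    static void Main()
    {
        Console.WriteLine(LinearSearch(new int[] { 4, 8, 15, 16 }, 15));   // prints 2
    }
}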
Arrays are one of the simplest and most widely used data structures in computer
programs. Arrays in any programming language all share a few common properties:
• Allocation
• Accessing
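The allocation statement the next paragraph refers to takes roughly this form, with arrayType, arrayName and allocationSize as placeholders:

arrayType[] arrayName = new arrayType[allocationSize];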
This allocates a contiguous block of memory in the CLR-managed heap large enough to
hold the allocationSize number of arrayTypes. If arrayType is a value type, then
allocationSize number of unboxed arrayType values are created. If arrayType is a
reference type, then allocationSize number of arrayType references are created.
The following is an example that highlights these points:
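A sketch of the kind of example meant here, contrasting a value-type array with a reference-type array (the variable names are illustrative):

using System;

class AllocationDemo
{
    static void Main()
    {
        // Value type: six unboxed int values are created, initialized to 0.
        int[] counts = new int[6];

        // Reference type: six string references are created, initialized to null.
        string[] names = new string[6];

        counts[0] = 10;
        names[0] = "Scott";
        Console.WriteLine(counts[0] + " " + names[0]);
    }
}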
All arrays in .NET allow their elements to both be read and written to. The syntax for
accessing an array element is:
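Roughly, with arrayName and elementIndex as placeholders:

arrayName[elementIndex]            // read a value
arrayName[elementIndex] = value;   // write a value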
When working with an array, you might need to change the number of elements it holds.
To do so, you’ll need to create a new array instance of the specified size and copy the
contents of the old array into the new, resized array
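A sketch of that copy step (from .NET 2.0 onwards, Array.Resize does the same thing for you):

using System;

class ResizeArrayDemo
{
    static void Main()
    {
        int[] original = new int[] { 1, 2, 3 };

        // Create a bigger array and copy the old contents into it.
        int[] resized = new int[6];
        Array.Copy(original, resized, original.Length);
        original = resized;                  // the variable now refers to the larger array

        // Equivalent shortcut:
        Array.Resize(ref original, 12);
        Console.WriteLine(original.Length);  // prints 12
    }
}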
Do not use arrays when your application will store large amounts of data that are searched frequently.
A flexible data structure design is where the data structure maintains an internal array of
object instances. Because all types in the .NET Framework are derived from the
object type, the data structure could store any type.
(Figure: how an ArrayList behaves with respect to the heap/stack and boxing/unboxing.)
The ArrayList provides added flexibility over the standard array, but this flexibility comes at the cost of performance. Because the ArrayList stores an array of objects, when reading a value from an ArrayList you need to explicitly cast it to the data type being stored at the specified location.
Informative: Boxing / Un-boxing – in this brief intermission I will discuss what boxing and un-boxing are.
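A minimal sketch of the boxing/un-boxing example the next paragraph refers to:

using System;

class BoxingDemo
{
    static void Main()
    {
        int i = 123;

        object box = i;        // boxing: the int value is copied into an object on the heap
        int j = (int)box;      // un-boxing: the value is copied back out of the box

        Console.WriteLine(j);  // prints 123
    }
}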
In the above example, it is shown how an int value can be converted to an object and
back again to an int. This example shows both boxing and un-boxing. When a variable
of a value type needs to be converted to a reference type, an object box is allocated to
hold the value and the value is copied into the box.
Un-boxing is just the opposite. When an object box is cast back to its original value
type, the value is copied out of the box and into the appropriate storage location.
Value type objects have two representations: an unboxed form and a boxed form.
Reference types are always in a boxed form.
The typing and performance issues associated with the ArrayList have been remedied in .NET Framework 2.0. Generics allow a developer creating a data structure to defer type selection until the data structure is used.
Now using the sample:
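Below is a sketch of such a type-safe list; the class name TypeSafeList and the variable fib come from the discussion that follows, while the implementation details are illustrative:

using System;

// A tiny generic list: type selection is deferred to the developer who uses it.
public class TypeSafeList<T>
{
    private T[] items = new T[4];
    private int count;

    public void Add(T item)
    {
        if (count == items.Length)
            Array.Resize(ref items, items.Length * 2);   // grow the internal array as needed
        items[count++] = item;
    }

    public T this[int index]
    {
        get { return items[index]; }
    }
}

class GenericsDemo
{
    static void Main()
    {
        TypeSafeList<int> fib = new TypeSafeList<int>();
        fib.Add(1);
        fib.Add(1);
        fib.Add(2);
        // fib.Add("3");            // would not compile: only ints are allowed
        Console.WriteLine(fib[2]);  // prints 2
    }
}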
The main advantages of Generics include:
• Type-safety: a developer using the TypeSafeList class can only add elements
that are of the type or are derived from the type specified. For example, trying to
add a string to the fib TypeSafeList in the example above would result in a
compile-time error.
• Performance: Generics remove the need to type check at run-time, and
eliminate the cost associated with boxing and unboxing.
• Reusability: Generics break the tight-coupling between a data structure and the
application for which it was created. This provides a higher degree of reuse for
data structures.
An array is a pain to maintain, especially if you don't know the initial dimensions and sizing. Imagine a wrapper above the array that managed this nuisance for you. .NET 2.0 introduces such a wrapper, List, which can be found in the System.Collections.Generic namespace.
The List class contains an internal array and exposes methods and properties that, among other things, allow read and write access to the elements of that internal array. It is a homogeneous data structure that utilizes Generics.
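A short sketch of List<T> in use (names and values are illustrative):

using System;
using System.Collections.Generic;

class ListDemo
{
    static void Main()
    {
        List<string> names = new List<string>();   // strongly typed; no casting, no boxing
        names.Add("Ada");
        names.Add("Linus");
        names.Add("Grace");

        names.Sort();                              // built-in sorting
        bool found = names.Contains("Ada");        // built-in searching
        names.RemoveAt(1);                         // the list resizes itself

        Console.WriteLine(names[0] + ", found = " + found + ", count = " + names.Count);
    }
}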
Conclusion:
Like the array, the List is a collection of homogeneous data items. With a List, you don't need to worry about resizing or capacity limits, and there are numerous List methods for searching, sorting, and modifying the List's data. The List class uses Generics to provide a type-safe, reusable collection data structure.
Server.Transfer:
----------------------
1: The URL will not change.
2: The page must be on the same server.
3: Avoids a round trip to the client.
4: The page extension must be .aspx (we can't access non-ASPX pages).
Response.Redirect
-------------------
1: The page URL will change.
2: We can connect to a resource on a different server.
3: A round trip is required (when we request the page, the server tells the browser to issue one more request to the location where the page is found).
4: We can access non-ASPX pages as well (e.g. .htm, .asp, .aspx).
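A small sketch of the two calls inside an ASP.NET code-behind (the page class, button handlers and URLs are made up):

using System;

public partial class CheckoutPage : System.Web.UI.Page
{
    protected void btnSummary_Click(object sender, EventArgs e)
    {
        // Executes another .aspx page on the same server without a client round trip;
        // the browser's address bar still shows the original URL.
        Server.Transfer("OrderSummary.aspx");
    }

    protected void btnHelp_Click(object sender, EventArgs e)
    {
        // Sends an HTTP redirect; the browser makes a second request,
        // so the URL changes and the target can be on another server or a non-ASPX page.
        Response.Redirect("http://www.example.com/help.htm");
    }
}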
Function vs Stored Procedure:
1. A function must return at least one value, while a stored procedure can return zero or more values using OUT arguments.
Before SQL 2000, User Defined Functions (UDFs), were not available. Stored
Procedures were often used in their place. When advantages or disadvantages of User
Defined Functions are discussed, the comparison is usually to Stored Procedures.
One of the advantages of User Defined Functions over Stored Procedures, is the fact
that a UDF can be used in a Select, Where, or Case statement. They also can be used
to create joins. In addition, User Defined Functions are simpler to invoke than Stored
Procedures from inside another SQL statement.
User Defined Functions cannot be used to modify base table information. The DML
statements INSERT, UPDATE, and DELETE cannot be used on base tables. Another
disadvantage is that SQL functions that return non-deterministic values are not allowed
to be called from inside User Defined Functions. GETDATE is an example of a non-
deterministic function. Every time the function is called, a different value is returned.
Therefore, GETDATE cannot be called from inside a UDF you create.
There are three different types of User Defined Functions. Each type refers to the data
being returned by the function. Scalar functions return a single value. Inline Table functions return a single table variable that was created by a select statement. The final
UDF is a Multi-statement Table Function. This function returns a table variable whose
structure was created by hand, similar to a Create Table statement. It is useful when
complex data manipulation inside the function is required.
Scalar UDFs
Our first User Defined Function will accept a date time, and return only the date portion.
Scalar functions return a value. From inside Query Analyzer, enter:
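A sketch of such a function (the name dbo.DateOnly, the varchar(10) return type and the CONVERT style are illustrative choices):

CREATE FUNCTION dbo.DateOnly (@InDateTime datetime)
RETURNS varchar(10)
AS
BEGIN
    DECLARE @MyOutput varchar(10)
    SET @MyOutput = CONVERT(varchar(10), @InDateTime, 101)   -- keep only the date portion
    RETURN @MyOutput
END
GO

-- Example call (note the owner prefix):
SELECT dbo.DateOnly(GETDATE())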
Notice the User Defined Function must be prefaced with the owner name, DBO in this case. In addition, GETDATE can be used as the input parameter, but could not be used inside the function itself. Other built-in SQL functions that cannot be used inside a User Defined Function include RAND, NEWID, @@CONNECTIONS, @@TIMETICKS, and @@PACK_SENT – in general, any built-in function that is non-deterministic.
The statement begins by supplying a function name and input parameter list. In this
case, a date time value will be passed in. The next line defines the type of data the UDF
will return. Between the BEGIN and END block is the statement code. Declaring the
output variable was for clarity only. This function should be shortened to:
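Along these lines (still using the illustrative dbo.DateOnly sketch from above):

CREATE FUNCTION dbo.DateOnly (@InDateTime datetime)
RETURNS varchar(10)
AS
BEGIN
    RETURN CONVERT(varchar(10), @InDateTime, 101)
END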
Inline Table UDFs
These User Defined Functions return a table variable that was created by a single select statement. They are almost like a simply constructed, non-updatable view, but with the benefit of accepting input parameters.
This next function looks up all the employees in the pubs database whose first name starts with a letter that is passed in as a parameter. In Query Analyzer, enter and run:
USE pubs
GO
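The function itself is sketched below; the name LookByFName comes from the surrounding text, and the column choice assumes the pubs employee table's fname column:

CREATE FUNCTION dbo.LookByFName (@FirstLetter char(1))
RETURNS TABLE
AS
RETURN
(
    SELECT *
    FROM employee
    WHERE fname LIKE @FirstLetter + '%'
)
GO

-- Example call:
SELECT * FROM dbo.LookByFName('A')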
All the rows having a first name starting with A were returned. The return is a Table
Variable, not to be confused with a temporary table. Table variables are new in SQL
2000. They are a special data type whose scope is limited to the process that declared
it. Table variables are stated to have performance benefits over temporary tables. None
of my personal testing has found this result though.
Multi-statement Table UDFs
Multi Statement User Defined Functions are very similar to Stored Procedures. They both allow complex logic to take place inside the function. There are a number of restrictions unique to functions, though. The Multi Statement UDF will always return a table variable, and only one table variable. There is no way to return multiple result sets.
In addition, a User Defined Function cannot call a Stored Procedure from inside itself.
They also cannot execute dynamic SQL. Remember also, that UDFs cannot use non-
deterministic built in functions. So GETDATE and RAND cannot be used. Error handling
is restricted. RAISERROR and @@ERROR are invalid from inside User Defined
Functions. Like other programming languages, the purpose of a User Defined Function
is to create a stand-alone code module to be reused over and over by the global
application.
For a Multi Statement test, we will create a modified version of the LookByFName
function. This new function will accept the same input parameter. But rather than return
a table from a simple select, a specific table will be created, and data in it will be
manipulated prior to the return:
-- Header and INSERT reconstructed as a sketch; column names assume the pubs employee table.
CREATE FUNCTION dbo.LookByFNameMulti (@FirstLetter char(1))
RETURNS @Result TABLE (emp_id char(9), fname varchar(20), hire_date datetime, on_probation char(1))
AS
BEGIN
    INSERT INTO @Result (emp_id, fname, hire_date)
        SELECT emp_id, fname, hire_date FROM employee WHERE fname LIKE @FirstLetter + '%'
    UPDATE @Result SET on_probation = 'N'
    UPDATE @Result SET on_probation = 'Y' WHERE hire_date < '01/01/1991'
    RETURN
END
With the new Multi Statement Function, we can manipulate data like a Stored
Procedure, but use it in statement areas like a View.
Conclusion
User Defined Functions offer an excellent way to work with code snippets. The main
requirement is that the function be self-contained. Not being able to use non-
deterministic built in functions is a problem, but if it can be worked around, UDFs will
provide you with a programming plus.
Answer:
Let me explain this through code.
using System;
using System.Windows.Forms;
namespace BaseDerive
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
            BaseClass b = new BaseClass();        // these declarations are assumed from the calls below
            DeriveClass d = new DeriveClass();
            BaseClass bd = new DeriveClass();     // base reference to a derived object
            DeriveClass bd2 = new DeriveClass();
            b.func1();    // "Base Class function 1."
            d.func1();    // derived func1 (the new keyword hides the base version)
            bd.func1();   // "Base Class function 1." - new does not override
            bd2.func2();  // "Derive Class function 2 used override keyword"
        }
    }
    public class BaseClass
    {
        public void func1() { MessageBox.Show("Base Class function 1."); }
        public virtual void func2() { MessageBox.Show("Base Class function 2."); }
    }
    public class DeriveClass : BaseClass
    {
        public new void func1() { MessageBox.Show("Derive Class function 1 uses the new keyword."); }
        public override void func2() { MessageBox.Show("Derive Class function 2 used override keyword"); }
    }
}
This is a Windows application, so all the code for calling the functions through objects is written in the Form1 constructor (form load).
As seen in the above code, I have declared two classes: one works as a base class and the second is a derived class that inherits from the base class.
If we create an object with a notation like BaseClass bd = new DeriveClass(); and make a call to a function which exists in both the base class and the derived class, then it will always make a call to the function of the base class. If we have overridden the method in the derived class (using virtual/override), then it will call the derived class function.
For example…
BaseClass objB = new DeriveClass();
objB.func1(); // Calls the base class function (in the case of the new keyword).
Note:
DeriveClass objB = new BaseClass(); // This will throw a compile-time error. (Casting is required.)
I find this table from MSDN useful for explaining the differences between shadowing and overriding: the main constraint on overriding is that it needs permission from the base class, given with the 'Overridable' (in C#, virtual) keyword. Shadowing does not require the base class's permission.
class A
{
public void foo()
{
Console.WriteLine("A::foo()");
}
public virtual void bar()
{
Console.WriteLine("A::bar()");
}
}
class B : A
{
public new void foo()
{
Console.WriteLine("B::foo()");
}
public override void bar()
{
Console.WriteLine("B::bar()");
}
}
class Program
{
static int Main(string[] args)
{
B b = new B();
A a = b;
a.foo(); // Prints A::foo
b.foo(); // Prints B::foo
a.bar(); // Prints B::bar
b.bar(); // Prints B::bar
return 0;
}
}
virtual / override tells the compiler that the two methods are related and that in some
circumstances when you would think you are calling the first (virtual) method it's actually
correct to call the second (overridden) method instead. This is the foundation of
polymorphism.
new tells the compiler that you are adding a method to a derived class with the same
name as a method in the base class, but they have no relationship to each other.
using System;
using System.Collections.Generic;
using System.Text;
namespace ConsoleApplication2
{
public class ChildOne
{
public virtual int Add()
{
return 1;
}
}
public class ChildTwo : ChildOne
{
public override int Add()
{
return 2;
}
}
public class ChildOneNew
{
public int Add()
{
return 3;
}
}
public class ChildTwoNew : ChildOneNew
{
public new int Add()
{
return 4;
}
}
public class MainClass
{
public static void Main()
{
// Override: the derived version runs even through a base-type reference.
ChildOne c1 = new ChildTwo();
Console.WriteLine(c1.Add()); // prints 2
// New: the base version runs when called through a base-type reference.
ChildOneNew c2 = new ChildTwoNew();
Console.WriteLine(c2.Add()); // prints 3
}
}
}
The virtual keyword makes a method on the base type open to being overridden by any derived type that writes the override keyword before its own version of the method.
So when the method is called through a parent (base) reference, the runtime looks for an override and, if one exists, runs it instead of the base class's virtual method code.
The new keyword, however, creates a new version of the method; in fact it is a completely different method that merely shares the name. So when it is called through a parent reference, even though the object was initialized as a child, this method (public new void SomeOtherMethod()) is a totally separate one, and the parent reference runs the parent's own method.