Friday 22 January 2016

Standard Template Library

C++ Templates:-
-----------------------
Templates are a feature of the C++ programming language that allows functions and classes to operate with generic types. This allows a function or class to work on many different data types without being rewritten for each one.
What are generic functions or classes?
Many times while programming, there is a need to create functions which perform the same operations but work with different data types. C++ therefore provides a feature to create a single generic function, instead of many separate ones, which can work with different data types by using the template parameter.

What is the template parameter?
Just as we use normal parameters to pass values to a function, template parameters can be used to pass types as arguments to a function. Basically, a template parameter tells what type of data is being passed to the function.
The syntax for creating a generic function:
template <class type> return-type function-name(parameter-list)
Here, ‘type’ is just a placeholder for the data type used when this function is instantiated; you can use any other name. The keyword class specifies the generic type in the template declaration; alternatively, the keyword typename can be used instead.
Let’s try to understand it with an example:
Assume we have to swap two variables of int type and two of float type. Then we would have to write two functions: one that swaps int variables and one that swaps float variables. But if we use a generic function, we can write just one function and swap both kinds of variables by passing their types through the arguments. Let’s implement this:

#include <iostream>
using namespace std ;
// creating a generic function ‘swap (parameter-list)’ using template :
template <class X> 
void swap( X &a, X &b) {
    X tp;
    tp = a;
    a = b;
    b = tp;
    cout << " Swapped elements values of a and b are  " << a << " and  " << b << " respectively " << endl;
}

int main( ) {
    int a = 10, b = 20;
    float c = 10.5, d = 20.5;
    ::swap(a, b);        // swaps 'int' elements ('::' avoids ambiguity with std::swap)
    ::swap(c, d);        // swaps 'float' elements
    return 0;
}
 
Output :
Swapped elements values of a and b are  20 and 10 respectively.
Swapped elements values of a and b are  20.5 and 10.5 respectively.
After the generic function has been created, the compiler automatically generates the correct code for the type of data used when the function is called.

C++ STL also provides containers (pre-built data structures) like vectors, lists, pairs, etc. These are all generic classes which can be used to represent a collection of any data type.

Iterator
An iterator is any object that points to some element in a range of elements (such as an array or a container) and has the ability to iterate through those elements using a set of operators (at least the increment (++) and dereference (*) operators).
A pointer is a form of iterator: a pointer can point to elements in an array and can iterate over them using the increment operator (++). There are other kinds of iterators as well. For each container class, an iterator type is defined which can be used to iterate through all the elements of that container.
Example:
For Vector:
vector <int>::iterator  it;
For List:
list <int>::iterator it;
You will see the implementations using iterators in the topics explained below.
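Before that, here is a minimal standalone sketch (an illustration, not part of the original notes) showing that the same ++ / * pattern works both for a raw pointer over an array and for a vector iterator:

#include <iostream>
#include <vector>
using namespace std;

int main() {
    // A raw pointer acts as an iterator over a plain array.
    int a[] = {1, 2, 3};
    for (int *p = a; p != a + 3; ++p)
        cout << *p << ' ';
    cout << endl;

    // A vector provides its own iterator type with the same ++ and * operators.
    vector<int> v(a, a + 3);
    for (vector<int>::iterator it = v.begin(); it != v.end(); ++it)
        cout << *it << ' ';
    cout << endl;
    return 0;
}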

String
C++ provides a powerful alternative to char*. string is not a built-in data type, but a class in the Standard Template Library. The string class provides various string manipulation functions like concatenation, find, replace, etc. Let us see how to construct a string.
string s0;                                    // s0 = ""
string s1("Hello");                           // s1 = "Hello"
string s2(s1);                                // s2 = "Hello"
string s3(s1, 1, 2);                          // s3 = "el"
string s4("Hello World", 5);                  // s4 = "Hello"
string s5(5, '*');                            // s5 = "*****"
string s6(s1.begin(), s1.begin() + 3);        // s6 = "Hel"
Here are some member functions:
append(): Inserts additional characters at the end of the string (can also be done using ‘+’ or ‘+=’ operator). Its time complexity is O(N) where N is the size of the new string.
assign(): Assigns new string by replacing the previous value (can also be done using ‘=’ operator).
at(): Returns the character at a particular position (can also be done using ‘[ ]’ operator). Its time complexity is O(1).
begin(): Returns an iterator pointing to the first character. Its time complexity is O(1).
clear(): Erases all the contents of the string, leaving an empty string (“”) of length zero. Its time complexity is O(1).
compare(): Compares the value of the string with the string passed in the parameter and returns an integer accordingly. Its time complexity is O(N + M) where N is the size of the first string and M is the size of the second string.
copy(): Copies the substring of the string in the string passed as parameter and returns the number of characters copied. Its time complexity is O(N) where N is the size of the copied string.
c_str(): Converts the string into a C-style string (null-terminated string) and returns a pointer to it. Its time complexity is O(1).
empty(): Returns a boolean value, true if the string is empty and false if the string is not empty. Its time complexity is O(1).
end(): Returns an iterator pointing to a position which is next to the last character. Its time complexity is O(1).
erase(): Deletes a substring of the string. Its time complexity is O(N) where N is the size of the new string.
find(): Searches the string and returns the first occurrence of the parameter in the string. Its time complexity is O(N) where N is the size of the string.
insert(): Inserts additional characters into the string at a particular position. Its time complexity is O(N) where N is the size of the new string.
length(): Returns the length of the string. Its time complexity is O(1).
replace(): Replaces the particular portion of the string. Its time complexity is O(N) where N is size of the new string.
resize(): Resizes the string to the new length, which can be less than or greater than the current length. Its time complexity is O(N) where N is the size of the new string.
size(): Returns the length of the string. Its time complexity is O(1).
substr(): Returns a string which is the copy of the substring. Its time complexity is O(N) where N is the size of the substring.
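The implementation below exercises compare(), append() and c_str(). As a complement, here is a small sketch (an illustration, not part of the original notes) of find(), replace(), insert(), substr() and erase():

#include <iostream>
#include <string>
using namespace std;

int main() {
    string s = "HELLO WORLD";

    size_t pos = s.find("WORLD");          // pos = 6
    cout << "'WORLD' found at index " << pos << endl;

    s.replace(pos, 5, "THERE");            // s = "HELLO THERE"
    s.insert(0, ">> ");                    // s = ">> HELLO THERE"
    cout << s << endl;

    cout << s.substr(3, 5) << endl;        // prints "HELLO"

    s.erase(0, 3);                         // s = "HELLO THERE"
    cout << s << " (length " << s.length() << ")" << endl;
    return 0;
}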
Implementation:
#include <iostream>
#include <cstdio>
#include <string>

using namespace std;

int main()
{
    string s, s1;
    s = "HELLO";
    s1 = "HELLO";
    if(s.compare(s1) == 0)
        cout << s << " is equal to " << s1 << endl;
    else
        cout << s << " is not equal to " << s1 << endl;
    s.append(" WORLD!");
    cout << s << endl;
    printf("%s\n", s.c_str());
    if(s.compare(s1) == 0)
        cout << s << " is equal to " << s1 << endl;
    else
        cout << s << " is not equal to " << s1 << endl;
    return 0;
}
Output:
HELLO is equal to HELLO
HELLO WORLD!
HELLO WORLD!
HELLO WORLD! is not equal to HELLO
 
Vector:-
Vectors are sequence containers that have dynamic size; in other words, vectors are dynamic arrays. Just like arrays, vector elements are placed in contiguous storage locations, so they can be accessed and traversed using iterators. To traverse a vector we need the positions of the first and last elements, which we can get through begin() and end(), or we can use indexing from 0 to size() - 1. Let us see how to construct a vector.
vector<int> a;                                // empty vector of ints
vector<int> b (5, 10);                        // five ints with value 10
vector<int> c (b.begin(), b.end());           // copy of b, built from its iterators
vector<int> d (c);                            // copy of c
Some of the member functions of vectors are:
at(): Returns the reference to the element at a particular position (can also be done using ‘[ ]’ operator). Its time complexity is O(1).
back(): Returns the reference to the last element. Its time complexity is O(1).
begin(): Returns an iterator pointing to the first element of the vector. Its time complexity is O(1).
clear(): Deletes all the elements from the vector, leaving it empty. Its time complexity is O(N) where N is the size of the vector.
empty(): Returns a boolean value, true if the vector is empty and false if the vector is not empty. Its time complexity is O(1).
end(): Returns an iterator pointing to a position which is next to the last element of the vector. Its time complexity is O(1).
erase(): Deletes a single element or a range of elements. Its time complexity is O(N + M) where N is the number of the elements erased and M is the number of the elements moved.
front(): Returns the reference to the first element. Its time complexity is O(1).
insert(): Inserts new elements into the vector at a particular position. Its time complexity is O(N + M) where N is the number of elements inserted and M is the number of elements moved.
pop_back(): Removes the last element from the vector. Its time complexity is O(1).
push_back(): Inserts a new element at the end of the vector. Its time complexity is O(1).
resize(): Resizes the vector to the new length which can be less than or greater than the current length. Its time complexity is O(N) where N is the size of the resized vector.
size(): Returns the number of elements in the vector. Its time complexity is O(1).
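The traverse function and implementation below use push_back(), at() and iterators. As a complement, here is a small sketch (an illustration, not part of the original notes) of insert(), erase(), resize(), front() and back():

#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<int> v;
    for (int i = 1; i <= 5; ++i)
        v.push_back(i * 10);                 // v = {10, 20, 30, 40, 50}

    v.insert(v.begin() + 2, 25);             // v = {10, 20, 25, 30, 40, 50}
    v.erase(v.begin());                      // v = {20, 25, 30, 40, 50}
    v.resize(3);                             // v = {20, 25, 30}

    cout << "front = " << v.front() << ", back = " << v.back() << endl;
    for (size_t i = 0; i < v.size(); ++i)
        cout << v[i] << ' ';
    cout << endl;                            // prints: 20 25 30
    return 0;
}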

Traverse:
void traverse(vector<int> v)
{
    vector <int>::iterator it;
    for(it = v.begin();it != v.end();++it)
        cout << *it << ' ';
    cout << endl;
    for(int i = 0;i < v.size();++i)
        cout << v[i] << ' ';
    cout << endl;
}

Implementation:
#include <iostream>
#include <vector>

using namespace std;

int main()
{
    vector <int> v;
    vector <int>::iterator it;
    v.push_back(5);
    while(v.back() > 0)
        v.push_back(v.back() - 1);
    for(it = v.begin(); it != v.end();++it)
        cout << *it << ' ';
    cout << endl;
    for(int i = 0;i < v.size();++i)
        cout << v.at(i) << ' ';
    cout << endl;
    while(!v.empty())
    {
        cout << v.back() << ' ';
        v.pop_back();
    }
    cout << endl;
    return 0;
}
Output:
5 4 3 2 1 0
5 4 3 2 1 0
0 1 2 3 4 5
 
Question 1:- Where does the memory get allocated when we create a vector?
A vector uses the default allocator, which uses 'new' to obtain memory, so by default
the elements live on the heap.
It all depends on what is inside the allocator: if you want the storage to come from the
stack, supply a custom allocator that obtains memory without using 'new'.

Question 2:- How does a vector maintain its dynamic-array characteristics?
Suppose we have created a vector holding 10 elements and now want to store 20 more.
The vector allocates a new, larger block of memory (implementations typically grow the
capacity geometrically), copies or moves the existing 10 elements into it, frees the old
block, and then stores the 20 new elements.
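A small sketch (an illustration; the exact growth factor is implementation-defined, commonly 1.5x or 2x) that makes these reallocations visible by printing capacity() as elements are pushed:

#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<int> v;
    size_t lastCapacity = v.capacity();
    for (int i = 0; i < 40; ++i) {
        v.push_back(i);
        if (v.capacity() != lastCapacity) {
            // A reallocation happened: a bigger block was allocated
            // and the old elements were copied/moved into it.
            cout << "size = " << v.size()
                 << ", capacity grew to " << v.capacity() << endl;
            lastCapacity = v.capacity();
        }
    }
    return 0;
}

On typical implementations the capacity jumps geometrically rather than by one element at a time, which is what keeps push_back() amortized O(1).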



List:-
----------
List is a sequence container which takes constant time to insert and remove elements. List in STL is implemented as a doubly linked list.
Elements of a list cannot be accessed directly by position; for example, to access the element at a particular position you have to iterate from a known position (such as the beginning) to that position.

//declaration
list <int> LI;
Here LI can store elements of int type.
// here LI will have 5 int elements of value 100.
list<int> LI(5, 100);
Some of the member function of List:
begin( ): It returns an iterator pointing to the first element in the list. Its time complexity is O(1).
end( ): It returns an iterator referring to the theoretical element (it doesn’t point to an actual element) which follows the last element. Its time complexity is O(1).
empty( ): It returns whether the list is empty or not. It returns 1 if the list is empty, otherwise 0. Its time complexity is O(1).
assign( ): It assigns new elements to the list, replacing its current elements and changing its size accordingly. Its time complexity is O(N).
back( ): It returns a reference to the last element in the list. Its time complexity is O(1).
erase( ): It removes a single element or a range of elements from the list. Its time complexity is O(N).
front( ): It returns a reference to the first element in the list. Its time complexity is O(1).
push_back( ): It adds a new element at the end of the list, after its current last element. Its time complexity is O(1).
push_front( ): It adds a new element at the beginning of the list, before its current first element. Its time complexity is O(1).
remove( ): It removes all the elements from the list which are equal to the given element. Its time complexity is O(N).
pop_back( ): It removes the last element of the list, thus reducing its size by 1. Its time complexity is O(1).
pop_front( ): It removes the first element of the list, thus reducing its size by 1. Its time complexity is O(1).
insert( ): It inserts new elements in the list before the element at the specified position. Its time complexity is O(N).
reverse( ): It reverses the order of elements in the list. Its time complexity is O(N).
size( ): It returns the number of elements in the list. Its time complexity is O(1).
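The implementation below covers push_back(), push_front(), insert(), reverse(), remove() and the pop functions. As a complement, here is a small sketch (an illustration, not part of the original notes) of assign(), front(), back(), size() and erase():

#include <iostream>
#include <list>
using namespace std;

int main() {
    list<int> LI;
    LI.assign(5, 100);                       // LI = {100, 100, 100, 100, 100}
    cout << "size = " << LI.size() << endl;

    LI.push_back(7);
    LI.push_front(3);                        // LI = {3, 100, 100, 100, 100, 100, 7}
    cout << "front = " << LI.front() << ", back = " << LI.back() << endl;

    // erase() needs an iterator; remove the second element.
    list<int>::iterator it = LI.begin();
    ++it;
    LI.erase(it);                            // LI = {3, 100, 100, 100, 100, 7}
    cout << "size after erase = " << LI.size() << endl;
    return 0;
}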

Implementation:
#include <iostream>
#include <list>
using namespace std;
int main()
{
    list <int> LI;
    list <int>::iterator it;
    //inserts elements at end of list
    LI.push_back(4);
    LI.push_back(5);

    //inserts elements at beginning of list
    LI.push_front(3);
    LI.push_front(5);

    //returns reference to first element of list
    it = LI.begin();

    //inserts 1 before first element of list
    LI.insert(it,1);

    cout<<"All elements of List LI are: " <<endl;
    for(it = LI.begin();it!=LI.end();it++)
    {
        cout<<*it<<" ";
    }
    cout<<endl;

    //reverse elements of list
    LI.reverse();

    cout<<"All elements of List LI are after reversing: " <<endl;
    for(it = LI.begin();it!=LI.end();it++)
    {
         cout<<*it<<" ";
    }
    cout<<endl;

    //removes all occurrences of 5 from list
    LI.remove(5);

    cout<<"Elements after removing all occurrences of 5 from List"<<endl;
    for(it = LI.begin();it!=LI.end();it++)
    {
         cout<<*it<<" ";
    }
    cout<<endl;

    //removes last element from list
    LI.pop_back();
    //removes first element from list
    LI.pop_front();
    return 0;
}
Output:
All elements of List LI are: 
1 5 3 4 5 
All elements of List LI are after reversing: 
5 4 3 5 1 
Elements after removing all occurrences of 5 from List
4 3 1
 
Pair:-
--------
Pair is a container that can be used to bind together two values, which may be of different types. Pair provides a way to store two heterogeneous objects as a single unit.
pair <int, char> p1;                    // default
pair <int, char> p2 (1, 'a');           // value initialization
pair <int, char> p3 (p2);               // copy of p2
We can also initialize a pair using the make_pair() function. make_pair(x, y) returns a pair with the first element set to x and the second element set to y.
p1 = make_pair(2, 'b');
To access the elements we use the members first and second, which hold the first and second element respectively.
cout << p2.first << ' ' << p2.second << endl;
 
Implementation:
#include <iostream>
#include <utility>

using namespace std;

int main()
{
    pair <int, char> p;
    pair <int, char> p1(2, 'b');
    p = make_pair(1, 'a');
    cout << p.first << ' ' <<  p.second << endl;
    cout << p1.first << ' ' << p1.second << endl;
    return 0;
}
Output:
1 a
2 b
 
 
Sets:-
-------
Data structure used: a self-balancing (ordered) binary search tree, typically a red-black tree.

Sets are containers which store only unique values and permit easy lookups. The values in a set are stored in a specific order (ascending by default). Elements can be inserted or deleted, but cannot be modified in place. We can access and traverse set elements using iterators, just like vectors.
set<int> s1;                               // Empty Set
int a[]= {1, 2, 3, 4, 5, 5};
set<int> s2 (a, a + 6);                    // s2 = {1, 2, 3, 4, 5}
set<int> s3 (s2);                          // Copy of s2
set<int> s4 (s3.begin(), s3.end());         // Set created using iterators
Some of the member functions of set are:
begin(): Returns an iterator to the first element of the set. Its time complexity is O(1).
clear(): Deletes all the elements in the set and the set will be empty. Its time complexity is O(N) where N is the size of the set.
count(): Returns 1 or 0 if the element is in the set or not respectively. Its time complexity is O(logN) where N is the size of the set.
empty(): Returns true if the set is empty and false if the set has at least one element. Its time complexity is O(1).
end(): Returns an iterator pointing to a position which is next to the last element. Its time complexity is O(1).
erase(): Deletes a particular element or a range of elements from the set. Its time complexity is O(N) where N is the number of element deleted.
find(): Searches for a particular element and returns the iterator pointing to the element if the element is found otherwise it will return the iterator returned by end(). Its time complexity is O(logN) where N is the size of the set.
insert(): Inserts a new element. Its time complexity is O(logN) where N is the size of the set.
size(): Returns the size of the set or the number of elements in the set. Its time complexity is O(1).
Traverse:
void traverse(set<int> s)
{
    set <int>::iterator it;
    for(it = s.begin();it != s.end();++it)
        cout << *it << ' ';
    cout << endl;
}
Implementation:
#include <iostream>
#include <set>

using namespace std;

int main()
{
    set <int> s;
    set <int>::iterator it;
    int A[] = {3, 5, 2, 1, 5, 4};
    for(int i = 0;i < 6;++i)
        s.insert(A[i]);
    for(it = s.begin();it != s.end();++it)
        cout << *it << ' ';
    cout << endl;
    return 0;
}
Output:
1 2 3 4 5
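Beyond insert() and iteration, here is a small sketch (an illustration, not part of the original notes) of count(), find() and erase():

#include <iostream>
#include <set>
using namespace std;

int main() {
    set<int> s;
    for (int i = 1; i <= 5; ++i)
        s.insert(i * 10);                    // s = {10, 20, 30, 40, 50}

    // count() is 1 if the element is present, 0 otherwise (elements are unique).
    cout << "count(30) = " << s.count(30) << endl;

    set<int>::iterator it = s.find(40);      // iterator to the element, or s.end()
    if (it != s.end())
        s.erase(it);                         // s = {10, 20, 30, 50}

    for (it = s.begin(); it != s.end(); ++it)
        cout << *it << ' ';
    cout << endl;
    return 0;
}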
 
 
Maps:-
---------
Data structure used:- key-value pairs stored in a self-balancing (ordered) binary search tree, typically a red-black tree.

Maps are containers which store elements by mapping a value against a particular key. A map stores combinations of key and mapped value in a specific order, where the keys are used to uniquely identify the elements mapped to them. The data types of the key and the mapped value can be different. Elements in a map are always kept sorted by their key and can be accessed directly by key using the bracket operator ([ ]).
In a map, the key and the mapped value form a pair, i.e. both can be accessed using pair-type functionality (first and second) with the help of iterators.
//declaration. Here key values are of char type and mapped values(value of element) is of int type.
map <char ,int > mp;

mp['b'] = 1;
It will map value 1 with key 'b'. We can directly access 1 by using mp['b'].
mp['a'] = 2;
It will map value 2 with key 'a'.
In map mp, the values will be stored in sorted order according to the key.
Note: N is the number of elements in map.
Some Member Functions of map:
at( ): Returns a reference to the mapped value of the element identified by the key. Its time complexity is O(logN).
count( ): Searches the map for elements with the given key and returns the number of matches. As a map stores each element with a unique key, it returns 1 if a match is found and 0 otherwise. Its time complexity is O(logN).
clear( ): Clears the map by removing all the elements, leaving it with size 0. Its time complexity is O(N).
begin( ): Returns an iterator (explained above) referring to the first element of the map. Its time complexity is O(1).
end( ): Returns an iterator referring to the theoretical element (it doesn’t point to an actual element) which follows the last element. Its time complexity is O(1).
empty( ): Checks whether the map is empty or not; it doesn’t modify the map. It returns 1 if the map is empty, otherwise 0. Its time complexity is O(1).
erase( ): Removes a single element or a range of elements from the map. Its time complexity is O(logN) when erasing by key.
find( ): Searches the map for the element with the given key and returns an iterator to it if it is present; otherwise it returns the iterator returned by end(). Its time complexity is O(logN).
insert( ): Inserts a single element or a range of elements into the map. Its time complexity is O(logN) when only an element is inserted, and amortized O(1) when a correct position hint is also given.
Implementation:
#include <iostream>
#include <map>
using namespace std;
int main(){
    map <char,int> mp;
    map <char,int> mymap,mymap1;

    //insert elements individually in map with the combination of key value and value of element
    mp.insert(pair<char,int>('a',2));   //key is 'a' and 2 is the value.
    mp.insert(pair<char,int>('b',1));
    mp.insert(pair<char,int>('c',43));

    //inserts elements in range using insert() function in map 'mymap'.
    mymap.insert(mp.begin(),mp.end());

    //declaring iterator for map
    map <char,int>::iterator it;

    //using find() function to return reference of element mapped by key 'b'.
    it = mp.find('b');

    //prints key and element's value.
    cout<<"Key and element's value of map are: ";
    cout<<it->first<<" and "<<it->second<<endl;

    //alternative way to insert elements by mapping with their keys.
    mymap1['x'] = 23;
    mymap1['y'] = 21;

    cout<<"Printing element mapped by key 'b' using at() function : "<<mp.at('b')<<endl;

    //swap contents of 2 maps namely mymap and mymap1.
    mymap.swap(mymap1);

    /* prints swapped elements of mymap and mymap1 by iterating over all
       the elements using an iterator. */
    cout<<"Swapped elements and their keys of mymap are: "<<endl;
    for(it=mymap.begin();it!=mymap.end();it++)
    {
        cout<<it->first<<" "<<it->second<<endl;
    }
    cout<<"Swapped elements and their keys of mymap1 are: "<<endl;
    for(it=mymap1.begin();it!=mymap1.end();it++)
    {
        cout<<it->first<<" "<<it->second<<endl;
    }
    //erases element mapped at 'c'.
    mymap1.erase('c');

    //prints all elements of mymap after erasing element at 'c'.

    cout<<"Elements of mymap1 after erasing element at key 'c' : "<<endl;
    for(it=mymap1.begin();it!=mymap1.end();it++)
    {
        cout<<it->first<<" "<<it->second<<endl;
    }

    //erases elements in range from mymap1
    mymap1.erase(mymap1.begin(),mymap1.end());

    cout<<"As mymap1 is empty so empty() function will return 1 : " << mymap1.empty()<<endl;

    //number of elements with key = 'a' in map mp.
    cout<<"Number of elements with key = 'a' in map mp are : "<<mp.count('a')<<endl;

    //if mp is empty then mp.empty() will return true, else false.
    if(mp.empty())
    {
        cout<<"Map is empty"<<endl;
    }
    else
    {
        cout<<"Map is not empty"<<endl;
    }

    return 0;
}
Output:
Key and element's value of map are: b and 1
Printing element mapped by key 'b' using at() function : 1
Swapped elements and their keys of mymap are: 
x 23
y 21
Swapped elements and their keys of mymap1 are: 
a 2
b 1
c 43
Elements of mymap1 after erasing element at key 'c' : 
a 2
b 1
As mymap1 is empty so empty() function will return 1 : 1
Number of elements with key = 'a' in map mp are : 1
Map is not empty
 
Stacks:
Stack is a container which follows the LIFO (Last In First Out) order and the elements are inserted and deleted from one end of the container. The element which is inserted last will be extracted first.
Declaration:
stack <int> s;
Some of the member functions of Stack are:
push( ): Insert element at the top of stack. Its time complexity is O(1).
pop( ): removes element from top of stack. Its time complexity is O(1).
top( ): access the top element of stack. Its time complexity is O(1).
empty( ): checks if the stack is empty or not. Its time complexity is O(1).
size( ): returns the size of stack. Its time complexity is O(1).
Implementation:
#include <iostream>
#include <stack>

using namespace std;
int main( )
{
    stack <int> s;  // declaration of stack

    //inserting 5 elements in stack from 0 to 4.
    for(int i = 0;i < 5; i++)
    {
        s.push( i ) ;
    }

    //Now the stack is {0, 1, 2, 3, 4}

    //size of stack s
    cout<<"Size of stack is: "<<s.size( )<<endl;

    //accessing top element from stack, it will be the last inserted element.
    cout<<"Top element of stack is: "<<s.top( )<<endl;

    //Now deleting all elements from stack 
    for(int i = 0;i < 5;i++)
    {
        s.pop( );
    }

    //Now stack is empty,so empty( ) function will return true.

    if(s.empty())
    {
        cout <<"Stack is empty."<<endl;
    }
    else
    {
        cout <<"Stack is Not empty."<<endl;
    }

    return 0;

}
Output:
Size of stack is: 5
Top element of stack is: 4
Stack is empty.
Queues:
Queue is a container which follows the FIFO (First In First Out) order. Here elements are inserted at one end (rear) and extracted from the other end (front).
Declaration:
queue <int> q;
Some member function of Queues are:
push( ): inserts an element into the queue at one end (rear). Its time complexity is O(1).
pop( ): deletes an element from the other end of the queue (front). Its time complexity is O(1).
front( ): access the element on the front end of queue. Its time complexity is O(1).
empty( ): checks if the queue is empty or not. Its time complexity is O(1).
size( ): returns the size of queue. Its time complexity is O(1).
Implementation:
#include <iostream>
#include <cstdio>
#include <queue>

using namespace std;

int main() {
    char qu[4] = {'a', 'b', 'c', 'd'};        
    queue <char> q;
    int N = 3;                            // Number of steps
    char ch;
    for(int i = 0;i < 4;++i)
        q.push(qu[i]);
    // rotate the queue N times: move the front element to the back
    for(int i = 0;i < N;++i) {
        ch = q.front();
        q.push(ch);
        q.pop();
    }
    while(!q.empty()) {
        printf("%c", q.front());
        q.pop();
    }
    printf("\n");
    return 0;
}
Output:
dabc
Priority Queue:
A priority queue is a container that provides constant-time lookup of the largest element, at the expense of logarithmic insertion and extraction. It is typically implemented as a heap, to which we can add an element at any time but from which only the maximum element can be retrieved. In a priority queue, an element with high priority is served before an element with low priority.
Declaration:
priority_queue<int> pq;
Some member functions of priority queues are:
empty(): Returns true if the priority queue is empty and false if the priority queue has at least one element. Its time complexity is O(1).
pop(): Removes the largest element from the priority queue. Its time complexity is O(logN) where N is the size of the priority queue.
push(): Inserts a new element in the priority queue. Its time complexity is O(logN) where N is the size of the priority queue.
size(): Returns the number of element in the priority queue. Its time complexity is O(1).
top(): Returns a reference to the largest element in the priority queue. Its time complexity is O(1).
Implementation:
#include <iostream>
#include <queue>

using namespace std;

int main()
{
    priority_queue<int> pq;
    pq.push(10);
    pq.push(20);
    pq.push(5);
    while(!pq.empty())
    {
        cout << pq.top() << endl;
        pq.pop();
    }
    return 0;
}
Output:
20
10
5
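By default, priority_queue is a max-heap. As a small extension of the example above (a sketch, not part of the original notes), passing std::greater<int> as the comparator turns it into a min-heap, so top() returns the smallest element:

#include <iostream>
#include <queue>
#include <vector>
#include <functional>
using namespace std;

int main()
{
    // greater<int> reverses the ordering: smallest element has highest priority.
    priority_queue<int, vector<int>, greater<int> > pq;
    pq.push(10);
    pq.push(20);
    pq.push(5);
    while(!pq.empty())
    {
        cout << pq.top() << endl;   // prints 5, 10, 20
        pq.pop();
    }
    return 0;
}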

Wednesday 13 January 2016

Interview Preparation Material



As a software developer, you certainly have a high-level picture of how web apps work and what kinds of technologies are involved: the browser, HTTP, HTML, web server, request handlers, and so on.
In this article, we will take a deeper look at the sequence of events that take place when you visit a URL.
1. You enter a URL into the browser
It all starts here:
2. The browser looks up the IP address for the domain name
The first step in the navigation is to figure out the IP address for the visited domain. The DNS lookup proceeds as follows:
  • Browser cache – The browser caches DNS records for some time. Interestingly, the OS does not tell the browser the time-to-live for each DNS record, and so the browser caches them for a fixed duration (varies between browsers, 2 – 30 minutes).
  • OS cache – If the browser cache does not contain the desired record, the browser makes a system call (gethostbyname in Windows). The OS has its own cache.
  • Router cache – The request continues on to your router, which typically has its own DNS cache.
  • ISP DNS cache – The next place checked is the cache of your ISP’s DNS server. With a cache, naturally.
  • Recursive search – Your ISP’s DNS server begins a recursive search, from the root nameserver, through the .com top-level nameserver, to Facebook’s nameserver. Normally, the DNS server will have names of the .com nameservers in cache, and so a hit to the root nameserver will not be necessary.
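As a side illustration (a hedged POSIX sketch, not part of the original article; it assumes Linux or macOS), the lookup the browser ultimately triggers can be reproduced with getaddrinfo(); facebook.com is just the article's example host:

#include <cstdio>
#include <cstring>
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main() {
    struct addrinfo hints, *res;
    std::memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;          // IPv4 only, for simplicity
    hints.ai_socktype = SOCK_STREAM;

    // Resolve the host name; this consults the caches and resolvers described above.
    if (getaddrinfo("facebook.com", "http", &hints, &res) != 0)
        return 1;

    // A single name may resolve to several addresses (round-robin DNS).
    for (struct addrinfo *p = res; p != NULL; p = p->ai_next) {
        char ip[INET_ADDRSTRLEN];
        struct sockaddr_in *addr = (struct sockaddr_in *)p->ai_addr;
        inet_ntop(AF_INET, &addr->sin_addr, ip, sizeof(ip));
        std::printf("%s\n", ip);
    }
    freeaddrinfo(res);
    return 0;
}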
Here is a diagram of what a recursive DNS search looks like:
[Diagram: an example of recursive DNS resolution]
One worrying thing about DNS is that the entire domain like wikipedia.org or facebook.com seems to map to a single IP address. Fortunately, there are ways of mitigating the bottleneck:
  • Round-robin DNS is a solution where the DNS lookup returns multiple IP addresses, rather than just one. For example, facebook.com actually maps to four IP addresses.
  • Load-balancer is the piece of hardware that listens on a particular IP address and forwards the requests to other servers. Major sites will typically use expensive high-performance load balancers.
  • Geographic DNS improves scalability by mapping a domain name to different IP addresses, depending on the client’s geographic location. This is great for hosting static content so that different servers don’t have to update shared state.
  • Anycast is a routing technique where a single IP address maps to multiple physical servers. Unfortunately, anycast does not fit well with TCP and is rarely used in that scenario.
Most of the DNS servers themselves use anycast to achieve high availability and low latency of the DNS lookups.
3. The browser sends an HTTP request to the web server
You can be pretty sure that Facebook’s homepage will not be served from the browser cache because dynamic pages expire either very quickly or immediately (expiry date set to past).
So, the browser will send this request to the Facebook server:
GET http://facebook.com/ HTTP/1.1
Accept: application/x-ms-application, image/jpeg, application/xaml+xml, [...]
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; [...]
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
Host: facebook.com
Cookie: datr=1265876274-[...]; locale=en_US; lsd=WW[...]; c_user=2101[...]
The GET request names the URL to fetch: “http://facebook.com/”. The browser identifies itself (User-Agent header), and states what types of responses it will accept (Accept and Accept-Encoding headers). The Connection header asks the server to keep the TCP connection open for further requests.
The request also contains the cookies that the browser has for this domain. As you probably already know, cookies are key-value pairs that track the state of a web site in between different page requests. And so the cookies store the name of the logged-in user, a secret number that was assigned to the user by the server, some of user’s settings, etc. The cookies will be stored in a text file on the client, and sent to the server with every request.
There is a variety of tools that let you view the raw HTTP requests and corresponding responses. My favorite tool for viewing the raw HTTP traffic is Fiddler, but there are many other tools (e.g., Firebug). These tools are a great help when optimizing a site.
In addition to GET requests, another type of requests that you may be familiar with is a POST request, typically used to submit forms. A GET request sends its parameters via the URL (e.g.: http://robozzle.com/puzzle.aspx?id=85). A POST request sends its parameters in the request body, just under the headers.
The trailing slash in the URL “http://facebook.com/” is important. In this case, the browser can safely add the slash. For URLs of the form http://example.com/folderOrFile, the browser cannot automatically add a slash, because it is not clear whether folderOrFile is a folder or a file. In such cases, the browser will visit the URL without the slash, and the server will respond with a redirect, resulting in an unnecessary roundtrip.
4. The facebook server responds with a permanent redirect
This is the response that the Facebook server sent back to the browser request:
HTTP/1.1 301 Moved Permanently
Cache-Control: private, no-store, no-cache, must-revalidate, post-check=0,
      pre-check=0
Expires: Sat, 01 Jan 2000 00:00:00 GMT
Location: http://www.facebook.com/
P3P: CP="DSP LAW"
Pragma: no-cache
Set-Cookie: made_write_conn=deleted; expires=Thu, 12-Feb-2009 05:09:50 GMT;
      path=/; domain=.facebook.com; httponly
Content-Type: text/html; charset=utf-8
X-Cnection: close
Date: Fri, 12 Feb 2010 05:09:51 GMT
Content-Length: 0
The server responded with a 301 Moved Permanently response to tell the browser to go to “http://www.facebook.com/” instead of “http://facebook.com/”.
There are interesting reasons why the server insists on the redirect instead of immediately responding with the web page that the user wants to see.
One reason has to do with search engine rankings. See, if there are two URLs for the same page, say http://www.igoro.com/ and http://igoro.com/, search engine may consider them to be two different sites, each with fewer incoming links and thus a lower ranking. Search engines understand permanent redirects (301), and will combine the incoming links from both sources into a single ranking.
Also, multiple URLs for the same content are not cache-friendly. When a piece of content has multiple names, it will potentially appear multiple times in caches.
5. The browser follows the redirect
The browser now knows that “http://www.facebook.com/” is the correct URL to go to, and so it sends out another GET request:
GET http://www.facebook.com/ HTTP/1.1
Accept: application/x-ms-application, image/jpeg, application/xaml+xml, [...]
Accept-Language: en-US
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; [...]
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
Cookie: lsd=XW[...]; c_user=21[...]; x-referer=[...]
Host: www.facebook.com
The meaning of the headers is the same as for the first request.
6. The server ‘handles’ the request
The server will receive the GET request, process it, and send back a response.
This may seem like a straightforward task, but in fact there is a lot of interesting stuff that happens here – even on a simple site like my blog, let alone on a massively scalable site like facebook.
  • Web server software
    The web server software (e.g., IIS or Apache) receives the HTTP request and decides which request handler should be executed to handle this request. A request handler is a program (in ASP.NET, PHP, Ruby, …) that reads the request and generates the HTML for the response.
In the simplest case, the request handlers can be stored in a file hierarchy whose structure mirrors the URL structure, and so for example http://example.com/folder1/page1.aspx URL will map to file /httpdocs/folder1/page1.aspx. The web server software can also be configured so that URLs are manually mapped to request handlers, and so the public URL of page1.aspx could be http://example.com/folder1/page1.
  • Request handler
    The request handler reads the request, its parameters, and cookies. It will read and possibly update some data stored on the server. Then, the request handler will generate a HTML response.
One interesting difficulty that every dynamic website faces is how to store data. Smaller sites will often have a single SQL database to store their data, but sites that store a large amount of data and/or have many visitors have to find a way to split the database across multiple machines. Solutions include sharding (splitting up a table across multiple databases based on the primary key), replication, and usage of simplified databases with weakened consistency semantics.
One technique to keep data updates cheap is to defer some of the work to a batch job. For example, Facebook has to update the newsfeed in a timely fashion, but the data backing the “People you may know” feature may only need to be updated nightly (my guess, I don’t actually know how they implement this feature). Batch job updates result in staleness of some less important data, but can make data updates much faster and simpler.
7. The server sends back an HTML response
Here is the response that the server generated and sent back:
HTTP/1.1 200 OK
Cache-Control: private, no-store, no-cache, must-revalidate, post-check=0,
    pre-check=0
Expires: Sat, 01 Jan 2000 00:00:00 GMT
P3P: CP="DSP LAW"
Pragma: no-cache
Content-Encoding: gzip
Content-Type: text/html; charset=utf-8
X-Cnection: close
Transfer-Encoding: chunked
Date: Fri, 12 Feb 2010 09:05:55 GMT

2b3
��������Tn@����[...]
The entire response is 36 kB, the bulk of them in the byte blob at the end that I trimmed.
The Content-Encoding header tells the browser that the response body is compressed using the gzip algorithm. After decompressing the blob, you’ll see the HTML you’d expect:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"   
      "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en"
      lang="en" id="facebook" class=" no_js">
<head>
<meta http-equiv="Content-type" content="text/html; charset=utf-8" />
<meta http-equiv="Content-language" content="en" />
...
In addition to compression, headers specify whether and how to cache the page, any cookies to set (none in this response), privacy information, etc.
Notice the header that sets Content-Type to text/html. The header instructs the browser to render the response content as HTML, instead of say downloading it as a file. The browser will use the header to decide how to interpret the response, but will consider other factors as well, such as the extension of the URL.
8. The browser begins rendering the HTML
Even before the browser has received the entire HTML document, it begins rendering the website:
9. The browser sends requests for objects embedded in HTML
As the browser renders the HTML, it will notice tags that require fetching of other URLs. The browser will send a GET request to retrieve each of these files.
Here are a few URLs that my visit to facebook.com retrieved:
  • Images
    http://static.ak.fbcdn.net/rsrc.php/z12E0/hash/8q2anwu7.gif
    http://static.ak.fbcdn.net/rsrc.php/zBS5C/hash/7hwy7at6.gif
  • CSS style sheets
    http://static.ak.fbcdn.net/rsrc.php/z448Z/hash/2plh8s4n.css
    http://static.ak.fbcdn.net/rsrc.php/zANE1/hash/cvtutcee.css
  • JavaScript files
    http://static.ak.fbcdn.net/rsrc.php/zEMOA/hash/c8yzb6ub.js
    http://static.ak.fbcdn.net/rsrc.php/z6R9L/hash/cq2lgbs8.js
Each of these URLs will go through a process similar to what the HTML page went through. So, the browser will look up the domain name in DNS, send a request to the URL, follow redirects, etc.
However, static files – unlike dynamic pages – allow the browser to cache them. Some of the files may be served up from cache, without contacting the server at all. The browser knows how long to cache a particular file because the response that returned the file contained an Expires header. Additionally, each response may also contain an ETag header that works like a version number – if the browser sees an ETag for a version of the file it already has, it can stop the transfer immediately.
Can you guess what “fbcdn.net” in the URLs stands for? A safe bet is that it means “Facebook content delivery network”. Facebook uses a content delivery network (CDN) to distribute static content – images, style sheets, and JavaScript files. So, the files will be copied to many machines across the globe.
Static content often represents the bulk of the bandwidth of a site, and can be easily replicated across a CDN. Often, sites will use a third-party CDN provider, instead of operating a CDN themselves. For example, Facebook’s static files are hosted by Akamai, the largest CDN provider.
As a demonstration, when you try to ping static.ak.fbcdn.net, you will get a response from an akamai.net server. Also, interestingly, if you ping the URL a couple of times, you may get responses from different servers, which demonstrates the load-balancing that happens behind the scenes.
10. The browser sends further asynchronous (AJAX) requests
In the spirit of Web 2.0, the client continues to communicate with the server even after the page is rendered.
For example, Facebook chat will continue to update the list of your logged in friends as they come and go. To update the list of your logged-in friends, the JavaScript executing in your browser has to send an asynchronous request to the server. The asynchronous request is a programmatically constructed GET or POST request that goes to a special URL. In the Facebook example, the client sends a POST request to http://www.facebook.com/ajax/chat/buddy_list.php to fetch the list of your friends who are online.
This pattern is sometimes referred to as “AJAX”, which stands for “Asynchronous JavaScript And XML”, even though there is no particular reason why the server has to format the response as XML. For example, Facebook returns snippets of JavaScript code in response to asynchronous requests.
Among other things, the fiddler tool lets you view the asynchronous requests sent by your browser. In fact, not only you can observe the requests passively, but you can also modify and resend them. The fact that it is this easy to “spoof” AJAX requests causes a lot of grief to developers of online games with scoreboards. (Obviously, please don’t cheat that way.)
Facebook chat provides an example of an interesting problem with AJAX: pushing data from server to client. Since HTTP is a request-response protocol, the chat server cannot push new messages to the client. Instead, the client has to poll the server every few seconds to see if any new messages arrived.
Long polling is an interesting technique to decrease the load on the server in these types of scenarios. If the server does not have any new messages when polled, it simply does not send a response back. And, if a message for this client is received within the timeout period, the server will find the outstanding request and return the message with the response.
Another answer:-
In an extremely rough and simplified sketch, assuming the simplest possible HTTP request, no proxies, IPv4 and no problems in any step:
1.     browser checks cache; if requested object is in cache and is fresh, skip to #9
2.     browser asks OS for server's IP address
3.     OS makes a DNS lookup and replies the IP address to the browser
4.     browser opens a TCP connection to server (this step is much more complex with HTTPS)
5.     browser sends the HTTP request through TCP connection
6.     browser receives HTTP response and may close the TCP connection, or reuse it for another request
7.     browser checks if the response is a redirect or a conditional response (3xx result status codes), authorization request (401), error (4xx and 5xx), etc.; these are handled differently from normal responses (2xx)
8.     if cacheable, response is stored in cache
9.     browser decodes response (e.g. if it's gzipped)
  10.   browser determines what to do with response (e.g. is it an HTML page, is it an image, is it a sound clip?)
11.   browser renders response, or offers a download dialog for unrecognized types
Again, discussion of each of these points has filled countless pages; take this only as a short summary. Also, there are many other things happening in parallel to this (processing typed-in address, speculative prefetching, adding page to browser history, displaying progress to user, notifying plug-in and extensions, rendering the page while it's downloading, pipelining, connection tracking for keep-alive, checking for malicious content etc.) - and the whole operation gets an order of magnitude more complex with HTTPS (certificates and ciphers and pinning, oh my!).


===========================================

What is a process and process table? What are different states of process
---------------------------------------------------------------------------
A process is an instance of a program in execution. For example, a web browser is a process, and a shell (or command prompt) is a process.
The operating system is responsible for managing all the processes that are running on a computer and allocates each process a certain amount of time to use the processor. In addition, the operating system also allocates various other resources that processes will need, such as computer memory or disks. To keep track of the state of all the processes, the operating system maintains a table known as the process table. In this table, every process is listed along with the resources it is using and its current state.
Processes can be in one of three states: running, ready, or waiting. The running state means that the process has all the resources it needs for execution and has been given permission by the operating system to use the processor. Only one process can be in the running state at any given time (per processor). The remaining processes are either in a waiting state (i.e., waiting for some external event to occur, such as user input or a disk access) or a ready state (i.e., waiting for permission to use the processor). In a real operating system, the waiting and ready states are implemented as queues which hold the processes in these states. (Source: http://courses.cs.vt.edu/csonline/OS/Lessons/Processes/index.html, which also shows an animation of the process life cycle.)
===========================================================================================

What is a Thread? What are the differences between process and thread?
----------------------------------------------------------------------
A thread is a single sequential stream of execution within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Threads are a popular way to improve applications through parallelism. For example, in a browser, multiple tabs can be different threads. MS Word uses multiple threads: one thread to format the text, another thread to process inputs, etc.
A thread has its own program counter (PC), register set, and stack space. Threads are not independent of one another the way processes are; as a result, threads share with other threads their code section, data section, and OS resources like open files and signals. See http://www.personal.kent.edu/~rmuhamma/OpSystems/Myos/threads.htm for more details.
=======================================================================
What is deadlock?
Deadlock is a situation when two or more processes wait for each other to finish and none of them ever does. Consider an example where two trains are coming toward each other on the same track and there is only one track: neither train can move once they are in front of each other. A similar situation occurs in operating systems when two or more processes each hold some resources and wait for resources held by the other(s).
=================================================================
What are the necessary conditions for deadlock?
--------------------------------------------------
Mutual Exclusion: There is a resource that cannot be shared.
Hold and Wait: A process is holding at least one resource and waiting for another resource which is held by some other process.
No Preemption: The operating system is not allowed to take a resource back from a process until the process gives it back.
Circular Wait: A set of processes are waiting for each other in a circular chain.
==================================================================
What is Virtual Memory? How is it implemented?
----------------------------------------------
Virtual memory creates an illusion that each user has one or more contiguous address spaces, each beginning at address zero. The size of such a virtual address space is generally very large.
The idea of virtual memory is to use disk space to extend the RAM. Running processes don’t need to care whether the memory comes from RAM or disk. The illusion of such a large amount of memory is created by subdividing the virtual memory into smaller pieces (pages), which can be loaded into physical memory whenever they are needed by a process.
==================================================================
What is Thrashing?
----------------------
Thrashing is a situation when the performance of a computer degrades or collapses. Thrashing occurs when a system spends more time servicing page faults than executing transactions. While some page-fault handling is necessary in order to realize the benefits of virtual memory, thrashing has a negative effect on the system. As the page-fault rate increases, more transactions need processing from the paging device. The queue at the paging device grows, resulting in increased service time for a page fault. (Source: http://cs.gmu.edu/cne/modules/vm/blue/thrash.html)
==================================================================
What is Belady’s Anomaly?
------------------------
Bélády’s anomaly is an anomaly with some page replacement policies where increasing the number of page frames results in an increase in the number of page faults. It can occur when the First In First Out (FIFO) page replacement policy is used. See the wiki page for an example and more details.


====================================
Mutex vs. Semaphore, what is the difference?

The Toilet Example  (c) Copyright 2005, Niclas Winquist ;)

Mutex:

Is a key to a toilet. One person can have the key - occupy the toilet - at a time. When finished, the person gives (frees) the key to the next person in the queue.

Officially: "Mutexes are typically used to serialise access to a section of  re-entrant code that cannot be executed concurrently by more than one thread. A mutex object only allows one thread into a controlled section, forcing other threads which attempt to gain access to that section to wait until the first thread has exited from that section."
Ref: Symbian Developer Library

(A mutex is really a semaphore with value 1.)


Semaphore:

Is the number of free identical toilet keys. For example, say we have four toilets with identical locks and keys. The semaphore count - the count of keys - is set to 4 at the beginning (all four toilets are free), then the count value is decremented as people come in. If all toilets are full, i.e. there are no free keys left, the semaphore count is 0. Now, when e.g. one person leaves the toilet, the semaphore is increased to 1 (one free key), and given to the next person in the queue.

Officially: "A semaphore restricts the number of simultaneous users of a shared resource up to a maximum number. Threads can request access to the resource (decrementing the semaphore), and can signal that they have finished using the resource (incrementing the semaphore)."
Ref: Symbian Developer Library

========================================
Semaphores and mutexes provide synchronization services. In concurrent programming, when two or more threads try to access a critical section, we need synchronization between them. This is required to avoid race conditions. Critical section and race condition are explained below:

critical section:

critical section is a piece of code that accesses a shared resource (data structure or device) that must not be concurrently accessed by more than one thread of execution. A critical section will usually terminate in fixed time, and a thread, task or process will have to wait a fixed time to enter it (aka bounded waiting). Some synchronization mechanism is required at the entry and exit of the critical section to ensure exclusive use, for example a semaphore.

Race condition:

A race condition is any situation in which the combined outcome of two or more threads of execution varies depending on the precise order in which the interleaved instructions of each are executed on the processor. For example, suppose you have two threads of execution in which one regularly increments a global variable (g_counter += 1;) and the other very occasionally zeroes it (g_counter = 0;). There is a race condition here if the increment cannot always be executed atomically (in other words, in a single instruction cycle).
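Here is a minimal C++11 sketch (an illustration, not from the quoted answer) in the spirit of the g_counter example: two threads increment a shared counter, and a mutex turns the increment into a properly protected critical section.

#include <iostream>
#include <thread>
#include <mutex>

long g_counter = 0;
std::mutex g_mutex;        // protects g_counter

void incrementMany() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(g_mutex);  // enter the critical section
        g_counter += 1;                             // now effectively atomic
    }
}

int main() {
    std::thread t1(incrementMany);
    std::thread t2(incrementMany);
    t1.join();
    t2.join();
    // Without the mutex the final value would often be less than 200000,
    // because the two increments can interleave (a race condition).
    std::cout << g_counter << std::endl;
    return 0;
}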

Mutex:

Mutual exclusion lock: blocks access to variables by other threads. This enforces exclusive access by a thread to a variable or set of variables. A mutex is a locking mechanism used to synchronize access to a resource. Only one task (which can be a thread or process, depending on the OS abstraction) can acquire the mutex; this means there is ownership associated with a mutex, and only the owner can release the lock (mutex).

Semaphore:

A semaphore is a signaling mechanism (an “I am done, you can carry on” kind of signal). Think of it as an unsigned integer you can increment and decrement, but if you try to decrement it below zero, it doesn’t wrap around like normal unsigned integers do; instead it blocks the thread until another one increments it again. Semaphores usually have a maximum value; incrementing beyond this value has no effect. Decrementing the semaphore is called acquiring or locking it, incrementing is called releasing or unlocking.

Myth:

A couple of articles say that "binary semaphore and mutex are the same" or "a semaphore with value 1 is a mutex", but the basic difference is that a mutex can be released only by the thread that acquired it, while a semaphore can be signaled from any other thread.

=======================================

Difference Between Mutex vs Semaphore
----------------------------------------
 This is one of the most famous questions of Microsoft interviews and engineering vivas, and one must know a bit of the internals as well.

Mutex: These are typically used to serialize access to a section of  re-entrant code that cannot be executed concurrently by more than one thread. A mutex object only allows one thread into a controlled section, forcing other threads which attempt to gain access to that section to wait until the first thread has exited from that section.


It is a locking mechanism used to synchronize access to a resource. Only one task/thread can acquire the mutex. This means there is ownership associated with a mutex, and only the owner can release the lock (mutex); locking it again from the same thread, unless it is a recursive mutex, will result in a deadlock.

Famous toilet example for a mutex: it is a key to a toilet. One person can have the key - occupy the toilet - at a time. When finished, the person gives (frees) the key to the next person in the queue.



Semaphore: This  restricts the number of simultaneous users of a shared resource up to a maximum number. Threads can request access to the resource (decrementing the semaphore), and can signal that they have finished using the resource.

It is a signaling mechanism (an “I am done, you can carry on” kind of signal). In case another thread wants to use the resource, an interrupt will be triggered, upon which an interrupt service routine (ISR) will signal the call-processing task to wake up.

Famous toilet example for a semaphore: say we have four toilets with identical locks and keys. The semaphore count - the count of keys - is set to 4 at the beginning (all four toilets are free), then the count value is decremented as people come in. If all toilets are full, i.e. there are no free keys left, the semaphore count is 0. Now, when e.g. one person leaves the toilet, the semaphore is increased to 1 (one free key), and given to the next person in the queue.



Key Points:
A thread can acquire more than one lock (Mutex).
A mutex can be locked more than once only if it is a recursive mutex; lock and unlock calls for such a mutex must be balanced.
If a thread which has already locked a (non-recursive) mutex tries to lock it again, it will enter the waiting list of that mutex, which results in a deadlock.
Binary semaphore and mutex are similar but not the same.
A mutex is a costly operation due to the protection protocols associated with it.
The main aim of a mutex is to achieve atomic access to, or a lock on, a resource.

Myth:

A couple of articles say that "binary semaphore and mutex are the same" or "a semaphore with value 1 is a mutex", but the basic difference is that a mutex can be released only by the thread that acquired it, while a semaphore can be signaled from any other thread.
=======================================================

Mutex vs Semaphore
What are the differences between Mutex vs Semaphore? When to use mutex and when to use semaphore?

Concrete understanding of Operating System concepts is required to design/develop smart applications. Our objective is to educate  the reader on these concepts and learn from other expert geeks.

As per operating system terminology, mutexes and semaphores are kernel resources that provide synchronization services (also called synchronization primitives). Why do we need such synchronization primitives? Wouldn't only one be sufficient? To answer these questions, we need to understand a few keywords. Please read the posts on atomicity and critical section. We will illustrate with examples to understand these concepts well, rather than following the usual OS textbook description.

The producer-consumer problem:

Note that this is a generalized explanation; practical details vary between implementations.

Consider the standard producer-consumer problem. Assume we have a buffer of 4096 bytes. A producer thread collects data and writes it to the buffer. A consumer thread processes the collected data from the buffer. The objective is that both threads should not operate on the buffer at the same time.

Using Mutex:

A mutex provides mutual exclusion: either the producer or the consumer can hold the key (mutex) and proceed with its work. While the buffer is being filled by the producer, the consumer needs to wait, and vice versa.

At any point of time, only one thread can work with the entire buffer. The concept can be generalized using semaphore.

Using Semaphore:

A semaphore is a generalized mutex. Instead of a single buffer, we can split the 4 KB buffer into four 1 KB buffers (identical resources). A semaphore can be associated with these four buffers. The consumer and producer can then work on different buffers at the same time.

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The Producer/Consumer Problem:-
-----------------------------------
This problem is one of a small collection of standard, well-known problems in concurrent programming: a finite-size buffer and two classes of threads, producers and consumers; producers put items into the buffer and consumers take items out of it.

A producer must wait until the buffer has space before it can put something in, and a consumer must wait until something is in the buffer before it can take something out.

A condition variable represents a queue of threads waiting for some condition to be signaled.

Example 4-11 has two such queues, one (less) for producers waiting for a slot in the buffer, and the other (more) for consumers waiting for a buffer slot containing information. The example also has a mutex, as the data structure describing the buffer must be accessed by only one thread at a time.

Example 4-11 The Producer/Consumer Problem and Condition Variables

typedef struct {
    char buf[BSIZE];          /* the circular buffer itself */
    int occupied;             /* number of slots currently filled */
    int nextin;               /* index of the next slot to write */
    int nextout;              /* index of the next slot to read */
    pthread_mutex_t mutex;    /* protects all of the fields above */
    pthread_cond_t more;      /* signaled when the buffer gains an item */
    pthread_cond_t less;      /* signaled when the buffer gains free space */
} buffer_t;

buffer_t buffer;

As Example 4-12 shows, the producer thread acquires the mutex protecting the buffer data structure and then makes certain that space is available for the item being produced. If not, it calls pthread_cond_wait(), which causes it to join the queue of threads waiting for the condition less, representing there is room in the buffer, to be signaled.

At the same time, as part of the call to pthread_cond_wait(), the thread releases its lock on the mutex. The waiting producer threads depend on consumer threads to signal when the condition is true (as shown in Example 4-12). When the condition is signaled, the first thread waiting on less is awakened. However, before the thread can return from pthread_cond_wait(), it must acquire the lock on the mutex again.

This ensures that it again has mutually exclusive access to the buffer data structure. The thread then must check that there really is room available in the buffer; if so, it puts its item into the next available slot.

At the same time, consumer threads might be waiting for items to appear in the buffer. These threads are waiting on the condition variable more. A producer thread, having just deposited something in the buffer, calls pthread_cond_signal() to wake up the next waiting consumer. (If there are no waiting consumers, this call has no effect.)

Finally, the producer thread unlocks the mutex, allowing other threads to operate on the buffer data structure.
Example 4-12 The Producer/Consumer Problem--the Producer

void producer(buffer_t *b, char item)
{
    pthread_mutex_lock(&b->mutex);
  
    while (b->occupied >= BSIZE)
        pthread_cond_wait(&b->less, &b->mutex);

    assert(b->occupied < BSIZE);

    b->buf[b->nextin++] = item;

    b->nextin %= BSIZE;
    b->occupied++;

    /* now: either b->occupied < BSIZE and b->nextin is the index
       of the next empty slot in the buffer, or
       b->occupied == BSIZE and b->nextin is the index of the
       next (occupied) slot that will be emptied by a consumer
       (such as b->nextin == b->nextout) */

    pthread_cond_signal(&b->more); /* wake a thread blocked on the condition variable b->more, i.e. a consumer waiting to read from the buffer */

    pthread_mutex_unlock(&b->mutex);
}

Note the use of the assert() statement; unless the code is compiled with NDEBUG defined, assert() does nothing when its argument evaluates to true (that is, nonzero), but causes the program to abort if the argument evaluates to false (zero). Such assertions are especially useful in multithreaded programs--they immediately point out runtime problems if they fail, and they have the additional effect of being useful comments.

The comment that begins /* now: either b->occupied ... could better be expressed as an assertion, but it is too complicated as a Boolean-valued expression and so is given in English.

Both the assertion and the comments are examples of invariants. These are logical statements that should not be falsified by the execution of the program, except during brief moments when a thread is modifying some of the program variables mentioned in the invariant. (An assertion, of course, should be true whenever any thread executes it.)

Using invariants is an extremely useful technique. Even if they are not stated in the program text, think in terms of invariants when you analyze a program.

The invariant in the producer code that is expressed as a comment is always true whenever a thread is in the part of the code where the comment appears. If you move this comment to just after the mutex_unlock(), this does not necessarily remain true. If you move this comment to just after the assert(), this is still true.

The point is that this invariant expresses a property that is true at all times, except when either a producer or a consumer is changing the state of the buffer. While a thread is operating on the buffer (under the protection of a mutex), it might temporarily falsify the invariant. However, once the thread is finished, the invariant should be true again.

Example 4-13 shows the code for the consumer. Its flow is symmetric with that of the producer.
Example 4-13 The Producer/Consumer Problem--the Consumer

char consumer(buffer_t *b)
{
    char item;
    pthread_mutex_lock(&b->mutex);
    while(b->occupied <= 0)
        pthread_cond_wait(&b->more, &b->mutex);

    assert(b->occupied > 0);

    item = b->buf[b->nextout++];
    b->nextout %= BSIZE;
    b->occupied--;

    /* now: either b->occupied > 0 and b->nextout is the index
       of the next occupied slot in the buffer, or
       b->occupied == 0 and b->nextout is the index of the next
       (empty) slot that will be filled by a producer (such as
       b->nextout == b->nextin) */

    pthread_cond_signal(&b->less); /* wake a thread blocked on b->less, i.e. a producer waiting for free space to write into the buffer */
    pthread_mutex_unlock(&b->mutex);

    return(item);
}


---------------------------------------------------------------------------------------------------------------------------------------------------




Misconception:

There is an ambiguity between binary semaphore and mutex. We might have come across the claim that a mutex is a binary semaphore. But it is not! The purposes of mutex and semaphore are different. Perhaps, due to the similarity in their implementations, a mutex is sometimes referred to as a binary semaphore.

Strictly speaking, a mutex is a locking mechanism used to synchronize access to a resource. Only one task (which can be a thread or a process, depending on the OS abstraction) can acquire the mutex. This means there is ownership associated with a mutex, and only the owner can release the lock (mutex).

A semaphore is a signaling mechanism (an "I am done, you can carry on" kind of signal). For example, if you are listening to songs (think of it as one task) on your mobile and your friend calls you at the same time, an interrupt is triggered, upon which an interrupt service routine (ISR) signals the call-processing task to wake up.

General Questions:

1. Can a thread acquire more than one lock (Mutex)?

Yes, it is possible that a thread will be in need of more than one resource, hence the locks. If any lock is not available the thread will wait (block) on the lock.

2. Can a mutex be locked more than once?

A mutex is a lock. Only one state (locked/unlocked) is associated with it. However, a recursive mutex can be locked more than once (on POSIX-compliant systems), in which case a count is associated with it, yet it retains only one state (locked/unlocked). The programmer must unlock the mutex as many times as it was locked.

3. What will happen if a non-recursive mutex is locked more than once?

Deadlock. If a thread that has already locked a mutex tries to lock the mutex again, it enters the waiting list of that mutex, which results in deadlock, because no other thread can unlock the mutex. An operating system implementer can take care to identify the owner of the mutex and return immediately if it is already locked by the same thread, to prevent deadlocks.

4. Are binary semaphore and mutex same?

No. We suggest treating them separately, as explained above: signalling vs. locking mechanisms. But a binary semaphore may experience the same critical issues (e.g. priority inversion) associated with a mutex. We will cover these in a later article.

A programmer can prefer mutex rather than creating a semaphore with count 1.

5. What is a mutex and critical section?

Some operating systems use the term critical section in their API. Usually a mutex is a costly operation due to the protection protocols associated with it. Ultimately, the objective of a mutex is atomic access. There are other ways to achieve atomic access, like disabling interrupts, which can be much faster but ruins responsiveness. The alternate (critical section) API makes use of disabling interrupts.

6. What are events?

The semantics of mutex, semaphore, event, critical section, etc. are the same: all are synchronization primitives. They differ in the cost of using them. We should consult the OS documentation for exact details.

7. Can we acquire mutex/semaphore in an Interrupt Service Routine?

An ISR runs asynchronously, in the context of the currently running thread. It is not recommended to make a blocking query for the availability of a synchronization primitive in an ISR. ISRs are meant to be short, and a blocking call to acquire a mutex/semaphore may block the currently running thread. However, an ISR can signal a semaphore or unlock a mutex.

8. What we mean by “thread blocking on mutex/semaphore” when they are not available?

Every synchronization primitive has a waiting list associated with it. When the resource is not available, the requesting thread is moved from the processor's run queue to the waiting list of the synchronization primitive. When the resource becomes available, the highest-priority thread on the waiting list gets the resource (more precisely, it depends on the scheduling policy).

9. Is it necessary that a thread must block always when resource is not available?

Not necessarily. If the design knows what has to be done when the resource is not available, the thread can take up that work (a different code branch). To support such application requirements the OS provides non-blocking APIs.

For example, the POSIX pthread_mutex_trylock() API returns immediately when the mutex is not available, whereas pthread_mutex_lock() blocks the thread until the resource is available.
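
A small sketch of the difference (the lock name and the work being done are illustrative):

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void try_to_work(void)
{
    if (pthread_mutex_trylock(&lock) == 0)   /* returns 0 only if the lock was free */
    {
        /* ... work on the shared resource ... */
        pthread_mutex_unlock(&lock);
    }
    else
    {
        printf("mutex busy, doing something else instead\n");
    }
}

void must_work(void)
{
    pthread_mutex_lock(&lock);    /* blocks until the mutex is available */
    /* ... work on the shared resource ... */
    pthread_mutex_unlock(&lock);
}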

========================

Volatile keyword in C:-
---------------------
Another use for volatile is signal handlers. If you have code like this:

quit = 0;
while (!quit)
{
    /* very small loop which is completely visible to the compiler */
}

The compiler is allowed to notice that the loop body does not touch the quit variable and convert the loop to a while (true) loop, even if the quit variable is set in the signal handler for SIGINT or SIGTERM; the compiler has no way to know that.

However, if the quit variable is declared volatile, the compiler is forced to load it every time, because it can be modified elsewhere. This is exactly what you want in this situation.
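
A more complete, self-contained sketch (assuming POSIX signals; volatile sig_atomic_t is the type the C standard recommends for such a flag):

#include <signal.h>
#include <stdio.h>

volatile sig_atomic_t quit = 0;

void handle_sigint(int sig)
{
    (void)sig;
    quit = 1;               /* setting the flag is all the async handler does */
}

int main()
{
    signal(SIGINT, handle_sigint);
    while (!quit)
    {
        /* small loop; 'volatile' forces the compiler to re-read 'quit' each pass */
    }
    printf("got SIGINT, exiting cleanly\n");
    return 0;
}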

=-==================================================

volatile in C exists to prevent the compiler from automatically caching the value of a variable. It tells the compiler not to cache the value of this variable, so the value of the given volatile variable is read from memory every time it is encountered. This mechanism is used because the value can be modified at any time by the OS, an interrupt handler, or hardware, so using volatile ensures we access a fresh value every time.

======================================================================

What is virtual memory, how is it implemented, and why do operating systems use it?
-----------------------------------------------------------------------------------
Real, or physical, memory exists on RAM chips inside the computer. Virtual memory, as its name suggests, doesn’t physically exist on a memory chip. It is an optimization technique and is implemented by the operating system in order to give an application program the impression that it has more memory than actually exists. Virtual memory is implemented by various operating systems such as Windows, Mac OS X, and Linux.

So how does virtual memory work? Let’s say that an operating system needs 120 MB of memory in order to hold all the running programs, but there’s currently only 50 MB of available physical memory stored on the RAM chips. The operating system will then set up 120 MB of virtual memory, and will use a program called the virtual memory manager (VMM) to manage that 120 MB. The VMM will create a file on the hard disk that is 70 MB (120 – 50) in size to account for the extra memory that’s needed. The O.S. will now proceed to address memory as if there were actually 120 MB of real memory stored on the RAM, even though there’s really only 50 MB. So, to the O.S., it now appears as if the full 120 MB actually exists. It is the responsibility of the VMM to deal with the fact that there is only 50 MB of real memory.
The paging file and the RAM

Now, how does the VMM function? As mentioned before, the VMM creates a file on the hard disk that holds the extra memory that is needed by the O.S., which in our case is 70 MB in size. This file is called a paging file (also known as a swap file), and plays an important role in virtual memory. The paging file combined with the RAM accounts for all of the memory. Whenever the O.S. needs a ‘block’ of memory that’s not in the real (RAM) memory, the VMM takes a block from the real memory that hasn’t been used recently, writes it to the paging file, and then reads the block of memory that the O.S. needs from the paging file. The VMM then takes the block of memory from the paging file, and moves it into the real memory – in place of the old block. This process is called swapping (also known as paging), and the blocks of memory that are swapped are called pages. The group of pages that currently exist in RAM, and that are dedicated to a specific process, is known as the working set for that process.


As mentioned earlier, virtual memory allows us to make an application program think that it has more memory than actually exists. There are two reasons why one would want this: the first is to allow the use of programs that are too big to physically fit in memory. The other reason is to allow for multitasking – multiple programs running at once. Before virtual memory existed, a word processor, e-mail program, and browser couldn’t be run at the same time unless there was enough memory to hold all three programs at once. This would mean that one would have to close one program in order to run the other, but now with virtual memory, multitasking is possible even when there is not enough memory to hold all executing programs at once.

Virtual Memory Can Slow Down Performance
However, virtual memory can slow down performance. If the size of virtual memory is quite large in comparison to the real memory, then more swapping to and from the hard disk will occur as a result. Accessing the hard disk is far slower than using system memory. Using too many programs at once in a system with an insufficient amount of RAM results in constant disk swapping – also called thrashing, which can really slow down a system’s performance.

====================================================================================
Suppose we have a paging system with 4 frames and 12 pages, where the number of frames denotes the number of pages that can be held in RAM at any given time. Assume the pages are accessed by some program in the order shown below, from left to right. Also, assume that the program has just started, so the frames are initially empty. How many page faults will be generated assuming that the LRU (Least Recently Used) algorithm is being used?

Order in which pages are accessed:
3, 4, 2, 1, 4, 7, 2, 5, 3, 6, 1, 3
-------------------------------------

Reading the previous discussion on virtual memory is recommended to better understand this problem. A page fault occurs when a program tries to access a page that is mapped in address space, but not loaded in the physical memory (the RAM). In other words, a page fault occurs when a program can not find a page that it’s looking for in the physical memory, which means that the program would have to access the paging file (which resides on the hard disk) to retrieve the desired page.

The term page fault is a bit misleading as it implies that something went seriously wrong. Although page faults are undesirable – as they result in slow accesses to the hard disk – they are quite common in any operating system that uses virtual memory.

Now, we need to actually solve the problem. The easiest way to do this is to break the problem down into 12 steps (where 12 is the number of pages) to see what happens each time a page is referenced by the program, and at each step see whether a page fault is generated or not. Of course, we want to keep track of what pages are currently in the physical memory (the RAM). The first four page accesses will result in page faults because the frames are initially empty. After that, if the program tries to access a page that’s already in one of the frames then there’s no problem. But if the page that the program is trying to access is not already in one of the frames then that results in a page fault. In this case, we have to determine which page we want to take out (or ‘swap’) from the RAM, and for that we use the LRU algorithm.


Some other algorithm could be used as well – FIFO and NRU are other possibilities – and as a group these are known as page replacement algorithms. Applying the LRU algorithm to this problem is fairly straightforward – you simply remove the page that was least recently used. Proceeding in this manner leads to the chart shown below – you should try this out yourself before looking at the answer.
Page referenced     Page Fault     Resulting List
3     Y     3
4     Y     3,4
2     Y     3,4,2
1     Y     3,4,2,1
4     N     3,2,1,4
7     Y     2,1,4,7
2     N     1,4,7,2
5     Y     4,7,2,5
3     Y     7,2,5,3
6     Y     2,5,3,6
1     Y     5,3,6,1
3     N     5,6,1,3

We can see that 9 page faults will be generated in this scenario.
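
The chart can also be checked programmatically; below is a small sketch (not from the original article) that simulates LRU with 4 frames over the same reference string, keeping the least recently used page at the front of a vector:

#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    const int frames = 4;
    std::vector<int> refs = {3, 4, 2, 1, 4, 7, 2, 5, 3, 6, 1, 3};
    std::vector<int> ram;                 // front = least recently used
    int faults = 0;

    for (int page : refs)
    {
        std::vector<int>::iterator it = std::find(ram.begin(), ram.end(), page);
        if (it == ram.end())              // page fault: page not resident
        {
            ++faults;
            if ((int)ram.size() == frames)
                ram.erase(ram.begin());   // evict the least recently used page
        }
        else
        {
            ram.erase(it);                // hit: remove so it can move to the back
        }
        ram.push_back(page);              // most recently used goes to the back
    }
    std::cout << "page faults: " << faults << std::endl;   // prints 9
    return 0;
}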
=====================================================================================

What is the purpose of swapping in virtual memory?
---------------------------------------------------

It will help a lot to read the previous question on virtual memory before going forward.
Swapping is exchanging data between the hard disk and the RAM

The goal of the virtual memory technique is to make an application think that it has more memory than actually exists. If you read the recommended question then you know that the virtual memory manager (VMM) creates a file on the hard disk called a swap file. Basically, the swap file (also known as a paging file) allows the application to store any extra data that can’t be stored in the RAM – because the RAM has limited memory. Keep in mind that an application program can only use the data when it’s actually in the RAM. Data can be stored in the paging file on the hard disk, but it is not usable until that data is brought into the RAM. Together, the data being stored on the hard disk combined with the data being stored in the RAM comprise the entire data set needed by the application program.

So, the way virtual memory works is that whenever a piece of data needed by an application program cannot be found in the RAM, then the program knows that the data must be in the paging file on the hard disk.

But in order for the program to be able to access that data, it must transfer that data from the hard disk into the RAM. This also means that a piece of existing data in the RAM must be moved to the hard disk in order to make room for the data being brought in. So, you can think of this process as a trade in which an old piece of data is moved from the RAM to the hard disk in exchange for a 'new' piece of data brought into the RAM from the hard disk. This trade is known as swapping or paging. The related term 'page fault' refers to the event that triggers it: a page fault occurs when an application program tries to access a piece of data that is not currently in the RAM, but is in the paging file on the hard disk. Remember that page faults are not desirable since they cause expensive accesses to the hard disk. Expensive in this context means that accessing the hard disk is slow and takes time.
The Purpose Of Swapping
-------------------------
So, we can say that the purpose of swapping, or paging, is to access data being stored in hard disk and to bring it into the RAM so that it can be used by the application program. Remember that swapping is only necessary when that data is not already in the RAM.
Excessive Swapping Causes Thrashing

Excessive use of swapping is called thrashing and is undesirable because it lowers overall system performance, mainly because hard drives are far slower than RAM.

=====================================================================================================

Best case to worst case complexity sequence:-
---------------------------------------------
O(1)       -- constant time
O(log N)   -- logarithmic time
O(N)       -- linear time
O(N log N) -- quasilinear time
O(N^2)     -- quadratic time
O(N^3)     -- cubic time
O(2^N)     -- exponential time

=====================================================================================================

Name Mangling and extern “C” in C++

C++ supports function overloading, i.e., there can be more than one function with the same name but different parameters. How does the C++ compiler distinguish between different functions when it generates object code? It changes the names by adding information about the arguments. This technique of adding additional information to function names is called name mangling. The C++ standard doesn't specify any particular technique for name mangling, so different compilers may append different information to function names.

Consider following declarations of function f()
int  f (void) { return 1; }
int  f (int)  { return 0; }
void g (void) { int i = f(), j = f(0); }

A C++ compiler may mangle above names to following (Source: Wiki)
int  __f_v (void) { return 1; }
int  __f_i (int)  { return 0; }
void __g_v (void) { int i = __f_v(), j = __f_i(0); }



How to handle C symbols when linking from C++?
In C, names are not mangled, as C doesn't support function overloading. So how do we make sure that the name of a symbol is not changed when we link C code into C++? For example, see the following C++ program that uses the printf() function from C.
// Save file as .cpp and use C++ compiler to compile it
int printf(const char *format,...);

int main()
{
    printf("GeeksforGeeks");
    return 0;
}

Output (linker error):

undefined reference to `printf(char const*, ...)'
        ld returned 1 exit status

The reason for this error is simple: the name of printf is mangled by the C++ compiler, and the linker doesn't find a definition of the function under the new name.

The solution to the problem is extern "C" in C++. When some code is put in an extern "C" block, the C++ compiler ensures that the function names are unmangled, i.e. the compiler emits a binary file with their names unchanged, as a C compiler would do.

If we change the above program to following, the program works fine and prints “GeeksforGeeks” on console.
// Save file as .cpp and use C++ compiler to compile it
extern "C"
{
    int printf(const char *format,...);
}

int main()
{
    printf("GeeksforGeeks");
    return 0;
}

Output:

GeeksforGeeks

Therefore, all C-style header files (stdio.h, string.h, etc.) have their declarations wrapped in an extern "C" block like this:
#ifdef __cplusplus
extern "C" {
#endif
    /* Declarations of this file */
#ifdef __cplusplus
}
#endif

The main points discussed above:
1. Since C++ supports function overloading, additional information has to be added to function names (called name mangling) to avoid conflicts in the binary code.
2. Function names are not changed in C, as C doesn't support function overloading. To avoid linking problems, C++ supports the extern "C" block: the C++ compiler makes sure that names inside an extern "C" block are not mangled.


==================================================================================================================================================================================================


Function Pointer in C
-------------------------
In C, like normal data pointers (int *, char *, etc), we can have pointers to functions. Following is a simple example that shows declaration and function call using function pointer.
#include <stdio.h>
// A normal function with an int parameter
// and void return type
void fun(int a)
{
    printf("Value of a is %d\n", a);
}

int main()
{
    // fun_ptr is a pointer to function fun()
    void (*fun_ptr)(int) = &fun;

    /* The above line is equivalent of following two
       void (*fun_ptr)(int);
       fun_ptr = &fun;
    */

    // Invoking fun() using fun_ptr
    (*fun_ptr)(10);

    return 0;
}

Output:

Value of a is 10

Why do we need an extra bracket around function pointers like fun_ptr in above example?
If we remove the brackets, then the expression "void (*fun_ptr)(int)" becomes "void *fun_ptr(int)", which is the declaration of a function that returns a void pointer.

Following are some interesting facts about function pointers.


1) Unlike normal pointers, a function pointer points to code, not data. Typically a function pointer stores the start of executable code.


2) Unlike normal pointers, we do not allocate or de-allocate memory using function pointers.


3) A function’s name can also be used to get the function’s address. For example, in the program below, we have removed the address-of operator ‘&’ in the assignment. We have also changed the function call by removing the *; the program still works.
#include <stdio.h>
// A normal function with an int parameter
// and void return type
void fun(int a)
{
    printf("Value of a is %d\n", a);
}

int main()
{
    void (*fun_ptr)(int) = fun;  // & removed

    fun_ptr(10);  // * removed

    return 0;
}

Output:

Value of a is 10


4) Like normal pointers, we can have an array of function pointers. Below example in point 5 shows syntax for array of pointers.


5) Function pointer can be used in place of switch case. For example, in below program, user is asked for a choice between 0 and 2 to do different tasks.
#include <stdio.h>
void add(int a, int b)
{
    printf("Addition is %d\n", a+b);
}
void subtract(int a, int b)
{
    printf("Subtraction is %d\n", a-b);
}
void multiply(int a, int b)
{
    printf("Multiplication is %d\n", a*b);
}

int main()
{
    // fun_ptr_arr is an array of function pointers
    void (*fun_ptr_arr[])(int, int) = {add, subtract, multiply};
    unsigned int ch, a = 15, b = 10;

    printf("Enter Choice: 0 for add, 1 for subtract and 2 "
            "for multiply\n");
    scanf("%d", &ch);

    if (ch > 2) return 0;

    (*fun_ptr_arr[ch])(a, b);

    return 0;
}

Enter Choice: 0 for add, 1 for subtract and 2 for multiply
2
Multiplication is 150

=========================================================

TCP Connection States:-
-----------------------------
Following is a brief explanation of this handshake. In this context the "client" is the peer requesting a connection and the "server" is the peer accepting a connection. Note that this notation does not reflect client/server relationships as an architectural principle.

    Connection Establishment

        The client sends a SYN message which contains the server's port and the client's Initial Sequence Number (ISN) to the server (active open).
        The server sends back its own SYN and ACK (which consists of the client's ISN + 1).
        The Client sends an ACK (which consists of the server's ISN + 1).
    Connection Tear-down (modified three way handshake).

        The client sends a FIN (active close). This is now a half-closed connection: the client no longer sends data, but is still able to receive data from the server. Upon receiving this FIN, the server enters a passive close state.
        The server sends an ACK (which is the client's FIN sequence + 1).
        The server sends its own FIN.
        The client sends an ACK (which is the server's FIN sequence + 1). Upon receiving this ACK, the server closes the connection.

A half-closed connection can be used to terminate sending data while still receiving data. Socket applications can call shutdown with the second argument set to 1 to enter this state.

Netstat Output:-
----------------------------
The above TCP connection states can be monitored in a network trace under the TCP flags. It is also possible to determine the status of the connection by running the Netstat utility and looking at the State column. Netstat is shipped with Windows NT, Windows 95, and TCP/IP-32 for Windows for Workgroups.

State explanations as shown in Netstat:
State Explanation
------------ --------------------------------------------------------

SYN_SEND Indicates active open.

SYN_RECEIVED Server just received SYN from the client.

ESTABLISHED Client received server's SYN and session is established.

LISTEN Server is ready to accept connection.

NOTE: See documentation for listen() socket call. TCP sockets in listening state are not shown - this is a limitation of NETSTAT. For additional information, please see the following article in the Microsoft Knowledge Base:
134404 NETSTAT.EXE Does Not Show TCP Listen Sockets

FIN_WAIT_1 Indicates active close.

TIME_WAIT Client enters this state after active close.

CLOSE_WAIT Indicates passive close. Server just received first FIN from a client.

FIN_WAIT_2 Client just received acknowledgment of its first FIN from the server.

LAST_ACK Server is in this state when it sends its own FIN.

CLOSED Server received ACK from client and connection is closed.
As an example, consider the following scenario:

A socket application has been terminated, but Netstat reports the socket in a CLOSE_WAIT state. This could indicate that the client properly closed the connection (FIN has been sent), but the server still has its socket open. This could be the result of one instance (among all threads or processes) of the socket not being closed.

NOTE: It is normal to have a socket in the TIME_WAIT state for a long period of time. The time is specified in RFC793 as twice the Maximum Segment Lifetime (MSL). MSL is specified to be 2 minutes. So, a socket could be in a TIME_WAIT state for as long as 4 minutes. Some systems implement different values (less than 2 minutes) for the MSL.

============================================================================================
implement your new sizeof operator using bit wise.
--------------------------------------------------
template <class T>
int sizeOf(T x = 0){
   
    return (char *)(&x + 1) - (char *)&x;
}

Suppose x is an integer; then &x is its address, say 1000, and &x + 1 is the address where the next object of the same type would start (1004). Casting both to char * and subtracting gives the size in bytes.

===============================================================================================
Check whether two numbers are equal without using any comparison operator.

int compare (int i, int j) {
    return i ^ j;   /* 0 means the numbers are equal, non-zero means they differ */
}
 
 second method:-
 if(!(a-b))
    printf("Equal\n");
else
    printf("Not Equal\n");
   
==================================================================================================

Given an int, write code to return the number of bits that are 1 in O(m) time, where m is the number of bits that are 1.

int numbits(unsigned int n)
{
    int i;
    for (i = 0; n > 0; i++)
        n &= (n - 1);   /* clears the lowest set bit each pass */
    return i;
}
=======================================================================================================

You have two numbers decomposed in binary representation, write a function that sums them and returns the result.

Input: 100011, 100100
Output: 1000111


Theory

sum = a xor b xor c
carry = ab + bc+ ac

Code

int addBinary(int a1[], int a2[], int result[]){   /* bit arrays, least significant bit first */
    int i, c = 0;
    for(i = 0; i < 8 ; i++){                        /* assumes 8-bit operands */
        result[i] = ((a1[i] ^ a2[i]) ^ c);          /* sum = a xor b xor c */
        c = ((a1[i] & a2[i]) | (a1[i] & c)) | (a2[i] & c);  /* carry = ab + bc + ca */
    }
    result[i] = c;                                  /* result[] needs room for 9 bits */
    return c;
}

 =========================================================

Consider the statement
result = a ? b : c;
Implement the above statement without using any conditional statements.

!!a * b + !a * c;

=======================================================

Assuming you have three N bit unsigned integers a, b and c, what is the min number of bits you would need to store the result of a * b + c?

Another easy way to understand this: consider N to be 8 bits, or 1 byte. An unsigned int of 1 byte can have a max value of 255. Now applying the given equation with the maximum values possible:

a*b + c = 255*255 + 255 = 65280

An unsigned 8-bit value has a max of 255; similarly, an unsigned 16-bit value has a max of 65535, and 65280 is less than 65535. Hence we need 16 bits for storing the result, i.e. 2N bits. In general, (2^N - 1)^2 + (2^N - 1) = 2^(2N) - 2^N, which always fits in 2N bits.

===================================================================

Given two numbers "a" and "b" and an average formula (a+b)/2. Find one condition where it wont work. Also, give solution to it

suppose a and b are int....
and a and b = (2^31)-1;

so (a+b) will cause overflow and hence (a+b)/2 wnt work

so we need to use (a/2) + (b/2) keeping in mind if they are even or odd so that the fractional part is not lost....

int avg = (a/2) + (b/2) + ((a%2 + b%2)/2)

===================================================================================

Write a function that takes a number and a bit position and returns true if that bit is set to 1, and false otherwise.

bool ret_result(int number, int pos) {
    int k = 1;
    for (int i = 0; i < pos; i++) {
        k = k << 1;              /* k = 1 << pos */
    }
    if ((number & k) > 0) {      /* parentheses needed: > binds tighter than & */
        return true;
    }
    else {
        return false;
    }
}

============================================================================

calculate no of zeros in given integer.
---------------------------------------
count = 0
while (n>0) {
count = count + !(n&1)
n=n>>1 //Right shift by 1, means divide by 2
}
return count

To count how many zero digits the number has in decimal format:

count = 0
while (n>0) {
count = count + (n%10>0 ? 0 : 1)
n = n/10;
}
return count

ex: suppose n is 8 00001000
then first n&1 will give 0
then n >> 1 will give 4 00000100, then n&1 will give 0 again ...
now n >> 1 again will give 2 00000010, then n&1 will give 0 again....
now n >> 1 again will give 1 00000001 then n &1 will give 1....
============================================================================================

left shift and right shift:-
Left shifting a number by n = number * 2^n.
This means if a number is left shifted by 1, the answer is number * 2.
Zeros are appended on the right side.

right shift:-
Right shifting a number by n = number / 2^n.
This means if a number is right shifted by 1, the answer is number / 2.
Zeros are appended on the left side (for unsigned numbers).

======================================================================

In the page 90 of Gayle's cracking the coding interview book, there was a method defined which is used to get the bit at particular position.

Method 1:-
left shift number 1 by given position-1 then 'and' this with number
boolean getBit(int num, int i) {
return ((num&(1<<i-1))!=0);
}

second method:-
right shift the number by (position - 1), then 'and' the result with 1.
suppose n = 8 (00001000)
position is 4.

k = n >> (4-1) //00000001
return (k&1) //00000001 & 00000001

=============================================================================

ip address passed as a string , convert it in to int.

#include<stdio.h>
#include<string.h>
int main()
{
unsigned ans=0;
char input[] = "10.1.10.12";
char *tok = strtok(input,".");
int i;
char delims[] = ".";
for(i=3;tok!=NULL;i--)
{
        ans |= atoi(tok)<<(8*i); // because of ipv4 ip address is of 32 bit
        tok = strtok(NULL,".");
}
printf("%u\n",ans);
}

=============================================================================

Write a function that returns true if the bit stream has an alternating pattern. For example 000000101010 would return true and 0000001010000 would return false. Also I had to write test cases for the function.

In a loop while number > 0: take the lowest bit (number & 1), compare it with a toggle flag that flips each iteration, right shift the number by 1, and return false on the first mismatch; if the loop finishes, return true.
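
A minimal sketch of this idea, written here by comparing each pair of adjacent bits instead of keeping an explicit toggle flag (equivalent logic):

bool isAlternating(unsigned int n)
{
    while (n > 1)
    {
        // the two lowest remaining bits must differ for the pattern to alternate
        if ((n & 1) == ((n >> 1) & 1))
            return false;
        n >>= 1;
    }
    return true;
}

With the two examples from the question, isAlternating(42) /* 000000101010 */ returns true and isAlternating(80) /* 0000001010000 */ returns false, which would be the first two test cases.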

=====================================================================================

find the index of the highest bit set of a 32-bit number (without loops obviously)

with a loop:-
start with index = 0; keep right shifting the number by 1 and incrementing index until the number becomes 0; the answer is index - 1 (the number of shifts needed to reach 0, minus one).
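
For the no-loop version, one option, assuming GCC or Clang (which provide the __builtin_clz intrinsic for non-zero inputs), is:

// index of the highest set bit (0-based); assumes GCC/Clang and x != 0
int highestBitIndex(unsigned int x)
{
    return 31 - __builtin_clz(x);   // __builtin_clz counts leading zero bits of the 32-bit value
}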

=====================================================================================

How do you determine if every digit in the binary representation of a number is a one?

example: 7 (0111) or 15 (1111)

if ((n & (n+1)) == 0) then return true.

ex: n = 0111 and n+1 = 1000; and-ing these two gives 0, so the test succeeds


second method:-
simply:
while (num > 0)
{
    if (num & 1)          /* the lowest bit must be 1 */
    {
        num >>= 1;        /* move on to the next bit */
    }
    else
    {
        return false;
    }
}
return true;

========================================================================================

Count the no. of 1's in a 32 bit no. where there are mostly 0's in the number?

while (number) {
    number &= (number - 1);   /* clears the lowest set bit */
    count++;
}

======================================================================================

return MSB of a number..

return (!(!(a >> 31)))

For a 32-bit number the MSB is the leftmost bit; right shifting the number by 31 moves that bit to position 0, and the double negation (!!) normalizes the result to 0 or 1.

=========================================================================================

swap variables without using temp:-

a = a ^ b;
b = a ^ b;
a = a ^ b;

(Note: if a and b are the same variable, this zeroes it out; guard against that case.)

====================================================================================

C Program to find whether machine is little endian or big endian.

#include <stdio.h>
int main()
{
   unsigned int i = 1;
   char *c = (char*)&i;
   if (*c)   
       printf("Little endian");
   else
       printf("Big endian");
   getchar();
   return 0;
}

or

#include <stdio.h>
int main ()
{
  unsigned int x = 0x76543210;
  char *c = (char*) &x;

  printf ("*c is: 0x%x\n", *c);
  if (*c == 0x10)
  {
    printf ("Underlying architecture is little endian. \n");
  }
  else
  {
     printf ("Underlying architecture is big endian. \n");
  }

  return 0;
}

===========================================================

Little and Big Endian Mystery
What are these?
Little and big endian are two ways of storing multibyte data-types ( int, float, etc). In little endian machines, last byte of binary representation of the multibyte data-type is stored first. On the other hand, in big endian machines, first byte of binary representation of the multibyte data-type is stored first.

Suppose an integer is stored as 4 bytes (for those using DOS-based compilers such as C++ 3.0, an integer is 2 bytes); then a variable x with value 0x01234567 will be stored as follows.



(Figure: memory representation of the integer 0x01234567 on big endian and little endian machines.)
How to see memory representation of multibyte data types on your machine?
Here is a sample C code that shows the byte representation of int, float and pointer.

#include <stdio.h>

/* function to show bytes in memory, from location start to start+n*/
void show_mem_rep(char *start, int n)
{
    int i;
    for (i = 0; i < n; i++)
         printf(" %.2x", start[i]);
    printf("\n");
}

/*Main function to call above function for 0x01234567*/
int main()
{
   int i = 0x01234567;
   show_mem_rep((char *)&i, sizeof(i));
   getchar();
   return 0;
}
When the above program is run on a little endian machine, it gives “67 45 23 01” as output, while if it is run on a big endian machine, it gives “01 23 45 67” as output.

Is there a quick way to determine endianness of your machine?
There are many ways of determining the endianness of your machine. Here is one quick way of doing it.

#include <stdio.h>
int main()
{
   unsigned int i = 1;
   char *c = (char*)&i;
   if (*c)   
       printf("Little endian");
   else
       printf("Big endian");
   getchar();
   return 0;
}
In the above program, a character pointer c points to an integer i. Since the size of a character is 1 byte, when the character pointer is dereferenced it will contain only the first byte of the integer. If the machine is little endian then *c will be 1 (because the least significant byte is stored first), and if the machine is big endian then *c will be 0.

Does endianness matter for programmers?
Most of the time the compiler takes care of endianness; however, endianness becomes an issue in the following cases.

It matters in network programming: suppose you write integers to a file on a little endian machine and you transfer this file to a big endian machine. Unless there is a little endian to big endian transformation, the big endian machine will read the file in reverse byte order. You can find such a practical example here.

Standard byte order for networks is big endian, also known as network byte order. Before transferring data on network, data is first converted to network byte order (big endian).

Sometimes it matters when you are using type casting; the program below is an example.

#include <stdio.h>
int main()
{
    unsigned char arr[2] = {0x01, 0x00};
    unsigned short int x = *(unsigned short int *) arr;
    printf("%d", x);
    getchar();
    return 0;
}
In the above program, a char array is typecast to an unsigned short integer type. When the program is run on a little endian machine it prints 1, while on a big endian machine it prints 256. To make programs endianness independent, this style of type punning should be avoided.

What are bi-endians?
Bi-endian processors can run in both modes little and big endian.

What are the examples of little, big endian and bi-endian machines ?
Intel based processors are little endians. ARM processors were little endians. Current generation ARM processors are bi-endian.

Motorola 68K processors are big endian. PowerPC (by Motorola) and SPARC (by Sun) processors were big endian. Current versions of these processors are bi-endian.

Does endianness affect file formats?
File formats which have 1 byte as the basic unit are independent of endianness, e.g. ASCII files. Other file formats use some fixed endianness format, e.g. JPEG files are stored in big endian format.


Which one is better — little endian or big endian?
The terms little and big endian come from Gulliver's Travels by Jonathan Swift. Two groups could not agree on which end an egg should be opened: the little end or the big end. Just like the egg issue, there is no technological reason to choose one byte ordering convention over the other; hence the arguments degenerate into bickering about sociopolitical issues. As long as one of the conventions is selected and adhered to consistently, the choice is arbitrary.

===================================
program to find whether a machine (more precisely, the build) is 32-bit or 64-bit:
#include <stdio.h>
int main(void){
     switch(sizeof(void*)){   /* pointer size: 4 bytes on a 32-bit build, 8 on a 64-bit build */
        case 4: printf("32\n");
        break;
        case 8: printf("64\n");
        break;
    }
    return 0;
}
=====================================================


Hashing | Set 1 (Introduction)
------------------------------

Suppose we want to design a system for storing employee records keyed using phone numbers. And we want following queries to be performed efficiently:

    Insert a phone number and corresponding information.
    Search a phone number and fetch the information.
    Delete a phone number and related information.

We can think of using the following data structures to maintain information about different phone numbers.

    Array of phone numbers and records.
    Linked List of phone numbers and records.
    Balanced binary search tree with phone numbers as keys.
    Direct Access Table.

For arrays and linked lists, we need to search in a linear fashion, which can be costly in practice. If we use arrays and keep the data sorted, then a phone number can be searched in O(Logn) time using Binary Search, but insert and delete operations become costly as we have to maintain sorted order.

With balanced binary search tree, we get moderate search, insert and delete times. All of these operations can be guaranteed to be in O(Logn) time.

Another solution that one can think of is to use a direct access table where we make a big array and use phone numbers as index in the array. An entry in array is NIL if phone number is not present, else the array entry stores pointer to records corresponding to phone number. Time complexity wise this solution is the best among all, we can do all operations in O(1) time. For example to insert a phone number, we create a record with details of given phone number, use phone number as index and store the pointer to the created record in table.
This solution has many practical limitations. The first problem with this solution is that the extra space required is huge: for example, if the phone number is n digits, we need O(m * 10^n) space for the table, where m is the size of a pointer to a record. Another problem is that an integer type in a programming language may not be able to store an n-digit number.

Due to above limitations Direct Access Table cannot always be used. Hashing is the solution that can be used in almost all such situations and performs extremely well compared to above data structures like Array, Linked List, Balanced BST in practice. With hashing we get O(1) search time on average (under reasonable assumptions) and O(n) in worst case.

Hashing is an improvement over Direct Access Table. The idea is to use hash function that converts a given phone number or any other key to a smaller number and uses the small number as index in a table called hash table.

Hash Function: A function that converts a given big phone number to a small practical integer value. The mapped integer value is used as an index in hash table. In simple terms, a hash function maps a big number or string to a small integer that can be used as index in hash table.
A good hash function should have following properties
1) Efficiently computable.
2) Should uniformly distribute the keys (Each table position equally likely for each key)

For example, for phone numbers, a bad hash function would be to take the first three digits. A better function is to consider the last three digits. Please note that this may not be the best hash function; there may be better ways.

Hash Table: An array that stores pointers to records corresponding to a given phone number. An entry in hash table is NIL if no existing phone number has hash function value equal to the index for the entry.

Collision Handling: Since a hash function gets us a small number for a key which is a big integer or string, there is possibility that two keys result in same value. The situation where a newly inserted key maps to an already occupied slot in hash table is called collision and must be handled using some collision handling technique. Following are the ways to handle collisions:

    Chaining:The idea is to make each cell of hash table point to a linked list of records that have same hash function value. Chaining is simple, but requires additional memory outside the table.
    Open Addressing: In open addressing, all elements are stored in the hash table itself. Each table entry contains either a record or NIL. When searching for an element, we one by one examine table slots until the desired element is found or it is clear that the element is not in the table.
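
As an illustration of the idea (not part of the original text), C++'s std::unordered_map is a hash table whose collision handling is chaining-style, and the phone-book operations above map onto it directly. The record type here is a made-up example:

#include <iostream>
#include <string>
#include <unordered_map>

struct Record {
    std::string name;
    std::string address;
};

int main()
{
    std::unordered_map<std::string, Record> phoneBook;

    // Insert a phone number and corresponding information
    phoneBook["9876543210"] = {"Alice", "12 Park Street"};

    // Search a phone number and fetch the information
    std::unordered_map<std::string, Record>::iterator it = phoneBook.find("9876543210");
    if (it != phoneBook.end())
        std::cout << it->second.name << std::endl;

    // Delete a phone number and related information
    phoneBook.erase("9876543210");
    return 0;
}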

   
===========================================================================================================
Find whether an array is subset of another array:
-------------------------------------------------
Given two arrays: arr1[0..m-1] and arr2[0..n-1]. Find whether arr2[] is a subset of arr1[] or not. Both the arrays are not in sorted order. It may be assumed that elements in both array are distinct.

Examples:
Input: arr1[] = {11, 1, 13, 21, 3, 7}, arr2[] = {11, 3, 7, 1}
Output: arr2[] is a subset of arr1[]

Input: arr1[] = {1, 2, 3, 4, 5, 6}, arr2[] = {1, 2, 4}
Output: arr2[] is a subset of arr1[]

Input: arr1[] = {10, 5, 2, 23, 19}, arr2[] = {19, 5, 3}
Output: arr2[] is not a subset of arr1[]

Method 1 (Simple)
Use two loops: The outer loop picks all the elements of arr2[] one by one. The inner loop linearly searches for the element picked by outer loop. If all elements are found then return 1, else return 0.

Time Complexity: O(m*n)

Method 2 (Use Sorting and Binary Search)
---------------------------------------
1) Sort arr1[] O(mLogm)
2) For each element of arr2[], do binary search for it in sorted arr1[].
         a) If the element is not found then return 0.
3) If all elements are present then return 1.

Time Complexity: O(mLogm + nLogm). Please note that this is the complexity if an O(mLogm) algorithm is used for sorting, which is not the case in the code referred to above; that code uses Quick Sort, whose worst case time complexity is O(m^2).

Method 3 (Use Sorting and Merging )
------------------------------------
1) Sort both arrays: arr1[] and arr2[] O(mLogm + nLogn)
2) Use Merge type of process to see if all elements of sorted arr2[] are present in sorted arr1[].

Thanks to Parthsarthi for suggesting this method.
/* Return 1 if arr2[] is a subset of arr1[] */
bool isSubset(int arr1[], int arr2[], int m, int n)
{
    int i = 0, j = 0;
    
    if(m < n)
       return 0;

    quickSort(arr1, 0, m-1);
    quickSort(arr2, 0, n-1);
    while( i < n && j < m )
    {
        if( arr1[j] <arr2[i] )
            j++;
        else if( arr1[j] == arr2[i] )
        {
            j++;
            i++;
        }
        else if( arr1[j] > arr2[i] )
            return 0;
    }
 
    if( i < n )
        return 0;
    else
        return 1;
}

Time Complexity: O(mLogm + nLogn), which is better than method 2. Please note that this is the complexity if an O(nLogn) algorithm is used for sorting both arrays, which is not the case in the above code; the above code uses Quick Sort, whose worst case time complexity is O(n^2).

--------------------------------------------------------------------

Method 4 (Use Hashing)
---------------------
1) Create a Hash Table for all the elements of arr1[].
2) Traverse arr2[] and search for each element of arr2[] in the Hash Table. If element is not found then return 0.
3) If all elements are found then return 1.
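
A sketch of Method 4 using std::unordered_set (the function name is kept from the earlier method for comparison):

#include <unordered_set>

/* Return true if every element of arr2[] appears in arr1[] */
bool isSubsetHashing(int arr1[], int arr2[], int m, int n)
{
    std::unordered_set<int> seen(arr1, arr1 + m);   // hash all elements of arr1[]
    for (int i = 0; i < n; i++)
        if (seen.find(arr2[i]) == seen.end())
            return false;                           // an element of arr2[] is missing
    return true;
}

Expected time is O(m + n), at the cost of O(m) extra space for the hash table.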

========================================================================================================================================

Given an array A[] and a number x, check for pair in A[] with sum as x:
-------------------------------------------------------------------------

Write a C program that, given an array A[] of n numbers and another number x, determines whether or not there exist two elements in S whose sum is exactly x
METHOD 1 (Use Sorting)

Algorithm:

hasArrayTwoCandidates (A[], ar_size, sum)
1) Sort the array in non-decreasing order.
2) Initialize two index variables to find the candidate
   elements in the sorted array.
       (a) Initialize first to the leftmost index: l = 0
       (b) Initialize second  the rightmost index:  r = ar_size-1
3) Loop while l < r.
       (a) If (A[l] + A[r] == sum)  then return 1
       (b) Else if( A[l] + A[r] <  sum )  then l++
       (c) Else r--   
4) No candidates in whole array - return 0

Time Complexity: Depends on which sorting algorithm we use. If we use Merge Sort or Heap Sort then Θ(n log n) in the worst case. If we use Quick Sort then O(n^2) in the worst case.
Auxiliary Space : Again, depends on sorting algorithm. For example auxiliary space is O(n) for merge sort and O(1) for Heap Sort.

Example:
Let Array be {1, 4, 45, 6, 10, -8} and sum to find be 16

Sort the array
A = {-8, 1, 4, 6, 10, 45}

Initialize l = 0, r = 5
A[l] + A[r] ( -8 + 45) > 16 => decrement r. Now r = 4, A[r] = 10
A[l] + A[r] ( -8 + 10) < 16 => increment l. Now l = 1
A[l] + A[r] ( 1 + 10) < 16 => increment l. Now l = 2
A[l] + A[r] ( 4 + 10) < 16 => increment l. Now l = 3
A[l] + A[r] ( 6 + 10) == 16 => Found candidates (return 1)

Note: If there is more than one pair having the given sum, this algorithm reports only one. It can easily be extended for that, though.
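
A sketch of METHOD 1 following the algorithm above (std::sort stands in for any O(n log n) sort):

#include <algorithm>

/* Returns 1 if A[] contains two elements whose sum is exactly 'sum' */
int hasArrayTwoCandidates(int A[], int ar_size, int sum)
{
    std::sort(A, A + ar_size);        // non-decreasing order
    int l = 0, r = ar_size - 1;
    while (l < r)
    {
        if (A[l] + A[r] == sum)
            return 1;
        else if (A[l] + A[r] < sum)
            l++;                      // need a larger sum
        else
            r--;                      // need a smaller sum
    }
    return 0;                         // no candidates in the whole array
}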
-----------------------------------------------------------

METHOD 2 (Use Hash Map)
Thanks to Bindu for suggesting this method and thanks to Shekhu for providing code.
This method works in O(n) time if range of numbers is known.
Let sum be the given sum and A[] be the array in which we need to find pair.

1) Initialize Binary Hash Map M[] = {0, 0, ...}
2) Do following for each element A[i] in A[]
   (a)    If M[x - A[i]] is set then print the pair (A[i], x - A[i])
   (b)    Set M[A[i]]

#include <stdio.h>
#include <stdbool.h>   /* for bool when compiled as C */
#define MAX 100000

void printPairs(int arr[], int arr_size, int sum)
{
  int i, temp;
  bool binMap[MAX] = {0}; /*initialize hash map as 0*/

  for (i = 0; i < arr_size; i++)
  {
      temp = sum - arr[i];
      if (temp >= 0 && binMap[temp] == 1)
         printf("Pair with given sum %d is (%d, %d) \n",
                 sum, arr[i], temp);
      binMap[arr[i]] = 1;
  }
}

/* Driver program to test above function */
int main()
{
    int A[] = {1, 4, 45, 6, 10, 8};
    int n = 16;
    int arr_size = sizeof(A)/sizeof(A[0]);

    printPairs(A, arr_size, n);

    getchar();
    return 0;
}

Time Complexity: O(n)
Output:

Pair with given sum 16 is (10, 6)

Auxiliary Space: O(R) where R is range of integers.

It also works if the range of numbers includes negative numbers. All we have to do for negative numbers is make everything positive by adding the absolute value of the smallest negative integer to all numbers (and adjust the target sum accordingly).

=============================================================================================================================================
Majority Element:-
----------------------
Majority Element: A majority element in an array A[] of size n is an element that appears more than n/2 times (and hence there is at most one such element).

Write a function which takes an array and emits the majority element (if it exists), otherwise prints NONE as follows:

       I/P : 3 3 4 2 4 4 2 4 4
       O/P : 4

       I/P : 3 3 4 2 4 4 2 4
       O/P : NONE

METHOD 1 (Basic)
The basic solution is to have two loops and keep track of maximum count for all different elements. If maximum count becomes greater than n/2 then break the loops and return the element having maximum count. If maximum count doesn’t become more than n/2 then majority element doesn’t exist.

Time Complexity: O(n*n).
Auxiliary Space : O(1).



METHOD 2 (Using Binary Search Tree)
Thanks to Sachin Midha for suggesting this solution.

Node of the Binary Search Tree (used in this approach) will be as follows.
struct tree
{
  int element;
  int count;
}BST;

Insert elements in BST one by one and if an element is already present then increment the count of the node. At any stage, if count of a node becomes more than n/2 then return.
The method works well for cases where n/2 + 1 occurrences of the majority element are present at the start of the array, for example {1, 1, 1, 1, 1, 2, 3, 4}.

Time Complexity: If a binary search tree is used then time complexity will be O(n^2). If a self-balancing-binary-search tree is used then O(nlogn)
Auxiliary Space: O(n)


METHOD 3 (Using Hashing)

If the range is known, create a hash table with the number as key and its frequency as value. For each input number, increment its frequency and keep track of the maximum frequency seen so far. At the end, compare the maximum frequency with n/2 and report the corresponding element, or NONE if no element appears more than n/2 times.

Space complexity: O(R) if the range R is known
Time Complexity: O(n)
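A C++ sketch of the hashing idea above, using std::unordered_map so that the range need not be known in advance. The function name findMajority is an illustrative assumption.

#include <iostream>
#include <unordered_map>
using namespace std;

// Prints the majority element of arr[] if it exists, otherwise NONE
void findMajority(int arr[], int n)
{
    unordered_map<int, int> freq;       // number -> frequency
    for (int i = 0; i < n; i++)
    {
        if (++freq[arr[i]] > n / 2)     // increment frequency and compare with n/2
        {
            cout << arr[i] << endl;
            return;
        }
    }
    cout << "NONE" << endl;
}

int main()
{
    int arr[] = {3, 3, 4, 2, 4, 4, 2, 4, 4};
    findMajority(arr, sizeof(arr)/sizeof(arr[0]));   // prints 4
    return 0;
}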

======================================================================================================================

Find the Missing Number
-------------------

You are given a list of n-1 integers and these integers are in the range of 1 to n. There are no duplicates in list. One of the integers is missing in the list. Write an efficient code to find the missing integer.

Example:
I/P    [1, 2, 4, 6, 3, 7, 8]
O/P    5



METHOD 1(Use sum formula)
Algorithm:

1. Get the sum of numbers
       total = n*(n+1)/2
2. Subtract all the numbers from the sum and
   you will get the missing number.

Program:
#include<stdio.h>

/* getMissingNo takes array and size of array as arguments*/
int getMissingNo (int a[], int n)
{
    int i, total;
    total  = (n+1)*(n+2)/2;  
    for ( i = 0; i< n; i++)
       total -= a[i];
    return total;
}

/*program to test above function */
int main()
{
    int a[] = {1,2,4,5,6};
    int miss = getMissingNo(a,5);
    printf("%d", miss);
    getchar();
}

Time Complexity: O(n)

============================================================================================
Program for array rotation:-
-------------------------------
METHOD 1 (Use temp array)

Input arr[] = [1, 2, 3, 4, 5, 6, 7], d = 2, n =7
1) Store d elements in a temp array
   temp[] = [1, 2]
2) Shift rest of the arr[]
   arr[] = [3, 4, 5, 6, 7, 6, 7]
3) Store back the d elements
   arr[] = [3, 4, 5, 6, 7, 1, 2]

Time complexity O(n)
Auxiliary Space: O(d)



METHOD 2 (Rotate one by one)

leftRotate(arr[], d, n)
start
  For i = 0 to i < d
    Left rotate all elements of arr[] by one
end

To rotate by one, store arr[0] in a temporary variable temp, move arr[1] to arr[0], arr[2] to arr[1] …and finally temp to arr[n-1]
Time complexity: O(n*d)
Auxiliary Space: O(1)
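A C++ sketch of METHOD 2 above (rotate one by one). The helper name leftRotateByOne is introduced here for illustration.

#include <iostream>
using namespace std;

// Moves every element of arr[] one position to the left; arr[0] goes to the end
void leftRotateByOne(int arr[], int n)
{
    int temp = arr[0];
    for (int i = 0; i < n - 1; i++)
        arr[i] = arr[i + 1];
    arr[n - 1] = temp;
}

// Rotates arr[] to the left by d positions, one rotation at a time
void leftRotate(int arr[], int d, int n)
{
    for (int i = 0; i < d; i++)
        leftRotateByOne(arr, n);
}

int main()
{
    int arr[] = {1, 2, 3, 4, 5, 6, 7};
    leftRotate(arr, 2, 7);
    for (int i = 0; i < 7; i++)
        cout << arr[i] << " ";          // prints 3 4 5 6 7 1 2
    return 0;
}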

==================================================================================================================================================================

Find the smallest and second smallest element in an array

Write an efficient C program to find smallest and second smallest element in an array.

Example:

Input:  arr[] = {12, 13, 1, 10, 34, 1}
Output: The smallest element is 1 and
        second Smallest element is 10

A Simple Solution is to sort the array in increasing order. The first two elements in sorted array would be two smallest elements. Time complexity of this solution is O(n Log n).

A Better Solution is to scan the array twice. In first traversal find the minimum element. Let this element be x. In second traversal, find the smallest element greater than x. Time complexity of this solution is O(n).

The above solution requires two traversals of input array.

An Efficient Solution can find the minimum two elements in one traversal. Below is complete algorithm.

Algorithm:

1) Initialize both first and second smallest as INT_MAX
   first = second = INT_MAX
2) Loop through all the elements.
   a) If the current element is smaller than first, then update first
       and second.
   b) Else if the current element is smaller than second and not equal to first, then update
    second
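A C++ sketch of this single-traversal algorithm. The function name printTwoSmallest is introduced here for illustration; it assumes the array has at least two distinct elements.

#include <climits>
#include <iostream>
using namespace std;

void printTwoSmallest(int arr[], int n)
{
    int first = INT_MAX, second = INT_MAX;
    for (int i = 0; i < n; i++)
    {
        if (arr[i] < first)                 // new smallest: old smallest becomes second
        {
            second = first;
            first = arr[i];
        }
        else if (arr[i] < second && arr[i] != first)   // lies between first and second
            second = arr[i];
    }
    cout << "The smallest element is " << first
         << " and second smallest element is " << second << endl;
}

int main()
{
    int arr[] = {12, 13, 1, 10, 34, 1};
    printTwoSmallest(arr, sizeof(arr)/sizeof(arr[0]));  // prints 1 and 10
    return 0;
}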
   
============================================================================================================

Two elements whose sum is closest to zero

Question: An Array of integers is given, both +ve and -ve. You need to find the two elements such that their sum is closest to zero.

For example, for an array such as {1, 60, -10, 70, -80, 85}, the program should print -80 and 85.

METHOD 1 (Simple)
For each element, find the sum of it with every other element in the array and compare sums. Finally, return the minimum sum.

Time complexity: O(n^2)



METHOD 2 (Use Sorting)
Thanks to baskin for suggesting this approach. It is based on the same two-pointer technique used earlier for the pair-with-given-sum problem.

Algorithm
1) Sort all the elements of the input array.
2) Use two index variables l and r to traverse from left and right ends respectively. Initialize l as 0 and r as n-1.
3) sum = a[l] + a[r]
4) If sum is -ve, then l++
5) If sum is +ve, then r--
6) Keep track of abs min sum.
7) Repeat steps 3, 4, 5 and 6 while l < r. (A sketch of the implementation follows below.)

Time complexity: O(n log n)
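A C++ sketch of the sorting approach above. The function name minAbsSumPair and the sample array are assumptions chosen to be consistent with the expected output stated earlier; the code assumes the array has at least two elements.

#include <algorithm>
#include <climits>
#include <cstdlib>
#include <iostream>
using namespace std;

// Prints the two elements whose sum is closest to zero
void minAbsSumPair(int arr[], int n)
{
    sort(arr, arr + n);                        // step 1
    int l = 0, r = n - 1;                      // step 2
    int minSum = INT_MAX, minL = l, minR = r;
    while (l < r)                              // step 7
    {
        int sum = arr[l] + arr[r];             // step 3
        if (abs(sum) < abs(minSum))            // step 6: track minimum |sum|
        {
            minSum = sum;
            minL = l;
            minR = r;
        }
        if (sum < 0) l++;                      // step 4
        else         r--;                      // step 5
    }
    cout << "The two elements are " << arr[minL]
         << " and " << arr[minR] << endl;
}

int main()
{
    int arr[] = {1, 60, -10, 70, -80, 85};
    minAbsSumPair(arr, 6);                     // prints -80 and 85
    return 0;
}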
=======================================================================================

Maximum sum such that no two elements are adjacent

A tempting idea is to take the maximum of the sum of elements at odd indexes and the sum of elements at even indexes, but this is incorrect (it fails for {5, 1, 1, 5}, where the answer is 10). The standard approach scans the array once and maintains two values: incl, the maximum sum including the current element, and excl, the maximum sum excluding it. At each step the new excl is the maximum of the previous incl and excl, and the new incl is the previous excl plus the current element; the answer is max(incl, excl) at the end.
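A minimal C++ sketch of the incl/excl scan just described. The name maxSumNoAdjacent is an illustrative assumption.

#include <algorithm>
#include <iostream>
using namespace std;

// Returns the maximum sum of elements such that no two picked elements are adjacent
int maxSumNoAdjacent(int arr[], int n)
{
    if (n == 0) return 0;
    int incl = arr[0];   // max sum including the previous element
    int excl = 0;        // max sum excluding the previous element
    for (int i = 1; i < n; i++)
    {
        int newExcl = max(incl, excl);   // best sum that excludes arr[i]
        incl = excl + arr[i];            // best sum that includes arr[i]
        excl = newExcl;
    }
    return max(incl, excl);
}

int main()
{
    int arr[] = {5, 5, 10, 100, 10, 5};
    cout << maxSumNoAdjacent(arr, 6) << endl;   // prints 110 (5 + 100 + 5)
    return 0;
}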

========================================================================================


Segregate 0s and 1s in an array

Asked by kapil.

You are given an array of 0s and 1s in random order. Segregate 0s on left side and 1s on right side of the array. Traverse array only once.

Input array   =  [0, 1, 0, 1, 0, 0, 1, 1, 1, 0]
Output array =  [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

Method 1 (Count 0s or 1s)
Thanks to Naveen for suggesting this method.
1) Count the number of 0s. Let count be C.
2) Once we have count, we can put C 0s at the beginning and 1s at the remaining n - C positions in array.

Time Complexity: O(n)

The method 1 traverses the array two times. Method 2 does the same in a single pass.



Method 2 (Use two indexes to traverse)
Maintain two indexes. Initialize first index left as 0 and second index right as n-1.

Do following while left < right
a) Keep incrementing index left while there are 0s at it
b) Keep decrementing index right while there are 1s at it
c) If left < right then exchange arr[left] and arr[right]

Implementation:
// C program to sort a binary array in one pass
#include<stdio.h>

/*Function to put all 0s on left and all 1s on right*/
void segregate0and1(int arr[], int size)
{
    /* Initialize left and right indexes */
    int left = 0, right = size-1;

    while (left < right)
    {
        /* Increment left index while we see 0 at left */
        while (arr[left] == 0 && left < right)
            left++;

        /* Decrement right index while we see 1 at right */
        while (arr[right] == 1 && left < right)
            right--;

        /* If left is smaller than right then there is a 1 at left
          and a 0 at right.  Exchange arr[left] and arr[right]*/
        if (left < right)
        {
            arr[left] = 0;
            arr[right] = 1;
            left++;
            right--;
        }
    }
}

/* driver program to test */
int main()
{
    int arr[] = {0, 1, 0, 1, 1, 1};
    int i, arr_size = sizeof(arr)/sizeof(arr[0]);

    segregate0and1(arr, arr_size);

    printf("array after segregation ");
    for (i = 0; i < arr_size; i++)
        printf("%d ", arr[i]);

    getchar();
    return 0;
}

Time Complexity: O(n)


================================================



Implement Stack using Queues

The problem is opposite of this post. We are given a Queue data structure that supports standard operations like enqueue() and dequeue(). We need to implement a Stack data structure using only instances of Queue.

A stack can be implemented using two queues. Let the stack to be implemented be 's' and the queues used to implement it be 'q1' and 'q2'. Stack 's' can be implemented in two ways:

Method 1 (By making push operation costly)
This method makes sure that a newly entered element is always at the front of 'q1', so that the pop operation just dequeues from 'q1'. 'q2' is used to put every new element at the front of 'q1'.

push(s, x) // x is the element to be pushed and s is stack
  1) Enqueue x to q2
  2) One by one dequeue everything from q1 and enqueue to q2.
  3) Swap the names of q1 and q2
// Swapping of names is done to avoid one more movement of all elements
// from q2 to q1.

pop(s)
  1) Dequeue an item from q1 and return it.

Method 2 (By making pop operation costly)
In push operation, the new element is always enqueued to q1. In pop() operation, if q2 is empty then all the elements except the last, are moved to q2. Finally the last element is dequeued from q1 and returned.

push(s,  x)
  1) Enqueue x to q1 (assuming size of q1 is unlimited).

pop(s) 
  1) One by one dequeue everything except the last element from q1 and enqueue to q2.
  2) Dequeue the last item of q1, the dequeued item is result, store it.
  3) Swap the names of q1 and q2
  4) Return the item stored in step 2.
// Swapping of names is done to avoid one more movement of all elements
// from q2 to q1.
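Below is a minimal C++ sketch of Method 2 (pop made costly) using std::queue. The class name StackUsingQueues is an illustrative assumption; pop() assumes the stack is not empty.

#include <iostream>
#include <queue>
#include <utility>
using namespace std;

// Stack implemented with two queues; pop is the costly operation (Method 2)
class StackUsingQueues
{
    queue<int> q1, q2;
public:
    void push(int x)
    {
        q1.push(x);                   // simply enqueue to q1
    }
    int pop()                         // assumes the stack is not empty
    {
        // move everything except the last element from q1 to q2
        while (q1.size() > 1)
        {
            q2.push(q1.front());
            q1.pop();
        }
        int result = q1.front();      // last element of q1 is the stack top
        q1.pop();
        swap(q1, q2);                 // swap names instead of moving elements back
        return result;
    }
};

int main()
{
    StackUsingQueues s;
    s.push(1); s.push(2); s.push(3);
    cout << s.pop() << " ";
    cout << s.pop() << endl;          // prints 3 2
    return 0;
}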


===========================================================================

Implement Queue using Stacks

A queue can be implemented using two stacks. Let queue to be implemented be q and stacks used to implement q be stack1 and stack2. q can be implemented in two ways:

Method 1 (By making enQueue operation costly)
This method makes sure that newly entered element is always at the top of stack 1, so that deQueue operation just pops from stack1. To put the element at top of stack1, stack2 is used.

enQueue(q, x)
  1) While stack1 is not empty, push everything from stack1 to stack2.
  2) Push x to stack1 (assuming size of stacks is unlimited).
  3) Push everything back to stack1.

deQueue(q)
  1) If stack1 is empty then error
  2) Pop an item from stack1 and return it

Method 2 (By making deQueue operation costly)
In this method, in en-queue operation, the new element is entered at the top of stack1. In de-queue operation, if stack2 is empty then all the elements are moved to stack2 and finally top of stack2 is returned.

enQueue(q,  x)
  1) Push x to stack1 (assuming size of stacks is unlimited).

deQueue(q)
  1) If both stacks are empty then error.
  2) If stack2 is empty
       While stack1 is not empty, push everything from stack1 to stack2.
  3) Pop the element from stack2 and return it.

Method 2 is definitely better than Method 1. Method 1 moves all the elements twice in the enQueue operation, while Method 2 (in the deQueue operation) moves the elements once, and only when stack2 is empty.

Implementation of method 2:
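The implementation is not included in the original notes; the following is a minimal C++ sketch of Method 2 using std::stack. The class name QueueUsingStacks is introduced here for illustration.

#include <iostream>
#include <stack>
#include <stdexcept>
using namespace std;

// Queue implemented with two stacks; deQueue is the costly operation (Method 2)
class QueueUsingStacks
{
    stack<int> stack1, stack2;
public:
    void enQueue(int x)
    {
        stack1.push(x);               // new elements always go to stack1
    }
    int deQueue()
    {
        if (stack1.empty() && stack2.empty())
            throw runtime_error("Queue is empty");
        if (stack2.empty())           // refill stack2 only when it is empty
        {
            while (!stack1.empty())
            {
                stack2.push(stack1.top());
                stack1.pop();
            }
        }
        int front = stack2.top();     // top of stack2 is the oldest element
        stack2.pop();
        return front;
    }
};

int main()
{
    QueueUsingStacks q;
    q.enQueue(1); q.enQueue(2); q.enQueue(3);
    cout << q.deQueue() << " ";
    cout << q.deQueue() << endl;      // prints 1 2
    return 0;
}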









#######
Tree:-#
#######

Binary Search Tree | Set 2 (Delete)

We have discussed BST search and insert operations. In this post, the delete operation is discussed. When we delete a node, three possibilities arise.

1) Node to be deleted is leaf: Simply remove from the tree.

              50                            50
           /     \         delete(20)      /   \
          30      70       --------->    30     70
         /  \    /  \                     \    /  \
       20   40  60   80                   40  60   80

2) Node to be deleted has only one child: Copy the child to the node and delete the child

              50                            50
           /     \         delete(30)      /   \
          30      70       --------->    40     70
            \    /  \                          /  \
            40  60   80                       60   80

3) Node to be deleted has two children: Find inorder successor of the node. Copy contents of the inorder successor to the node and delete the inorder successor. Note that inorder predecessor can also be used.

              50                            60
           /     \         delete(50)      /   \
          40      70       --------->    40    70
                 /  \                            \
                60   80                           80

The important thing to note is, inorder successor is needed only when right child is not empty. In this particular case, inorder successor can be obtained by finding the minimum value in right child of the node.

// C program to demonstrate delete operation in binary search tree
#include<stdio.h>
#include<stdlib.h>

struct node
{
    int key;
    struct node *left, *right;
};

// A utility function to create a new BST node
struct node *newNode(int item)
{
    struct node *temp =  (struct node *)malloc(sizeof(struct node));
    temp->key = item;
    temp->left = temp->right = NULL;
    return temp;
}

// A utility function to do inorder traversal of BST
void inorder(struct node *root)
{
    if (root != NULL)
    {
        inorder(root->left);
        printf("%d ", root->key);
        inorder(root->right);
    }
}

/* A utility function to insert a new node with given key in BST */
struct node* insert(struct node* node, int key)
{
    /* If the tree is empty, return a new node */
    if (node == NULL) return newNode(key);

    /* Otherwise, recur down the tree */
    if (key < node->key)
        node->left  = insert(node->left, key);
    else
        node->right = insert(node->right, key);

    /* return the (unchanged) node pointer */
    return node;
}

/* Given a non-empty binary search tree, return the node with minimum
   key value found in that tree. Note that the entire tree does not
   need to be searched. */
struct node * minValueNode(struct node* node)
{
    struct node* current = node;

    /* loop down to find the leftmost leaf */
    while (current->left != NULL)
        current = current->left;

    return current;
}

/* Given a binary search tree and a key, this function deletes the key
   and returns the new root */
struct node* deleteNode(struct node* root, int key)
{
    // base case
    if (root == NULL) return root;

    // If the key to be deleted is smaller than the root's key,
    // then it lies in left subtree
    if (key < root->key)
        root->left = deleteNode(root->left, key);

    // If the key to be deleted is greater than the root's key,
    // then it lies in right subtree
    else if (key > root->key)
        root->right = deleteNode(root->right, key);

    // if key is same as root's key, then This is the node
    // to be deleted
    else
    {
        // node with only one child or no child
        if (root->left == NULL)
        {
            struct node *temp = root->right;
            free(root);
            return temp;
        }
        else if (root->right == NULL)
        {
            struct node *temp = root->left;
            free(root);
            return temp;
        }

        // node with two children: Get the inorder successor (smallest
        // in the right subtree)
        struct node* temp = minValueNode(root->right);

        // Copy the inorder successor's content to this node
        root->key = temp->key;

        // Delete the inorder successor
        root->right = deleteNode(root->right, temp->key);
    }
    return root;
}

// Driver Program to test above functions
int main()
{
    /* Let us create following BST
              50
           /     \
          30      70
         /  \    /  \
       20   40  60   80 */
    struct node *root = NULL;
    root = insert(root, 50);
    root = insert(root, 30);
    root = insert(root, 20);
    root = insert(root, 40);
    root = insert(root, 70);
    root = insert(root, 60);
    root = insert(root, 80);

    printf("Inorder traversal of the given tree \n");
    inorder(root);

    printf("\nDelete 20\n");
    root = deleteNode(root, 20);
    printf("Inorder traversal of the modified tree \n");
    inorder(root);

    printf("\nDelete 30\n");
    root = deleteNode(root, 30);
    printf("Inorder traversal of the modified tree \n");
    inorder(root);

    printf("\nDelete 50\n");
    root = deleteNode(root, 50);
    printf("Inorder traversal of the modified tree \n");
    inorder(root);

    return 0;
}

Output:

Inorder traversal of the given tree
20 30 40 50 60 70 80
Delete 20
Inorder traversal of the modified tree
30 40 50 60 70 80
Delete 30
Inorder traversal of the modified tree
40 50 60 70 80
Delete 50
Inorder traversal of the modified tree
40 60 70 80

Time Complexity: The worst case time complexity of delete operation is O(h) where h is height of Binary Search Tree. In worst case, we may have to travel from root to the deepest leaf node. The height of a skewed tree may become n and the time complexity of delete operation may become O(n)

====================================================================================================


Inorder predecessor and successor for a given key in BST

I recently encountered a question in an interview at an e-commerce company. The interviewer asked the following question:

A BST is given by its root node; the key of each node is an integer. The structure of each node is as follows:
struct Node
{
    int key;
    struct Node *left, *right ;
};

You need to find the inorder successor and predecessor of a given key. In case the given key is not found in BST, then return the two values within which this key will lie.

Following is the algorithm to reach the desired result. It is a recursive method:

Input: root node, key
output: predecessor node, successor node

1. If root is NULL
      then return
2. if key is found then
    a. If its left subtree is not null
        Then predecessor will be the right most
        child of left subtree or left child itself.
    b. If its right subtree is not null
        The successor will be the left most child
        of right subtree or right child itself.
    return
3. If key is smaller than root node
        set the successor as root
        search recursively into left subtree
    else
        set the predecessor as root
        search recursively into right subtree

Following is C++ implementation of the above algorithm:
// C++ program to find predecessor and successor in a BST
#include <iostream>
using namespace std;

// BST Node
struct Node
{
    int key;
    struct Node *left, *right;
};

// This function finds predecessor and successor of key in BST.
// It sets pre and suc as predecessor and successor respectively
void findPreSuc(Node* root, Node*& pre, Node*& suc, int key)
{
    // Base case
    if (root == NULL)  return ;

    // If key is present at root
    if (root->key == key)
    {
        // the maximum value in left subtree is predecessor
        if (root->left != NULL)
        {
            Node* tmp = root->left;
            while (tmp->right)
                tmp = tmp->right;
            pre = tmp ;
        }

        // the minimum value in right subtree is successor
        if (root->right != NULL)
        {
            Node* tmp = root->right ;
            while (tmp->left)
                tmp = tmp->left ;
            suc = tmp ;
        }
        return ;
    }

    // If key is smaller than root's key, go to left subtree
    if (root->key > key)
    {
        suc = root ;
        findPreSuc(root->left, pre, suc, key) ;
    }
    else // go to right subtree
    {
        pre = root ;
        findPreSuc(root->right, pre, suc, key) ;
    }
}

// A utility function to create a new BST node
Node *newNode(int item)
{
    Node *temp =  new Node;
    temp->key = item;
    temp->left = temp->right = NULL;
    return temp;
}

/* A utility function to insert a new node with given key in BST */
Node* insert(Node* node, int key)
{
    if (node == NULL) return newNode(key);
    if (key < node->key)
        node->left  = insert(node->left, key);
    else
        node->right = insert(node->right, key);
    return node;
}

// Driver program to test above function
int main()
{
    int key = 65;    //Key to be searched in BST

   /* Let us create following BST
              50
           /     \
          30      70
         /  \    /  \
       20   40  60   80 */
    Node *root = NULL;
    root = insert(root, 50);
    insert(root, 30);
    insert(root, 20);
    insert(root, 40);
    insert(root, 70);
    insert(root, 60);
    insert(root, 80);


    Node* pre = NULL, *suc = NULL;

    findPreSuc(root, pre, suc, key);
    if (pre != NULL)
      cout << "Predecessor is " << pre->key << endl;
    else
      cout << "No Predecessor";

    if (suc != NULL)
      cout << "Successor is " << suc->key;
    else
      cout << "No Successor";
    return 0;
}

Output:

Predecessor is 60
Successor is 70

================================================================================================================


A program to check if a binary tree is BST or not

A binary search tree (BST) is a node based binary tree data structure which has the following properties.
• The left subtree of a node contains only nodes with keys less than the node’s key.
• The right subtree of a node contains only nodes with keys greater than the node’s key.
• Both the left and right subtrees must also be binary search trees.

From the above properties it naturally follows that:
• Each node (item in the tree) has a distinct key.




METHOD 1 (Simple but Wrong)
Following is a simple program. For each node, check if left node of it is smaller than the node and right node of it is greater than the node.
int isBST(struct node* node)
{
  if (node == NULL)
    return 1;
    
  /* false if left is > than node */
  if (node->left != NULL && node->left->data > node->data)
    return 0;
    
  /* false if right is < than node */
  if (node->right != NULL && node->right->data < node->data)
    return 0;
  
  /* false if, recursively, the left or right is not a BST */
  if (!isBST(node->left) || !isBST(node->right))
    return 0;
    
  /* passing all that, it's a BST */
  return 1;
}

This approach is wrong as this will return true for below binary tree (and below tree is not a BST because 4 is in left subtree of 3)

(Figure: a binary tree with root 3 whose left child 2 has right child 4; every local parent/child check passes, yet 4 lies in the left subtree of 3, so the tree is not a BST.)






METHOD 2 (Correct but not efficient)
For each node, check if max value in left subtree is smaller than the node and min value in right subtree greater than the node.
/* Returns true if a binary tree is a binary search tree */
int isBST(struct node* node)
{
  if (node == NULL)
    return(true);
    
  /* false if the max of the left is > than us */
  if (node->left!=NULL && maxValue(node->left) > node->data)
    return(false);
    
  /* false if the min of the right is <= than us */
  if (node->right!=NULL && minValue(node->right) < node->data)
    return(false);
  
  /* false if, recursively, the left or right is not a BST */
  if (!isBST(node->left) || !isBST(node->right))
    return(false);
    
  /* passing all that, it's a BST */
  return(true);
}

It is assumed that you have helper functions minValue() and maxValue() that return the min or max int value from a non-empty tree



METHOD 3 (Correct and Efficient)
Method 2 above runs slowly since it traverses over some parts of the tree many times. A better solution looks at each node only once. The trick is to write a utility helper function isBSTUtil(struct node* node, int min, int max) that traverses down the tree keeping track of the narrowing min and max allowed values as it goes, looking at each node only once. The initial values for min and max should be INT_MIN and INT_MAX — they narrow from there.

/* Returns true if the given tree is a binary search tree
 (efficient version). */
int isBST(struct node* node)
{
  return(isBSTUtil(node, INT_MIN, INT_MAX));
}

/* Returns true if the given tree is a BST and its
 values are >= min and <= max. */
int isBSTUtil(struct node* node, int min, int max)

Implementation:

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

/* A binary tree node has data, pointer to left child
   and a pointer to right child */
struct node
{
    int data;
    struct node* left;
    struct node* right;
};

int isBSTUtil(struct node* node, int min, int max);

/* Returns true if the given tree is a binary search tree
 (efficient version). */
int isBST(struct node* node)
{
  return(isBSTUtil(node, INT_MIN, INT_MAX));
}

/* Returns true if the given tree is a BST and its
   values are >= min and <= max. */
int isBSTUtil(struct node* node, int min, int max)
{

  /* an empty tree is BST */
  if (node==NULL)
     return 1;
      
  /* false if this node violates the min/max constraint */
  if (node->data < min || node->data > max)
     return 0;

  /* otherwise check the subtrees recursively,
   tightening the min or max constraint */
  return
    isBSTUtil(node->left, min, node->data-1) &&  // Allow only distinct values
    isBSTUtil(node->right, node->data+1, max);  // Allow only distinct values
}

/* Helper function that allocates a new node with the
   given data and NULL left and right pointers. */
struct node* newNode(int data)
{
  struct node* node = (struct node*)
                       malloc(sizeof(struct node));
  node->data = data;
  node->left = NULL;
  node->right = NULL;

  return(node);
}

/* Driver program to test above functions*/
int main()
{
  struct node *root = newNode(4);
  root->left        = newNode(2);
  root->right       = newNode(5);
  root->left->left  = newNode(1);
  root->left->right = newNode(3);

  if(isBST(root))
    printf("Is BST");
  else
    printf("Not a BST");
    
  getchar();
  return 0;
}


Time Complexity: O(n)
Auxiliary Space : O(1) if Function Call Stack size is not considered, otherwise O(n)

METHOD 4(Using In-Order Traversal)
Thanks to LJW489 for suggesting this method.
1) Do In-Order Traversal of the given tree and store the result in a temp array.
2) Check if the temp array is sorted in ascending order. If it is, then the tree is a BST.

Time Complexity: O(n)

We can avoid the use of Auxiliary Array. While doing In-Order traversal, we can keep track of previously visited node. If the value of the currently visited node is less than the previous value, then tree is not BST. Thanks to ygos for this space optimization.

bool isBST(struct node* root)
{
    static struct node *prev = NULL;
    
    // traverse the tree in inorder fashion and keep track of prev node
    if (root)
    {
        if (!isBST(root->left))
          return false;

        // Allows only distinct valued nodes
        if (prev != NULL && root->data <= prev->data)
          return false;

        prev = root;

        return isBST(root->right);
    }

    return true;
}

=====================================================================

Find k-th smallest element in BST (Order Statistics in BST)
-----------------------------------------------------------
Method 1: Using Inorder Traversal.

Inorder traversal of a BST retrieves the elements of the tree in sorted order. The inorder traversal uses a stack to store the nodes still to be explored (a threaded tree avoids the stack and recursion for traversal). The idea is to keep track of the visited elements, which participate in the order statistics. A sketch of this idea is provided below.
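A minimal C++ sketch of this idea, using a recursive inorder traversal with a counter instead of an explicit stack. The Node structure and the name kthSmallest are assumptions made for illustration.

#include <iostream>
using namespace std;

struct Node
{
    int key;
    Node *left, *right;
    Node(int k) : key(k), left(NULL), right(NULL) {}
};

// Inorder traversal; decrements k at every visited node and
// returns the node where the count reaches zero (the k-th smallest)
Node* kthSmallest(Node* root, int &k)
{
    if (root == NULL) return NULL;
    Node* leftResult = kthSmallest(root->left, k);
    if (leftResult != NULL) return leftResult;   // answer already found in left subtree
    if (--k == 0) return root;                   // current node is the k-th smallest
    return kthSmallest(root->right, k);
}

int main()
{
    /* BST:      20
                /  \
               8    22   */
    Node* root = new Node(20);
    root->left = new Node(8);
    root->right = new Node(22);

    int k = 2;
    Node* res = kthSmallest(root, k);
    if (res != NULL)
        cout << res->key << endl;                // prints 20 (2nd smallest)
    return 0;
}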

Time complexity: O(n), where n is the total number of nodes in the tree.

Method 2: Augmented  Tree Data Structure.

The idea is to maintain rank of each node. We can keep track of elements in a subtree of any node while building the tree. Since we need K-th smallest element, we can maintain number of elements of left subtree in every node.

Assume that the root is having N nodes in its left subtree. If K = N + 1, root is K-th node. If K < N, we will continue our search (recursion) for the Kth smallest element in the left subtree of root. If K > N + 1, we continue our search in the right subtree for the (K – N – 1)-th smallest element. Note that we need the count of elements in left subtree only.

Time complexity: O(h) where h is height of tree.
===============================================================================



====================================================================================
There are 3 types of access modifiers in C++ namely Public, Protected and Private.

Public :- All the variables or functions declared as public can be accessed by anyone - object of the same class, object of the inheriting class as well as object of any other class.

Protected :- All the variables and functions can be accessed only by object of the same class and object of the inheriting class.

Private :- All the variables and functions can be accessed only by the object of the same class.
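A short sketch illustrating the three access levels; the class and member names below are made up for illustration.

#include <iostream>
using namespace std;

class BaseClass
{
public:
    int pub;        // accessible everywhere
protected:
    int prot;       // accessible in BaseClass and classes derived from it
private:
    int priv;       // accessible only inside BaseClass
};

class DerivedClass : public BaseClass
{
public:
    void access()
    {
        pub  = 1;   // OK: public member
        prot = 2;   // OK: protected member is visible in the derived class
        // priv = 3;   // error: private member of BaseClass is not accessible here
    }
};

int main()
{
    DerivedClass d;
    d.pub = 10;       // OK: public member accessible from outside
    // d.prot = 20;   // error: protected member not accessible from outside
    // d.priv = 30;   // error: private member not accessible from outside
    d.access();
    return 0;
}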

========================================================================================
          
======================================================================================================
What are main features of OOP?
Encapsulation
Polymorphism
Inheritance

What is encapsulation?
Encapsulation is referred to one of the following two notions.
1) Data hiding: A language feature to restrict access to members of an object. For example, private and protected members in C++.
2) Bundling of data and methods together: Data and methods that operate on that data are bundled together.

What is Polymorphism? How is it supported by C++?
Polymorphism means that some code or operations or objects behave differently in different contexts. In C++,  following features support polymorphism.

Compile Time Polymorphism: Compile time polymorphism means the compiler knows which function should be called when a polymorphic call is made. C++ supports compile time polymorphism through features like templates, function overloading and default arguments.

Run Time Polymorphism: Run time polymorphism is supported by virtual functions. The idea is, virtual functions are called according to the type of object pointed or referred, not according to the type of pointer or reference. In other words, virtual functions are resolved late, at runtime.
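A small sketch of run-time polymorphism: the virtual call is resolved by the type of the object pointed to, not by the pointer type. The class names below are illustrative, not from the original notes.

#include <iostream>
using namespace std;

class Animal
{
public:
    virtual void speak() { cout << "Some sound" << endl; }
    virtual ~Animal() {}
};

class Dog : public Animal
{
public:
    void speak() { cout << "Woof" << endl; }   // overrides the virtual function
};

int main()
{
    Animal* a = new Dog();
    a->speak();        // prints "Woof": resolved at runtime by the object's actual type
    delete a;
    return 0;
}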

What is Inheritance? What is the purpose?
The idea of inheritance is simple, a class is based on another class and uses data and implementation of the other class.
The purpose of inheritance is Code Reuse.

====================================================
Pure Virtual Functions and Abstract Classes in C++
====================================================

Sometimes the implementation of all functions cannot be provided in a base class because we don't know the implementation. Such a class is called an abstract class. For example, let Shape be a base class. We cannot provide an implementation of the function draw() in Shape, but we know every derived class must have an implementation of draw(). Similarly, an Animal class doesn't have an implementation of move() (assuming that all animals move), but all animals must know how to move. We cannot create objects of abstract classes.

A pure virtual function (or abstract function) in C++ is a virtual function for which we don’t have implementation, we only declare it. A pure virtual function is declared by assigning 0 in declaration. See the following example.
// An abstract class
class Test
{  
    // Data members of class
public:
    // Pure Virtual Function
    virtual void show() = 0;
  
   /* Other members */
};

A complete example:
A pure virtual function is implemented by classes which are derived from an abstract class. Following is a simple example to demonstrate the same.
#include<iostream>
using namespace std;

class Base
{
   int x;
public:
    virtual void fun() = 0;
    int getX() { return x; }
};

// This class inherits from Base and implements fun()
class Derived: public Base
{
    int y;
public:
    void fun() { cout << "fun() called"; }
};

int main(void)
{
    Derived d;
    d.fun();
    return 0;
}

Output:

fun() called

======================================================================================================
Some Interesting Facts:
1) A class is abstract if it has at least one pure virtual function.
In the following example, Test is an abstract class because it has a pure virtual function show().
// pure virtual functions make a class abstract
#include<iostream>
using namespace std;

class Test
{
   int x;
public:
    virtual void show() = 0;
    int getX() { return x; }
};

int main(void)
{
    Test t;
    return 0;
}

Output:

Compiler Error: cannot declare variable 't' to be of abstract
 type 'Test' because the following virtual functions are pure
within 'Test': note:     virtual void Test::show()

2) We can have pointers and references of abstract class type.
For example the following program works fine.
#include<iostream>
using namespace std;

class Base
{
public:
    virtual void show() = 0;
};

class Derived: public Base
{
public:
    void show() { cout << "In Derived \n"; }
};

int main(void)
{
    Base *bp = new Derived();
    bp->show();
    return 0;
}

Output:

In Derived

3) If we do not override the pure virtual function in derived class, then derived class also becomes abstract class.
The following example demonstrates the same.
#include<iostream>
using namespace std;
class Base
{
public:
    virtual void show() = 0;
};

class Derived : public Base { };

int main(void)
{
  Derived d;
  return 0;
}

Compiler Error: cannot declare variable 'd' to be of abstract type
'Derived'  because the following virtual functions are pure within
'Derived': virtual void Base::show()

4) An abstract class can have constructors.
For example, the following program compiles and runs fine.
#include<iostream>
using namespace std;

// An abstract class with constructor
class Base
{
protected:
   int x;
public:
  virtual void fun() = 0;
  Base(int i) { x = i; }
};

class Derived: public Base
{
    int y;
public:
    Derived(int i, int j):Base(i) { y = j; }
    void fun() { cout << "x = " << x << ", y = " << y; }
};

int main(void)
{
    Derived d(4, 5);
    d.fun();
    return 0;
}

Output:

x = 4, y = 5
===========================================================================================================
Inline Functions in C++
===========================================================================================================
Inline functions are one of the important features of C++. So, let's first understand why inline functions are used and what the purpose of an inline function is.

When the program executes the function call instruction the CPU stores the memory address of the instruction following the function call, copies the arguments of the function on the stack and finally transfers control to the specified function. The CPU then executes the function code, stores the function return value in a predefined memory location/register and returns control to the calling function. This can become overhead if the execution time of function is less than the switching time from the caller function to called function (callee). For functions that are large and/or perform complex tasks, the overhead of the function call is usually insignificant compared to the amount of time the function takes to run. However, for small, commonly-used functions, the time needed to make the function call is often a lot more than the time needed to actually execute the function’s code. This overhead occurs for small functions because execution time of small function is less than the switching time.

C++ provides inline functions to reduce the function call overhead. An inline function is a function that is expanded in line when it is called. When the inline function is called, the whole code of the inline function gets inserted or substituted at the point of the inline function call. This substitution is performed by the C++ compiler at compile time. An inline function may increase efficiency if it is small.
The syntax for defining the function inline is:

inline return-type function-name(parameters)
{
    // function code
}

Remember, inlining is only a request to the compiler, not a command. The compiler can ignore the request for inlining. The compiler may not perform inlining in circumstances like:
1) If a function contains a loop. (for, while, do-while)
2) If a function contains static variables.
3) If a function is recursive.
4) If a function return type is other than void, and the return statement doesn’t exist in function body.
5) If a function contains switch or goto statement.

Inline functions provide following advantages:
1) Function call overhead doesn’t occur.
2) It also saves the overhead of push/pop variables on the stack when function is called.
3) It also saves overhead of a return call from a function.
4) When you inline a function, you may enable compiler to perform context specific optimization on the body of function. Such optimizations are not possible for normal function calls. Other optimizations can be obtained by considering the flows of calling context and the called context.
5) Inline function may be useful (if it is small) for embedded systems because inline can yield less code than the function call preamble and return.

Inline function disadvantages:
1) The added variables from the inlined function consume additional registers. When the inline function body is substituted at the point of the function call, the variables used by the function are also inserted there, so the number of registers needed for those variables increases. If the number of variables grows drastically after inlining, this can create pressure on register utilization.

2) If you use too many inline functions then the size of the binary executable file will be large, because of the duplication of same code.

3) Too much inlining can also reduce your instruction cache hit rate, thus reducing the speed of instruction fetch from that of cache memory to that of primary memory.

4) Inline functions may increase compile time overhead: if someone changes the code inside an inline function, all the calling locations have to be recompiled, because the compiler needs to substitute the new code everywhere to reflect the change; otherwise the old functionality would continue to be used.

5) Inline functions may not be useful for many embedded systems, because in embedded systems code size is often more important than speed.

6) Inline functions might cause thrashing because inlining might increase size of the binary executable file. Thrashing in memory causes performance of computer to degrade.

The following program demonstrates the use of an inline function.
#include <iostream>
using namespace std;
inline int cube(int s)
{
    return s*s*s;
}
int main()
{
    cout << "The cube of 3 is: " << cube(3) << "\n";
    return 0;
} //Output: The cube of 3 is: 27

======================================
Inline function and classes:
======================================
It is also possible to define the inline function inside the class. In fact, all the functions defined inside the class are implicitly inline. Thus, all the restrictions of inline functions are also applied here. If you need to explicitly declare inline function in the class then just declare the function inside the class and define it outside the class using inline keyword.
For example:
class S
{
public:
    inline int square(int s) // redundant use of inline
    {
        // this function is automatically inline
        // function body
    }
};

The above style is considered bad programming style. The best style is to write only the prototype of the function inside the class and specify it as inline in the function definition.
For example:
class S
{
public:
    int square(int s); // declare the function
};

inline int S::square(int s) // use inline prefix
{
    return s * s;
}

The following program demonstrates this concept:
#include <iostream>
using namespace std;
class operation
{
    int a,b,add,sub,mul;
    float div;
public:
    void get();
    void sum();
    void difference();
    void product();
    void division();
};
inline void operation :: get()
{
    cout << "Enter first value:";
    cin >> a;
    cout << "Enter second value:";
    cin >> b;
}

inline void operation :: sum()
{
    add = a+b;
    cout << "Addition of two numbers: " << a+b << "\n";
}

inline void operation :: difference()
{
    sub = a-b;
    cout << "Difference of two numbers: " << a-b << "\n";
}

inline void operation :: product()
{
    mul = a*b;
    cout << "Product of two numbers: " << a*b << "\n";
}

inline void operation ::division()
{
    div=a/b;
    cout<<"Division of two numbers: "<<a/b<<"\n" ;
}

int main()
{
    cout << "Program using inline function\n";
    operation s;
    s.get();
    s.sum();
    s.difference();
    s.product();
    s.division();
    return 0;
}

Output:

Enter first value: 45
Enter second value: 15
Addition of two numbers: 60
Difference of two numbers: 30
Product of two numbers: 675
Division of two numbers: 3
==============================================================================================================================
What is wrong with macro?
Readers familiar with the C language know that C uses macros. The preprocessor replaces all macro calls directly with the macro code. It is recommended to always use inline functions instead of macros. According to Dr. Bjarne Stroustrup, the creator of C++, macros are almost never necessary in C++ and they are error prone. There are some problems with the use of macros in C++. A macro cannot access private members of a class. Macros look like function calls but they actually are not.
Example:
#include <iostream>
using namespace std;
class S
{
    int m;
public:
#define MAC(S::m)    // error
};

The C++ compiler checks the argument types of inline functions and necessary conversions are performed correctly. A preprocessor macro is not capable of doing this. Another difference is that macros are managed by the preprocessor while inline functions are managed by the C++ compiler.

Remember: It is true that all the functions defined inside a class are implicitly inline and the C++ compiler will perform inline calls of these functions, but the C++ compiler cannot perform inlining if the function is virtual. The reason is that a call to a virtual function is resolved at runtime instead of compile time. Virtual means wait until runtime, and inline means during compilation; if the compiler doesn't know which function will be called, how can it perform inlining?

One other thing to remember is that it is only useful to make a function inline if the time spent during the function call is large compared to the function body execution time. An example where an inline function has no effect at all:
inline void show()
{
    cout << "value of S = " << S << endl;
}

The above function takes a relatively long time to execute. In general, a function which performs input/output (I/O) operations shouldn't be defined as inline, because it spends a considerable amount of time in I/O. Technically, inlining of the show() function is of limited value because the time the I/O statement takes far exceeds the overhead of a function call.

Depending upon the compiler you are using, the compiler may show you a warning if the function is not expanded inline. Programming languages like Java and C# don't support inline functions.
But in Java, the compiler can perform inlining when a small final method is called, because final methods can't be overridden by subclasses and a call to a final method is resolved at compile time. In C#, the JIT compiler can also optimize code by inlining small function calls (like replacing the body of a small function when it is called in a loop).

The last thing to keep in mind is that inline functions are a valuable feature of C++. Appropriate use of inline functions can provide a performance enhancement, but if inline functions are used arbitrarily they won't give better results. In other words, don't expect better performance from inlining every function. It is better to keep inline functions as small as possible.

=========================================================================================================================================
Templates in C++
=================================
Template is simple and yet very powerful tool in C++. The simple idea is to pass data type as a parameter so that we don’t need to write same code for different data types. For example a software company may need sort() for different data types. Rather than writing and maintaining the multiple codes, we can write one sort() and pass data type as a parameter.

C++ adds two new keywords to support templates: ‘template’ and ‘typename’. The second keyword can always be replaced by keyword ‘class’.

Function Templates We write a generic function that can be used for different data types. Examples of function templates are sort(), max(), min(), printArray()
#include <iostream>
using namespace std;

// One function works for all data types.  This would work
// even for user defined types if operator '>' is overloaded
template <typename T>
T myMax(T x, T y)
{
   return (x > y)? x: y;
}

int main()
{
  cout << myMax<int>(3, 7) << endl;  // Call myMax for int
  cout << myMax<double>(3.0, 7.0) << endl; // call myMax for double
  cout << myMax<char>('g', 'e') << endl;   // call myMax for char

  return 0;
}

Output:

7
7
g

Class Templates Like function templates, class templates are useful when a class defines something that is independent of the data type. They can be useful for classes like LinkedList, BinaryTree, Stack, Queue, Array, etc.

Following is a simple example of template Array class.
#include <iostream>
using namespace std;

template <typename T>
class Array {
private:
    T *ptr;
    int size;
public:
    Array(T arr[], int s);
    void print();
};

template <typename T>
Array<T>::Array(T arr[], int s) {
    ptr = new T[s];
    size = s;
    for(int i = 0; i < size; i++)
        ptr[i] = arr[i];
}

template <typename T>
void Array<T>::print() {
    for (int i = 0; i < size; i++)
        cout<<" "<<*(ptr + i);
    cout<<endl;
}

int main() {
    int arr[5] = {1, 2, 3, 4, 5};
    Array<int> a(arr, 5);
    a.print();
    return 0;
}

Output:

 1 2 3 4 5

 -----------------------------------------------------------------
Can there be more than one arguments to templates?
Yes, like normal parameters, we can pass more than one data types as arguments to templates. The following example demonstrates the same.
#include<iostream>
using namespace std;

template<class T, class U>
class A  {
    T x;
    U y;
public:
    A() {    cout<<"Constructor Called"<<endl;   }
};

int main()  {
   A<char, char> a;
   A<int, double> b;
   return 0;
}

Output:

Constructor Called
Constructor Called
------------------------------------------------------------------------
Can we specify default value for template arguments?
Yes, like normal parameters, we can specify default arguments to templates. The following example demonstrates the same.
#include<iostream>
using namespace std;

template<class T, class U = char>
class A  {
public:
    T x;
    U y;
    A() {   cout<<"Constructor Called"<<endl;   }
};

int main()  {
   A<char> a;  // This will call A<char, char>  
   return 0;
}

Output:

Constructor Called
-------------------------------------------------------------------------
What is the difference between function overloading and templates?
Both function overloading and templates are examples of the polymorphism feature of OOP. Function overloading is used when multiple functions do similar operations; templates are used when multiple functions do identical operations.
-------------------------------------------------------------------------
How do templates work?
Templates are expanded at compile time. This is similar to macros. The difference is that the compiler does type checking before template expansion. The idea is simple: the source code contains only one function/class, but the compiled code may contain multiple copies of the same function/class.
--------------------------------------------------------------------------
What happens when there is static member in a template class/function?
Each instance of a template contains its own static variable. See Templates and Static variables for more details.

----------------------------------------
Templates and Static variables in C++
----------------------------------------
Function templates and static variables:
Each instantiation of function template has its own copy of local static variables. For example, in the following program there are two instances: void fun(int ) and void fun(double ). So two copies of static variable i exist.
#include <iostream>

using namespace std;

template <typename T>
void fun(const T& x)
{
  static int i = 10;
  cout << ++i;
  return;
}

int main()
{   
  fun<int>(1);  // prints 11
  cout << endl;
  fun<int>(2);  // prints 12
  cout << endl;
  fun<double>(1.1); // prints 11
  cout << endl;
  getchar();
  return 0;
}

Output of the above program is:

  11
  12
  11
----------------------------------------------------------
Class templates and static variables:
The rule for class templates is the same as for function templates:
Each instantiation of a class template has its own copy of static member variables. For example, in the following program there are two instances, Test<int> and Test<double>, so two copies of the static variable count exist.
#include <iostream>

using namespace std;

template <class T> class Test
{
private:
  T val;
public:
  static int count;
  Test()
  {
    count++;
  }
  // some other stuff in class
};

template<class T>
int Test<T>::count = 0;

int main()
{
  Test<int> a;  // value of count for Test<int> is 1 now
  Test<int> b;  // value of count for Test<int> is 2 now
  Test<double> c;  // value of count for Test<double> is 1 now
  cout << Test<int>::count   << endl;  // prints 2 
  cout << Test<double>::count << endl; //prints 1
   
  getchar();
  return 0;
}

Output of the above program is:

  2
  1

--------------------------------------------------------------------------------
What is template specialization?
Template specialization allows us to have different code for a particular data type; a minimal example is given below.
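An illustrative sketch (not from the original text): the generic template is used for every type except the one for which a specialized version is provided.

#include <iostream>
using namespace std;

// Generic version
template <class T>
void fun(T x)
{
    cout << "The generic template fun(): " << x << endl;
}

// Specialized version for int
template <>
void fun(int x)
{
    cout << "Specialized template for int: " << x << endl;
}

int main()
{
    fun<char>('a');    // uses the generic version
    fun<int>(10);      // uses the int specialization
    fun<float>(1.5f);  // uses the generic version
    return 0;
}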


======================================================================================================================================
STL:-
---------------------------------------------------------------
C++ provides many standard libraries to use different data structures.

Hopefully you already understand the concept of C++ templates, which we discussed in an earlier chapter. The C++ STL (Standard Template Library) is a powerful set of C++ template classes that provides general-purpose templatized classes and functions implementing many popular and commonly used algorithms and data structures like vectors, lists, queues, and stacks.

At the core of the C++ Standard Template Library are following three well-structured components:
Containers: Containers are used to manage collections of objects of a certain kind. There are several different types of containers like deque, list, vector, map etc.
Algorithms: Algorithms act on containers. They provide the means by which you will perform initialization, sorting, searching, and transforming of the contents of containers.
Iterators: Iterators are used to step through the elements of collections of objects. These collections may be containers or subsets of containers.

We will discuss all three C++ STL components in the next chapter while discussing the C++ Standard Library. For now, keep in mind that all three components have a rich set of pre-defined functions which help us perform complicated tasks very easily. A small combined example follows.
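A small sketch showing the three components together: a vector (container), sort() (algorithm) and an iterator stepping through the result.

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<int> v;                     // container
    v.push_back(30);
    v.push_back(10);
    v.push_back(20);

    sort(v.begin(), v.end());          // algorithm acting on the container

    // iterator used to step through the elements
    for (vector<int>::iterator it = v.begin(); it != v.end(); ++it)
        cout << *it << " ";            // prints 10 20 30
    cout << endl;
    return 0;
}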

===================================
Static and dynamic cast:-
------------------------
Static Cast

static_cast doesn't do any run time checking of the types involved, which means that unless you know what you are doing, it can be very unsafe. It also only allows casting between related types, such as pointers or references between Base and Derived, or between fundamental types, such as long to int or int to float.

It does not allow casts between fundamentally different types, such as a cast between a BaseA and BaseB if they are not related. This will result in a compile time error.

Dynamic Cast

dynamic_cast will do run time checking as well, and if the instance cannot be cast into another derived type, it will return a null pointer.

Note: dynamic_cast works only when the source type is polymorphic. Otherwise, the compiler will give an error such as: error: cannot dynamic_cast 'b' (of type 'class base*') to type 'class inh1*' (source type is not polymorphic)

Examples

If we have the following classes

class B { public: virtual ~B() {} };  // B needs at least one virtual function for dynamic_cast

class D : public B {};

then you can do the following

B* b = new D();
D* d1 = static_cast<D*>(b);  // Valid! d1 is a valid and correct pointer to a D
D* d2 = dynamic_cast<D*>(b); // Valid! d2 is a valid and correct pointer to a D

In this example both pointers d1 and d2 will point to a correct typed version of b

The problem comes in the following example:

B* b = new B();
D* d1 = static_cast<D*>(b);  // Invalid!
D* d2 = dynamic_cast<D*>(b); // Valid, but d2 is now a null pointer

Now d1 points to memory that actually holds a B, but the code will treat it as a D; accessing D-specific members through d1 can lead to memory issues and corruption. d2, on the other hand, will be a null pointer and can be checked for and handled correctly.

Because dynamic_cast performs runtime type checking it is also slower.


Since dynamic_cast can incur extra runtime cost, it can be turned off by instructing the compiler not to include Runtime Type Information (RTTI).

There are also other cast operators.

================================================================

Shell Scripting Questions:-


---------------------------------

#UNIX operating system environment: Hardware -->Kernel--->shell --->user
#List of commonly used UNIX shells:
· The Bourne Shell (sh)
· The C Shell (csh or tsch)
· The Bourne Again Shell (bash)
· The Korn Shell (ksh)
-------------------------------------------------------------------------
$ cat ReadDatdb.sh
filename="$1"
IFS=","
while read f1 f2 f3
do
echo "Name is : $f1"
echo "Age is : $f2"
echo "state is : $f3"
done < "$filename"
--------------------------------------------------------------------------
$ awk -F: '{print $1}' /etc/shadow | uniq -u
--------------------------------------------------------------------------
#!/bin/ksh
myfile=$1
if [ -f $myfile ]
then
echo "$myfile exists"
fi
exit 0
---------------------------------------------------------------------------
The following test command expression would be used to verify the existence of a specified directory, which is stored in the variable $mydir:

if [ -d $mydir ]
then
command(s)
fi
If the value stored in the variable mydir exists and is a directory file, the command(s) located between then and fi will be executed.
---------------------------------------------------------------------------
How would you use AWK to extract the sixth field from a line of text containing colon (:) delimited fields that is stored in a variable called passwd_line?
echo $passwd_line | awk -F: '{ print $6 }'
---------------------------------------------------------------------------
What does 2>&1 mean and when is it typically used?
The 2>&1 is typically used when running a command with its standard output redirected to a file. For example, consider:
command > file 2>&1
Anything that is sent to command's standard output will be redirected to "file" in this example.
The 2 (from 2>&1) is the UNIX file descriptor used by standard error (stderr). Therefore, 2>&1 causes the shell to send anything headed to standard error to the same place messages to standard output (1) are sent...which is "file" in the above example.
----------------------------------------------------------------------------
What are some ways to debug a shell script problem?
Although this is somewhat dependent on what the problem is, there are a few commonly used methods for debugging shell script problems.
One method, which is frequently used across all programming languages, is to insert some debug statements within the shell script to output information to help pinpoint where and why the problem is being introduced.
Another method specific to shell scripting is using "set -x" to enable debugging.
Details:
Consider the following shell script example (notice that "set -x" is commented out at this time):
#!/bin/ksh
#set -x
i=1
while [ $i -lt 6 ]
do
print "in loop iteration: $i"
((i+=1))
done
exit
-----------------------------------------------------------------------------
What is a shell?
A shell is an interface between the user and the kernel. Even though there can be only one kernel, a system can have many shells running simultaneously. Whenever a user enters a command through the keyboard, the shell communicates with the kernel to execute it and then displays the output to the user.
------------------------------------------------------------------------------
Given a file, replace all occurrences of the word “ABC” with “DEF” from the 5th line till the end, in only those lines that contain the word “MNO”
sed '5,$ {/MNO/s/ABC/DEF/g}' file1
(the /g flag is needed so that every occurrence on a line is replaced, not just the first)
------------------------------------------------------------------------------
Write a script to print the first 10 elements of the Fibonacci series.
#!/bin/sh
a=1
b=1
echo $a
echo $b
for i in 1 2 3 4 5 6 7 8
do
c=$(($a+$b))
echo $c
a=$b
b=$c
done
--------------------------------------------------------------------------------
What are the 3 standard streams in Linux?

0 – Standard Input
1 – Standard Output
2 – Standard Error
--------------------------------------------------------------------------------
How will you emulate wc -l using awk?
awk 'END {print NR}' fileName
--------------------------------------------------------------------------------
How will you print the login names of all users on a system?
The /etc/passwd file has all the users listed (so does /etc/shadow, but it is readable only by root).
awk -F ':' '{print $1}' /etc/passwd | uniq -u
--------------------------------------------------------------------------------
How to set an array in Linux?
Syntax in ksh:
set -A arrayname element1 element2 ..... elementn
In bash:
arrayname=(element1 element2 .... elementn)
---------------------------------------------------------------------------------
Write down the syntax of a “for” loop
Syntax:
for iterator in element1 element2 ... elementn
do
execute commands
done
---------------------------------------------------------------------------------
Write a command sequence to find all the files modified in less than 2 days and print the record count of each.
find . -mtime -2 -exec wc -l {} \;
---------------------------------------------------------------------------------
How can I set the default rwx permission for all users on every file which is created in the current shell?

We can use:
umask 000
(umask is subtracted from the default creation mode - 666 for files, 777 for directories - so a umask of 000 leaves all permissions on; umask 777 would remove them all. Note that new regular files are never created with execute permission by default.)
---------------------------------------------------------------------------------
What are the four fundamental components of every file system on Linux?
boot block, super block, inode block and data block
#What is a boot block?
This block contains a small program called the “Master Boot Record” (MBR) which loads the kernel during system boot up.

#What is a super block?
The super block contains all the information about the file system, like the size of the file system, the block size used by it, the number of free data blocks, and the lists of free inodes and data blocks.

#What is an inode block?
This block contains the inode for every file of the file system along with all the file attributes except its name.

-----------------------------------------------------------------------------------
How do we create command aliases in shell?
alias Aliasname="Command whose alias is to be created"
-----------------------------------------------------------------------------------
What are “c” and “b” permission fields of a file?
The “c” and “b” permission fields are generally associated with a device file. They specify whether a file is a character special file or a block special file.

What is the use of a shebang line?
The shebang line at the top of each script determines the location of the interpreter (engine) to be used to execute the script.

--------------------------------------------------------------------------------------------------------------------------------
#What are the different type of variables used in a shell Script ?
Ans: In a shell script we can use two types of variables :
System defined variables
User defined variables
System defined variables are defined or created by Operating System(Linux) itself. These variables are generally defined in Capital Letters and can be viewed by “set” command.
User defined variables are created or defined by system users and the values of variables can be viewed by using the command “echo $<Name_of_Variable>”
----------------------------------------------------------------------------------------------------------------------------------
How to compare numbers in Linux shell Scripting ?
Ans: test command is used to compare numbers in if-then statement. Example is shown below :
#!/bin/bash
x=10
y=20
if [ $x -gt $y ]
then
echo "x is greater than y"
else
echo "y is greater than x"
fi
-------------------------------------------------------------------------------------------------------------------------------------
Tell me the Syntax of “Case statement” in Linux shell scripting ?

Ans: The basic syntax is shown below :

case word in
value1)
command1
command2
…..
last_command
;;
value2)
command1
command2
……
last_command
;;
esac
----------------------------------------------------------------------------------------------------------------------------------------
What is the basic syntax of while loop in shell scripting ?
Ans: Like the for loop, the while loop repeats its block of commands a number of times. Unlike the for loop, however, the while loop iterates until its while condition is no longer true. The basic syntax is :
while [ test_condition ]
do
commands…
done
-----------------------------------------------------------------------------------------------------------------------------------------
What is the use of “#!/bin/bash” ?
Ans: #!/bin/bash is the first line of a shell script, known as the shebang line, where the # symbol is called hash and ‘!’ is called bang. It indicates that the script is to be executed via /bin/bash.
--------------------------------------------------------------------------------------------------------------------------------------------
What is the syntax of for loop in shell script ?
Ans: Basic Syntax of for loop is given below :

for variables in list_of_items
do
command1
command2
….
last_command
done
----------------------------------------------------------------------------------------------------------------------------------------------
How to debug a shell script ?
Ans: A shell script can be debugged by executing it with the ‘-x’ option ( sh -x myscript.sh ). Another way to debug a shell script is by using the ‘-nv’ option ( sh -nv myscript.sh ), which reads the commands and reports syntax errors without executing them.
-----------------------------------------------------------------------------------------------------------------------------------------------

How to test files in a shell script ?
Ans: The test command is used to perform different tests on files. Basic tests are listed below :
Test             Usage
-d file_name     Returns true if the file exists and is a directory
-e file_name     Returns true if the file exists
-f file_name     Returns true if the file exists and is a regular file
-r file_name     Returns true if the file exists and has read permission
-s file_name     Returns true if the file exists and is not empty
-w file_name     Returns true if the file exists and has write permission
-x file_name     Returns true if the file exists and has execute permission

------------------------------------------------------------------------------------------------------------------------------------------------
How to get input from the terminal for shell script ?
Ans: The ‘read’ command reads in data from the terminal (using the keyboard). The read command takes in whatever the user types and places the text into the variable you name. Example is shown below :
# vi /tmp/test.sh
#!/bin/bash
echo 'Please enter your name'
read name
echo "My Name is $name"
# /tmp/test.sh
Please enter your name
LinuxTechi
My Name is LinuxTechi
----------------------------------------------------------------------------------------------------------------------------------------------------
How to unset or de-assign variables ?
Ans: ‘unset’ command is used to de-assign or unset a variable. Syntax is shown below :
# unset <Name_of_Variable>
-----------------------------------------------------------------------------------------------------------------------------------------------------
How to perform arithmetic operation ?
Ans: There are two ways to perform arithmetic operations :
1. Using expr command (# expr 5 + 2 )
2. Using a dollar sign and square brackets ( $[ operation ] ), or the preferred double-parentheses form $(( operation )). Example : test=$[16 + 4] or test=$((16 + 4))
------------------------------------------------------------------------------------------------------------------------------------------------------
Basic syntax of a do-while statement ?
The do-while statement is similar to the while statement but performs the statements before checking the condition, so the body always runs at least once. The following (C/awk-style) format illustrates the do-while statement:
do
{
statements
} while (condition)
Note that sh/bash has no built-in do-while; the same effect is usually obtained with a while loop whose condition is tested at the end of the body.
--------------------------------------------------------------------------------------------------------------------------------------------------------
How to define functions in shell scripting ?
Ans: A function is simply a block of code with a name. When we give a name to a block of code, we can then call that name in our script, and that block will be executed. Example is shown below :
$ diskusage () { df -h ; }
---------------------------------------------------------------------------------------------------------------------------------------------------------
How to use bc (bash calculator) in a shell script ?
Ans: Use the below Syntax to use bc in shell script.
variable=`echo "options; expression" | bc`
----------------------------------------------------------------------------------------------------------------------------------------------------------
 How to remove the headers from a file using command in Linux?

Answer : The ‘sed’ command comes to the rescue here, when we need to delete certain lines of a file.
Here is the exact command to remove headers from a file (i.e. the first line of a file).

# sed '1 d' file.txt
The only problem with the above command is that it outputs the file on standard output without the first line. In order to save the output to a file, we need to use the redirect operator, which redirects the output to a file.

# sed '1 d' file.txt > new_file.txt
Alternatively, the built-in ‘-i’ switch of the sed command can perform this operation in place, without a redirect operator.

# sed -i '1 d' file.txt
How will you check the length of a line from a text file?

Answer : Again the ‘sed’ command is used to find or check the length of a line from a text file.
‘sed -n 'N p' file.txt’, where ‘N’ represents the line number and ‘p’ prints out the pattern space (to the standard output). The -n option suppresses the default printing, so only that line is output. To get the length count, we pipe the output to the ‘wc’ command.

# sed -n 'N p' file.txt | wc -c
To get the length of line number ‘5’ in the text file ‘tecmint.txt’, we need to run (note that wc -c counts the trailing newline as well):

# sed -n '5 p' tecmint.txt | wc -c
Is it possible to view all the non-printable characters from a text file on a Linux system? How will you achieve this?

Answer : Yes! It is very much possible to view all the non-printable characters in Linux. In order to achieve this, we take the help of the ‘vi’ editor.
How to show non-printable characters in the ‘vi’ editor?
Open the vi editor.
Go to the command line of the vi editor by pressing [esc] followed by ‘:’.
The final step is to execute the ‘set list’ command (i.e. type :set list) from the command interface of the ‘vi’ editor.
Note: This way we can see all the non-printable characters in a text file, including ctrl+m (^M).
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
What is the difference between the commands ‘cmp’ and ‘diff’?
Answer : The commands ‘cmp’ and ‘diff’ both compare files, but with a different mindset.
The ‘diff’ command reports the changes one should make so that both files look the same, whereas the ‘cmp’ command compares the two files byte-by-byte and reports the first mismatch.
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
Is it possible to substitute the ‘ls’ command with the ‘echo’ command?
Answer : Yes! The ‘ls’ command lists the contents of a directory. As a replacement we can use ‘echo *’ (without quotes); both commands list the same (non-hidden) file names, though the formatting differs.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
You might have heard about inodes. Can you describe an inode briefly?
Answer : An ‘inode’ is a data structure which is used for file identification on Linux. Each file on a Unix system has a separate ‘inode’ and a unique inode number.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

The following table shows a number of special variables that you can use in your shell scripts −

Variable    Description
$0    The filename of the current script.
$n    These variables correspond to the arguments with which a script was invoked. Here n is a positive decimal number corresponding to the position of an argument (the first argument is $1, the second argument is $2, and so on).
$#    The number of arguments supplied to a script.
$*    All the arguments, treated as a single word when double quoted ("$*" expands to "$1 $2 ...").
$@    All the arguments, individually double quoted ("$@" expands to "$1" "$2" ...).
$?    The exit status of the last command executed.
$$    The process number of the current shell. For shell scripts, this is the process ID under which they are executing.
$!    The process number of the last background command.

=============================================================================================

Python Questions:-

#######################################
# Interview questions asked in Python #
#######################################

Go through this link after tutorial point.
link: https://docs.python.org/2/reference/executionmodel.html

===========================================================================
square of a number:-
-----------------------
#!/usr/bin/python

num = 10
square = num * num
print "square of a num is :", square

=========================================================================================
sqaure root:-
----------------------
#!/usr/bin/python

# integer square root using Newton's (Babylonian) method:
# repeatedly replace the guess j with the average of j and n/j
n = 100
j = n

for i in range(1, n * 2):
        j = (j + (n / j)) / 2

print 'Square root of num is :', j


===============================================================================================
List:- is like an array but has the ability to store data of different data types.
Ex: in an array we store data of the same data type:
char arr[] = {'m','n','o'};  // can't store an int here because the array's type is char
Ex: list
list = ['ram','deepak',1,2,3.5,'hello']  # we can store data of any data type
Memory is allocated sequentially, with indexing starting from 0.

like list[0] will print ram
list[1] will print deepak
and so on....

We can update a list:

ex: list[1] = 'pankaj'

now index 1 is updated from deepak to pankaj.
print list[1] will print pankaj, not deepak.

You can delete list items.
del list[1] will delete the content at index 1 and shift the remaining elements one position to the left.

ex: print list[1] will print 1 now.
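As a small runnable recap of the operations above (the variable is renamed to mylist here only to avoid shadowing Python's built-in name 'list'):

mylist = ['ram', 'deepak', 1, 2, 3.5, 'hello']
print mylist[0]        # ram
mylist[1] = 'pankaj'   # update index 1
print mylist[1]        # pankaj
del mylist[1]          # delete index 1; the remaining elements shift left
print mylist[1]        # 1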

==============================================================================================
Tuple: like a list, a tuple also stores data sequentially, indexed from 0 to n.

The differences between them are: we cannot update a tuple like a list,
and a tuple uses parentheses instead of square brackets.

tup = ('deepak','ram',1,2.5,6.7,1)
so you can't update a value with tup[1] = 'pankaj'.

for that you have to create a new tuple.

you can merge a number of tuples into one:

tup3 = tup1 + tup2

Similarly, in case of deletion, we can't delete an indexed value in a tuple.

Ex:
del tup[1] == not possible, but del list[1] is possible.

so if you want to delete a tuple you have to delete the whole tuple.

del tup  # it will delete the whole tuple.
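A quick sketch of the same points in code (Python 2 syntax, matching the notes above):

tup = ('deepak', 'ram', 1, 2.5, 6.7, 1)
print tup[1]              # ram - indexing works just like a list
try:
    tup[1] = 'pankaj'     # item assignment is not allowed on a tuple
except TypeError as e:
    print e               # 'tuple' object does not support item assignment
tup3 = tup + ('extra',)   # concatenation builds a brand new tuple
print tup3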

===========================================================================================
Dictionary:-
--------------
it is just like a hash table, which stores values on the basis of 'key:value' pairs.

dict = {'Name': 'Zara', 'Age': 7, 'Class': 'First'}

so here Name is the key and its corresponding value is Zara, similarly for the others.
so you can access a value by providing the corresponding key.

print "dict['Name']: ", dict['Name']
it will print 'Zara'

print "dict['Age']: ", dict['Age']
it will print 7


you can update the value for any corresponding key.

like :
dict['Name'] = 'deepak'  # it will update the value from Zara to deepak for key 'Name'.

you can add a new key:value pair here.

dict['hobby'] = 'cricket'

Delete dictionary elements:-
del dict['Name']  # remove entry with key 'Name'
dict.clear()      # remove all entries in dict
del dict          # delete entire dictionary

-------------------------

Can we modify a value in tuple?
Ans: No, explained above.

--------------------------
What do you mean by the range function in python ?
Ans: The range function is used to generate a sequence of integers over a defined range.

basically we use it in loops

Ex: for i in range (10,20)   # the loop iterates from 10 to 19; the stop value 20 is excluded
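A quick interactive check (Python 2, where range() returns a list):

>>> range(10, 20)
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]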

======================================================================================================-
range() and xrange():-
---------------------------
For performance, especially when you're iterating over a large range, xrange() is usually better. However, there are still a few cases why you might prefer range():

In python 3, range() does what xrange() used to do and xrange() does not exist. If you want to write code that will run on both Python 2 and Python 3, you can't use xrange().

range() can actually be faster in some cases - eg. if iterating over the same sequence multiple times.  xrange() has to reconstruct the integer object every time, but range() will have real integer objects. (It will always perform worse in terms of memory however)

xrange() isn't usable in all cases where a real list is needed. For instance, it doesn't support slices, or any list methods.


range creates a list, so if you do range(1, 10000000) it creates a list in memory with 10000000 elements.

xrange is a sequence object that evaluates lazily, producing the numbers one at a time.

for performance : prefer range() when you are dealing with small numbers, else use xrange().

$ python -m timeit 'for i in range(1000000):' ' pass'
10 loops, best of 3: 90.5 msec per loop

$ python -m timeit 'for i in xrange(1000000):' ' pass'
10 loops, best of 3: 51.1 msec per loop

Look at the time taken.

=======================================================================================================================
Different data structures in Python?:-
------------------------------------

list, set, tuple, dictionary.

When to Use Lists
As shown in the examples above, lists are best used in the following situations:

When you need a mixed collection of data all in one place.
When the data needs to be ordered.
When your data requires the ability to be changed or extended. Remember, lists are mutable.
When you don't require data to be indexed by a custom value. Lists are numerically indexed and to retrieve an element, you must know its numeric position in the list.
When you need a stack or a queue. Lists can be easily manipulated by appending/removing elements from the beginning/end of the list (see the sketch after this list).
When your data doesn't have to be unique. For that, you would use sets.
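For instance, here is a minimal sketch of a list used as a stack and as a queue (the names stack/queue are just illustrative):

stack = []
stack.append('a')    # push
stack.append('b')
print stack.pop()    # b  - last in, first out

queue = ['a', 'b', 'c']
print queue.pop(0)   # a  - first in, first out (collections.deque is more efficient for this)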

Sets
-----
A set is an unordered collection with no duplicate values. A set can be created by using the keyword set or by using curly braces {}. However, to create an empty set you can only use the set() construct; curly braces alone will create an empty dictionary.

Set Operations

The set structure also supports mathematical operations like:
Union
Intersection
Difference
Symmetric Difference

Here are a few examples:

>>> set1 = set([1, 2, 3])
>>> set2 = set([3, 4, 5])
>>> set1 | set2 #union
set([1, 2, 3, 4, 5])
>>> set1 & set2 #intersection
set([3])
>>> set1 -  set2 #difference
set([1, 2])
>>> set1 ^ set2 #symmetric difference (elements that are in either set, but not in both)
set([1, 2, 4, 5])
>>>

When to Use Sets
You should choose to use a set in the following situations:

When you need a unique set of data: Sets check the uniqueness of elements based on hashes.
When your data constantly changes: Sets, just like lists, are mutable.
When you need a collection that can be manipulated mathematically: With sets it's easy to do operations like difference, union, intersection, etc.
When you don't need to store nested lists, sets, or dictionaries in a data structure: Sets don't support unhashable types.
-------------------------------------------------------

Tuple:-
-----------

When to Use Tuples
When you need to store data that doesn't have to change.
When the performance of the application is very important. In this situation you can use tuples whenever you have fixed data collections.
When you want to store your data in logical immutable pairs, triples etc.
----------------------


Dictionary:-
When to Use a Dictionary
When you need a logical association between a key:value pair.
When you need fast lookup for your data, based on a custom key.
When your data is being constantly modified. Remember, dictionaries are mutable.

=======================================================================================

Compile a python program without running it?
Ans:
if you simply want to compile a file or a bunch of files from a terminal, the py_compile module can be executed as a script in the following manner:
python -m py_compile fileA.py
OR
python -m py_compile fileA.py fileB.py fileC.py

===========================================================================================
append() and extend():-
---------------------

The append() method adds a single item to the end of the list.

x = [1, 2, 3]
x.append([4, 5])
x.append('abc')
print x
# gives you
[1, 2, 3, [4, 5], 'abc']

The extend() method takes one argument, a list, and appends each of the items of the argument to the original list. (Lists are implemented as classes. “Creating” a list is really instantiating a class. As such, a list has methods that operate on it.)

x = [1, 2, 3]
x.extend([4, 5])
x.extend('abc')
print x
# gives you
[1, 2, 3, 4, 5, 'a', 'b', 'c']

================================================================================================

__init__  in python?
--------------------

In short:

    self, as the name suggests, refers to the object itself - the object which has called the method. That is, if you have N objects calling the method, then self.a will refer to a separate instance of the variable for each of the N objects. Imagine N copies of the variable a, one for each object.

    __init__ is what is called a constructor in other OOP languages such as C++/Java. The basic idea is that it is a special method which is automatically called when an object of that class is created.


if we compare with Java/C++ :-
__init__ is just like a constructor, so when we create an object of that class the __init__ constructor will be called.
self is like 'this' in C++; it refers to the object of the current class.

ex:
 class MyClass(object):
     i = 123
     def __init__(self):
         self.i = 345

a = MyClass()    # here we are creating an object, so the __init__ constructor is called; the instance attribute i becomes 345
print a.i
345
print MyClass.i  # here we are not using an instance, so it prints the class attribute, not the value set by the constructor
123

=======================================================================

__new__ in python?
-----------------
__new__ is the first step of instance creation. It's called first, and is responsible for returning a new instance of your class. In contrast, __init__ doesn't return anything; it's only responsible for initializing the instance after it's been created.

clarification:-
__new__ is used to create an object.
__init__ is the constructor, which is called when an object gets created.
so the __init__ constructor is always called after __new__.

Ex:-
class A(object):
    _dict = dict()

    def __new__(cls):    # this function is called first, when an object is being created
        if 'key' in A._dict:
            print "EXISTS"
            return A._dict['key']
        else:
            print "NEW"
            return super(A, cls).__new__(cls)

    def __init__(self): # this constructor is called after object creation, i.e. after __new__.
        print "INIT"
        A._dict['key'] = self
        print ""

a1 = A()  # calls __new__ first and then __init__
a2 = A()  # calls __new__ first and then __init__
a3 = A()  # calls __new__ first and then __init__

output:-
NEW
INIT

EXISTS
INIT

EXISTS
INIT
=======================================================================

When to use tuple in Python ?
------------------------------

please refer ans of different data structure used in python.

=======================================================================

What are positional and keyword arguments in Python?
---------------------------------------------------

Consider the following function:

def foo(*positional, **keywords):
    print "Positional:", positional
    print "Keywords:", keywords

The *positional argument will store all of the positional arguments passed to foo(), with no limit to how many you can provide.

>>> foo('one', 'two', 'three')
Positional: ('one', 'two', 'three')
Keywords: {}

The **keywords argument will store any keyword arguments:

>>> foo(a='one', b='two', c='three')
Positional: ()
Keywords: {'a': 'one', 'c': 'three', 'b': 'two'}

And of course, you can use both at the same time:

>>> foo('one','two',c='three',d='four')
Positional: ('one', 'two')
Keywords: {'c': 'three', 'd': 'four'}

These features are rarely used, but occasionally they are very useful, and it's important to know which arguments are positional or keywords.
==============================================================================================

list functions in python?
------------------------

cmp, min, max, len, list

list.append(obj), list.count(obj), list.extend(seq), list.index(obj), list.pop(), list.reverse(), list.sort(), list.remove(obj).
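A small sketch exercising a few of these (the values are arbitrary):

x = [3, 1, 2]
x.append(4)          # [3, 1, 2, 4]
x.extend([5, 6])     # [3, 1, 2, 4, 5, 6]
print x.index(2)     # 2  - position of the first occurrence of the value 2
print x.count(1)     # 1  - how many times 1 appears
x.remove(3)          # drops the first occurrence of 3
x.sort()             # [1, 2, 4, 5, 6]
x.reverse()          # [6, 5, 4, 2, 1]
print x.pop()        # 1  - removes and returns the last element
print min(x), max(x), len(x)   # 2 6 4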

-===================================================================================================================================================================
Function decorators in python? (I still need to understand this properly myself; I could not grasp it fully due to lack of time)
------------------------------
A decorator is just a callable that takes a function as an argument and returns a replacement function. We’ll start simply and work our way up to useful decorators.

>>> def outer(some_func):
...     def inner():
...         print "before some_func"
...         ret = some_func() # 1
...         return ret + 1
...     return inner
>>> def foo():
...     return 1
>>> decorated = outer(foo) # 2
>>> decorated()
before some_func
2

Look carefully through our decorator example. We defined a function named outer that has a single parameter some_func. Inside outer we define a nested function named inner. The inner function will print a string then call some_func, catching its return value at point #1. The value of some_func might be different each time outer is called, but whatever function it is we’ll call it. Finally inner returns the return value of some_func() + 1 - and we can see that when we call our returned function stored in decorated at point #2 we get the results of the print and also a return value of 2 instead of the original return value 1 we would expect to get by calling foo.

We could say that the variable decorated is a decorated version of foo - it’s foo plus something. In fact if we wrote a useful decorator we might want to replace foo with the decorated version altogether so we always got our "plus something" version of foo. We can do that without learning any new syntax simply by re-assigning the variable that contains our function:

>>> foo = outer(foo)
>>> foo # doctest: +ELLIPSIS
<function inner at 0x...>

Now any calls to foo() won’t get the original foo, they’ll get our decorated version! Got the idea? Let’s write a more useful decorator.
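As a sketch of where this leads (the logged/add names below are just illustrative, not from the text above), the same outer/inner pattern is normally applied with Python's @ decorator syntax:

def logged(some_func):
    def inner(*args, **kwargs):
        print "calling", some_func.__name__
        return some_func(*args, **kwargs)
    return inner

@logged              # equivalent to: add = logged(add)
def add(x, y):
    return x + y

print add(2, 3)
# calling add
# 5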

=================================================================================================
1)    Why is Python called object based and object oriented?
2)    What is happening below:
A = 1
A = 2
3)    Does the base class __init__ method get called automatically by the derived class?
4)    Which objects are mutable and which are immutable in python?
---Ans:- You have to understand that Python represents all its data as objects. Some of these objects, like lists and dictionaries, are mutable, meaning you can change their content without changing their identity. Other objects like integers, floats, strings and tuples are objects that can not be changed. An easy way to understand that is to have a look at an object's ID.

Below you see a string, which is immutable. You can not change its content; it will throw an error at you if you try to change it. Also, if we assign new content, a new object is created instead of the contents being modified.

>>> s = "abc"
>>>id(s)
4702124
>>> s[0]
'a'
>>> s[0] = "o"
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'str' object does not support item assignment
>>> s = "xyz"
>>>id(s)
4800100
>>> s += "uvw"
>>>id(s)
4800500

You can do that with a list and it will not change the object's identity:

>>> i = [1,2,3]
>>>id(i)
2146718700
>>> i[0]
1
>>> i[0] = 7
>>> id(i)
2146718700

To read more about Pythons data model you could have a look at the Python language reference here: http://docs.python.org/reference/datamodel.html
===============================================================================================================================================

5)    How to reverse a list in one line, getting a new list object without changing the existing list?
6)    What is the new keyword in python?
7)    What is the slice function in python?
8)    What are list and string comprehensions?
9)    When a derived class inherits from many base classes, which __init__ functions of the base classes get invoked when the derived object is created?
10)   Which modules have you imported while working?
11)   Can we control, in python, whether some methods are not allowed to be used outside the class?
12)   How to make your function so it can receive a number of arguments which is not known in advance? He is looking for functions which can take any random number of arguments, 1 or 2 or 3 or 4.
13)   Can we have function overloading in python?
14)   What is the first line in a python script, #!(python interpreter), called? i.e. what is #! called?

------------------------------------------------------------------------------------------------------------------------------------

==========================================================================================================

C++ mutable keyword




The mutable storage class specifier in C++ (or use of mutable keyword in C++)

auto, register, static and extern are the storage class specifiers in C. typedef is also considered a storage class specifier in C. C++ also supports all of these storage class specifiers. In addition to these, C++ adds one important storage class specifier whose name is mutable.

What is the need of mutable?

Sometimes there is a requirement to modify one or more data members of a class / struct through a const function even though you don’t want the function to update other members of the class / struct. This task can be easily performed by using the mutable keyword. Consider this example where the use of mutable can be useful. Suppose you go to a hotel and you give an order to the waiter to bring some food dish. After giving the order, you suddenly decide to change the order of food. Assume that the hotel provides the facility to change the ordered food and take the order of the new food within 10 minutes after giving the 1st order. After 10 minutes the order can’t be cancelled and the old order can’t be replaced by the new order. See the following code for details.
#include <iostream>
#include <string.h>
using std::cout;
using std::endl;
class Customer
{
    char name[25];
    mutable char placedorder[50];
    int tableno;
    mutable int bill;
public:
    Customer(const char* s, const char* m, int a, int p)
    {
        strcpy(name, s);
        strcpy(placedorder, m);
        tableno = a;
        bill = p;
    }
    void changePlacedOrder(const char* p) const
    {
        strcpy(placedorder, p);
    }
    void changeBill(int s) const
    {
        bill = s;
    }
    void display() const
    {
        cout << "Customer name is: " << name << endl;
        cout << "Food ordered by customer is: " << placedorder << endl;
        cout << "table no is: " << tableno << endl;
        cout << "Total payable amount: " << bill << endl;
    }
};
int main()
{
    const Customer c1("Pravasi Meet", "Ice Cream", 3, 100);
    c1.display();
    c1.changePlacedOrder("GulabJammuns");
    c1.changeBill(150);
    c1.display();
    return 0;
}
Output:
Customer name is: Pravasi Meet
Food ordered by customer is: Ice Cream
table no is: 3
Total payable amount: 100
Customer name is: Pravasi Meet
Food ordered by customer is: GulabJammuns
table no is: 3
Total payable amount: 150
Closely observe the output of the above program. The values of the placedorder and bill data members are changed from a const function because they are declared as mutable.
The keyword mutable is mainly used to allow a particular data member of a const object to be modified. When we declare a function as const, the this pointer passed to the function becomes const. Adding mutable to a variable allows a const member function to change that member.
mutable is particularly useful if most of the members should be constant but a few need to be updatable. Data members declared as mutable can be modified even though they are part of an object declared as const. You cannot use the mutable specifier with names declared as static or const, or with references.

As an exercise predict the output of following two programs.
// PROGRAM 1
#include <iostream>
using std::cout;
class Test {
public:
  int x;
  mutable int y;
  Test() { x = 4; y = 10; }
};
int main()
{
    const Test t1;
    t1.y = 20;
    cout << t1.y;
    return 0;
}
// PROGRAM 2
#include <iostream>
using std::cout;
class Test {
public:
  int x;
  mutable int y;
  Test() { x = 4; y = 10; }
};
int main()
{
    const Test t1;
    t1.x = 8;
    cout << t1.x;
    return 0;
}
   
   
----------------------------------------------------------------------------------------------------------------------------------

Find the Minimum length Unsorted Subarray, sorting which makes the complete array sorted

Given an unsorted array arr[0..n-1] of size n, find the minimum length subarray arr[s..e] such that sorting this subarray makes the whole array sorted.

Examples:

1) If the input array is [10, 12, 20, 30, 25, 40, 32, 31, 35, 50, 60], your program should be able to find that the subarray lies between the indexes 3 and 8.
2) If the input array is [0, 1, 15, 25, 6, 7, 30, 40, 50], your program should be able to find that the subarray lies between the indexes 2 and 5.
 


A solution that is much easier to grasp:
Traverse from 0 -> n-1 keeping the max so far, and note the last position whose element is smaller than the max so far; call it R.
Traverse from n-1 -> 0 keeping the min so far, and note the last position whose element is greater than the min so far; call it L.
Sort the array from L to R.
Example: 10, 12, 20, 30, 25, 40, 32, 31, 35, 50, 60
From 0 -> n-1, the last element smaller than the max so far is 35 (the max so far is 40 at that point), so R = 8.
From n-1 -> 0, the last element greater than the min so far is 30 (the min so far is 25 at that point), so L = 3.
Sort the elements from 30 to 35, i.e. indexes 3 to 8 :)
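A sketch of this two-pass idea in Python (strict comparisons are used so that runs of equal values are not flagged; the function name is just a placeholder):

def min_unsorted_window(arr):
    n = len(arr)
    # left-to-right pass: track the running max; the last index whose value
    # falls below that max is the right end of the window
    right, max_so_far = 0, arr[0]
    for i in range(1, n):
        if arr[i] < max_so_far:
            right = i
        else:
            max_so_far = arr[i]
    # right-to-left pass: track the running min; the last index whose value
    # rises above that min is the left end of the window
    left, min_so_far = n - 1, arr[n - 1]
    for i in range(n - 2, -1, -1):
        if arr[i] > min_so_far:
            left = i
        else:
            min_so_far = arr[i]
    return left, right

print min_unsorted_window([10, 12, 20, 30, 25, 40, 32, 31, 35, 50, 60])   # (3, 8)
print min_unsorted_window([0, 1, 15, 25, 6, 7, 30, 40, 50])               # (2, 5)

For an already sorted array, left ends up greater than right, which can be treated as "no window needs sorting".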