Build a weather forecast system leveraging the power of Windows Azure

In this section we are going to build a weather forecast system using the power of Windows Azure, Microsoft's cloud platform. Before we start building, we need to understand the application's architecture.

1. Understanding the basics of Azure:

Windows Azure is a cloud platform, which means we can scale applications built on it both up and out. Say we have a website that handles user requests. At 100 requests per minute, a single IIS server was sufficient. But as the load grows to 100,000 requests per minute, that single server becomes the bottleneck. To remove such restrictions, all good cloud platforms offer scalability, meaning you can add processing capacity to your system as and when required. Azure offers a great deal of scalability, including the ability to scale out your apps across additional Virtual Machines/servers.

Suppose we have 10 servers/VMs hosting our web applications, each equipped with a 2 GHz processor and 2 GB of RAM. Instead of this configuration we could use 20 VMs with 1.4 GHz processors and 1 GB of RAM each, without hindering capability. In fact, we increase our fault tolerance and robustness as we increase the number of VMs, and it can reduce the price drastically. Here's a comparison below:

Configuration 1 with 4 medium-sized VMs:

(Image: Azure pricing model 1)

Configuration 2 with 6 small-sized VMs:

(Image: Azure pricing model 2)

So the second model helps a lot as far as pricing is concerned.

2. Architecture of the Application

Two core concepts are used here. First we have to understand the topology of the application/service from the user's side:

  1. The user logs into his account and requests weather data.
  2. User queries are handled by a web service/server, called a Web Role in Azure terms. We shall name ours 'Web Role DataServe'.
  3. The Web Role asks for data from a storage service called a Table in Azure.
  4. The weather data found in the storage table is returned and shown to the user.

The front end looks like this:

(Image: frontend UI)

These are the steps from the front-end view, i.e., what the user sees. But how does the system process and retrieve data in real time?
So, the back-end view goes like this:

  1. We receive data from a free, open weather service, the OpenWeatherMap API. We run a Python script that periodically stores data for users' cities. The data can be obtained as JSON or XML; our script requests XML. This periodic engine is called a Worker Role. We name ours 'Worker Role DataFetch'.
  2. After the data is received, it is stored in Table storage via the standard Python API provided under azure.storage.
  3. So whenever the user asks for data, it is already in the table; we avoid fetching on demand and the waiting time that would create.

The architecture is shown below:

(Image: architecture and data flow)

3. Code for Worker Role DataFetch:

For the Worker Role we use a Python script that fetches the weather feed and extracts the fields we need with regular expressions.

from azure.storage import TableService
import datetime
import re
import urllib2

table_service = TableService(account_name='*********', account_key='**********')

class Temperature(object):
    # Helper for Fahrenheit input; unused here because we request metric units.
    def to_celsius(self, deg_f):
        return (deg_f - 32) * 5.0 / 9.0

city = 'kolkata'
url = "http://api.openweathermap.org/data/2.5/weather?q=" + city + "&mode=xml&units=metric"
htmltext = urllib2.urlopen(url).read()

# Patterns for the fields we need, using (.+?) to capture attribute values.
# The patterns in the original post were stripped by the blog's HTML, so the
# ones below are reconstructions matching the OpenWeatherMap XML attributes
# that the indexing further down expects.
pattern_direction = re.compile('<direction value="(.+?)" code="(.+?)"')
pattern_wind = re.compile('<speed value="(.+?)" name="(.+?)"')
pattern_temp = re.compile('<temperature value="(.+?)" min="(.+?)" max="(.+?)"')
pattern_humidity = re.compile('<humidity value="(.+?)"')
pattern_cond = re.compile('<weather number="(.+?)" value="(.+?)"')

# Match the patterns against the XML response.
weather_windspeed = re.findall(pattern_wind, htmltext)
weather_winddirection = re.findall(pattern_direction, htmltext)
weather_temp = re.findall(pattern_temp, htmltext)
weather_cond = re.findall(pattern_cond, htmltext)
weather_humid = re.findall(pattern_humidity, htmltext)

# The RowKey encodes day-month-year-hour-minute, so each run gets its own key.
RowKey = datetime.datetime.now().strftime("%d%m%Y%H%M")
temperature = weather_temp[0][0]
print "Wind speed:", weather_windspeed[0][0]

def checkduplicate():
    # Scan the 'data' partition for an entity with this RowKey (or an
    # identical temperature reading, the same duplicate heuristic as before).
    tasks = table_service.query_entities('WeatherFetch', "PartitionKey eq 'data'")
    for task in tasks:
        if task.RowKey == RowKey or task.temperature == temperature:
            return True
    return False

if not checkduplicate():
    fetched_data = {'PartitionKey': 'data',
                    'RowKey': RowKey,
                    'temperature': temperature,
                    'min_temperature': weather_temp[0][1],
                    'humidity': weather_humid[0],
                    'windspeed': weather_windspeed[0][0],
                    'winddirection': weather_winddirection[0][1],
                    'condition': weather_cond[0][1]}
    table_service.insert_entity('WeatherFetch', fetched_data)
else:
    print 'Duplicate entry; nothing inserted.'

This stores the current weather data for Kolkata each time it runs. The script can be run hourly on a small-sized Linux VM via a simple shell script:

#!/bin/bash
# Checks whether the time is a full hour, like 5:00 or 8:00,
# and runs the table-storage script once per hour.

ran=0
while true
do
    now=$(date +"%M")
    if [ "$now" -eq 0 ]; then
        # Only run once during minute zero.
        if [ "$ran" -eq 0 ]; then
            python /home/abhishek/AzureWorkerRole/AzureTableStore.py
            ran=1
        fi
    else
        ran=0
    fi
    sleep 30
done

This script calls the Python script hourly and stores the weather information in the table. (A cron entry such as 0 * * * * python /home/abhishek/AzureWorkerRole/AzureTableStore.py would do the same job without the busy loop.)

The table details, i.e., the account name and primary access key, can be found in the Azure Portal under Storage:

(Image: authorization for table storage)

4. Fetching Data from table using Web Role DataServe:

The full documentation for reading entities from Table storage in a .NET Web Role can be found at:

http://www.windowsazure.com/en-us/documentation/articles/storage-dotnet-how-to-use-table-storage-20/#retrieve-all-entities
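Based on that guide, here is a minimal sketch of how Web Role DataServe could read the weather entities back out of the WeatherFetch table. The WeatherEntity class, the connection-string placeholders, and the console wrapper are assumptions for illustration; only the table name, partition key, and property names come from the worker-role script above.

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

namespace WebRoleDataServe
{
    // Hypothetical entity class mirroring the (lowercase) properties that
    // Worker Role DataFetch writes to the table.
    public class WeatherEntity : TableEntity
    {
        public string temperature { get; set; }
        public string min_temperature { get; set; }
        public string humidity { get; set; }
        public string windspeed { get; set; }
        public string winddirection { get; set; }
        public string condition { get; set; }
    }

    class Program
    {
        static void Main()
        {
            // Use the account name and key from the Azure Portal.
            CloudStorageAccount account = CloudStorageAccount.Parse(
                "DefaultEndpointsProtocol=https;AccountName=*********;AccountKey=**********");
            CloudTable table = account.CreateCloudTableClient().GetTableReference("WeatherFetch");

            // Retrieve every entity in the 'data' partition, the same filter
            // the worker role uses for its duplicate check.
            TableQuery<WeatherEntity> query = new TableQuery<WeatherEntity>().Where(
                TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "data"));

            foreach (WeatherEntity entity in table.ExecuteQuery(query))
            {
                Console.WriteLine("{0}: {1} C, {2}", entity.RowKey, entity.temperature, entity.condition);
            }
        }
    }
}

In the real Web Role the same query would run inside the page or controller that serves the user's request, rather than a console Main.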

So this wraps up another tutorial on Windows Azure.

The entire code is available at:

https://github.com/abhishekdepro

Happy Coding.

Python Lists and Tuples

All the programming languages we are familiar with (C, Java, C#, PHP, etc.) support arrays, and the object-oriented ones add ArrayList, an interesting feature. But is that all?

The answer is no. Let's switch to Python, a language built not to make a programmer's life a purgatory. With Python we get real code readability: 100 lines of Java can be equivalent to 10 lines of meaningful Python. It's open source and reliable. The best part is that you can leave the plumbing to the language and just work with what you need.

Coming to the topic: apart from arrays, Python provides several other essential data structures:

LISTS :

Lists are pretty much like ArrayLists. You can insert strings, variables, etc. and remove them as you wish. Let's see some examples:

  • Open IDLE for Python 2.7. You can download it here.
  • First we create a list named Movies containing two movie names
    Movies=["Terminator 3","Titanic"]
    >>> print(Movies)
    ['Terminator 3', 'Titanic']
    
  • Now we'll add another element to this list: <listname>.append(x) inserts x at the end of the list.
    Movies.append("Avatar")
    >>> print(Movies)
    ['Terminator 3', 'Titanic', 'Avatar']
    >>> 
  • What if we want to insert in some other position?
    Movies.insert(1,"Psycho")
    >>> print(Movies)
    ['Terminator 3', 'Psycho', 'Titanic', 'Avatar']
  • We can even sort the list.
    Movies.sort()
    >>> print(Movies)
    ['Avatar', 'Psycho', 'Terminator 3', 'Titanic']
  • We can remove an element too.
    Movies.remove("Psycho")
    >>> print(Movies)
    ['Avatar', 'Terminator 3', 'Titanic']

TUPLES :

Tuples can be categorized as read-only lists. A list is written with [] whereas a tuple is written with (). Tuples are very useful when we want a read-only archive.

  • Let's create a tuple first. Note that a single-item tuple is written ("Titanic",) with a trailing comma, not ("Titanic"):
    Movies=("Titanic","Avatar")
    >>> print(Movies)
    ('Titanic', 'Avatar')

Fade animation using WPF

This is a sample based on the rich animation support available in WPF's animation libraries. Here we'll focus mainly on fade animations. First of all, you need the 'System.Windows.Media.Animation' namespace in order to drive the animation.

  • First of all we need to include 'System.Windows.Media.Animation'.
  • Then we fade the element out when the mouse leaves its area and fade it back in when the mouse enters it.
  • The target can be a Canvas, an Ellipse, any geometric shape, or even a Grid.

For building or downloading the sample visit my post at MSDN.

Before the animation starts:

(Screenshot 1)

After the fade effect is activated:

(Screenshot 2)

Code for the mouse entering:

private void canvas1_MouseEnter(object sender, MouseEventArgs e)
{
    Canvas c = (Canvas)sender;
    // Opacity runs from 0 to 1, so animate to 1 (fully visible) over 5 seconds.
    DoubleAnimation animation = new DoubleAnimation(1, TimeSpan.FromSeconds(5));
    c.BeginAnimation(Canvas.OpacityProperty, animation);
    textBlock1.Visibility = Visibility.Hidden;
    textBlock2.Visibility = Visibility.Visible;
}

Code for the mouse leaving:

private void canvas1_MouseLeave(object sender, MouseEventArgs e)
{
    Canvas c = (Canvas)sender;
    // Animate the opacity down to 0 (fully transparent) over 5 seconds.
    DoubleAnimation animation = new DoubleAnimation(0, TimeSpan.FromSeconds(5));
    c.BeginAnimation(Canvas.OpacityProperty, animation);
    textBlock2.Visibility = Visibility.Hidden;
    textBlock1.Visibility = Visibility.Visible;
}

Here we cast the sender to a Canvas, create a DoubleAnimation, and apply it to the canvas's Opacity property.

For reference, the following listing is the DataService class used by the WCF mark-sheet service described in the next article:

using System;
using System.Collections.Generic;
using System.Linq;
using System.ServiceModel;
using System.Text;
using System.Threading.Tasks;

namespace WcfServiceLibrary1
{
    // Single-instance service: one DataService object serves all clients,
    // so the in-memory list below is shared between calls.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
    public class DataService : IDataService
    {
        List<Data> datas = new List<Data>();

        #region IDataService members

        public void submit_data(Data data)
        {
            // Assign a unique roll number before storing the record.
            data.roll = Guid.NewGuid().ToString();
            datas.Add(data);
        }

        public List<Data> GetData()
        {
            return datas;
        }

        public void remove_data(string roll)
        {
            datas.Remove(datas.Find(e => e.roll.Equals(roll)));
        }

        #endregion
    }
}

High Energy Physics Data Handling using Cloud Computing

We show that distributed Infrastructure-as-a-Service (IaaS) compute clouds can be effectively used for the analysis of high energy physics data. We designed a distributed cloud system that works with any application using large input data sets that require a high-throughput computing environment. The system uses IaaS-enabled science and commercial clusters situated at different sites. We describe the process in which a user prepares an analysis virtual machine (VM) and submits batch jobs to a central scheduler. The system boots the user-specific VM on one of the IaaS clouds, runs the jobs and returns the output to the user. The user application accesses a central database for calibration data during the execution of the application. Similarly, the data is located in a central location and streamed by the running application. The system can easily run one hundred simultaneous jobs in an efficient manner and is scalable to many hundreds, possibly thousands, of user jobs.

Introduction:

Infrastructure as a Service (IaaS) cloud computing is emerging as a new and efficient way to provide computing to the research community. The growing interest in clouds can be attributed, in part, to the ease of encapsulating complex research applications in Virtual Machines (VMs) with little or no performance degradation [1]. Studies have shown that high energy physics application code runs equally well in a VM. Virtualization technologies not only offer advantages such as abstraction from the underlying hardware and simplified application deployment; in some situations, where traditional computing clusters have hardware and software configurations incompatible with the scientific application's requirements, virtualization is the only option available.

A key question is how to manage large data sets in a cloud or distributed cloud environment. We have developed a system for running high-throughput batch processing applications using any number of IaaS clouds. This system uses software such as Nebula [2] and Nimbus [3], in addition to custom components such as a cloud scheduling element and a VM image repository. The results presented in this work use IaaS clouds based on Amazon EC2. The total memory and CPU of each computational cluster in the clouds are divided evenly into what we call VM slots, where each slot can be assigned to run a VM. When a VM has finished running, that slot's resources are released and become available to run another VM. The input data and analysis software are located on one of the clouds, and the VM images are stored in a repository on the other cloud. The sites are connected by a research network, while the commodity network connects the clouds to Amazon EC2.

Users are provided with a set of VMs that are configured with the application software. The user submits their jobs to a scheduler, where the job script contains a link to the required VM. A cloud scheduling component (called Cloud Scheduler) searches the job queue, identifies the VM required for each queued job, and sends a request to one of the clouds to boot the user-specific VM. Once the VM is booted, the scheduler submits the user job to the running VM. The job runs and returns any output to a user-specified location. If there are no further jobs requiring that specific VM, Cloud Scheduler shuts it down.

The system has been demonstrated to work well for applications with modest I/O requirements, such as the production of simulated data [4]. The input files for this type of application are small and the rate of production of the output data is modest (though the files can be large). In this work we focus on data-intensive high energy physics applications where the job reads large sets of input data at higher rates. In particular, we use the analysis application of the BaBar experiment [5], which recorded electron-positron collisions at the SLAC National Accelerator Laboratory from 2000 to 2008. We show that the data can be quickly and efficiently streamed from a single data storage location to each of the clouds. We also describe the issues that have arisen and the potential for scaling the system to many hundreds or thousands of simultaneous user jobs.

Architecture:

(Figure: architecture of the distributed cloud system)

Data Management:

Analysis jobs in high energy physics typically require two inputs: event data and configuration data. The configuration data includes a BaBar conditions database, which contains time-dependent information about the conditions under which the events were recorded. The event data can be the real data recorded by the detector or simulated data. Each event contains information about the particles seen in the detector, such as their trajectories and energies. The real and simulated data are nearly identical in format; the simulated data contains additional information describing how it was generated. The user analysis code analyzes one event at a time. In the BaBar experiment the total size of the real and simulated data is approximately 2 PB, but users typically read a small fraction of this sample. In this work we use a subset containing approximately 8 TB of simulated and real data. The event data for this analysis was stored in a distributed file system at one cloud. The file system is hosted on a cluster of six nodes, consisting of a Management/Metadata server (MGS/MDS) and five Object Storage servers (OSS). It uses a single gigabit interface/VLAN to communicate both internally and externally. This is an important consideration for the test results presented, because these same nodes also host the IaaS frontend (MGS/MDT server) and Virtual Machine Monitors (OSS servers) for the cloud.

The jobs use Xrootd to read the data. Xrootd [6] is a file server providing byte-level access and is used by many high energy physics experiments. Xrootd provides read-only access to the distributed data (read/write access is also possible). Though deploying Xrootd is fairly trivial, some optimization was necessary to achieve good performance across the network: a read-ahead value of 1 MB and a read-ahead cache size of 10 MB were set on each Xrootd client.

The VM images are stored at the other cloud and propagated to the worker nodes over HTTP. For analysis runs that include the Amazon EC2 cloud, we store another copy of the VM images on Amazon EC2.

In addition to transferring the input data on demand using Xrootd, the BaBar software is also staged to the VMs on demand using a specialized network file system. This reduces the size of the VM images transferred from the image repository to each cloud site, and with it the amount of data transferred when a VM starts. The VM not only starts faster; postponing part of the data transfer until after the job has started also helps mitigate the network saturation that follows job submission.

Results:

A typical user job in high energy physics reads one event at a time, where an event contains the information of a single particle collision. Electrons and positrons circulate in opposite directions in a storage ring and are made to collide millions of times per second in the center of the BaBar detector. The BaBar detector is cylindrical, approximately 5 meters in each dimension. It measures the trajectories of charged particles and the energy of both neutral and charged particles. A fraction of the events are considered interesting from a scientific standpoint, and the information in the detector is written to a storage medium. The size of an event in BaBar is a few kilobytes, depending on the number of particles produced in the collision. One of the features of the system is its ability to recover from faults arising either from local system problems at each of the clouds or from network issues. We list some of the problems we identified in the processing of the jobs. For example, cloud resources can be brought down for maintenance and brought back up again: in our test, the NRC cloud resources were added to the pool after the set of jobs was submitted, and the Cloud Scheduler automatically detected the new resources and successfully scheduled jobs onto them without affecting already running jobs.

Conclusions: From the users' perspective, the system is robust and handles intermittent network issues gracefully. We have shown that distributed compute clouds can be an effective way of analyzing large research data sets. This is made possible by the power of cloud computing and distributed file systems.

References:

  1. http://iopscience.iop.org/1742-6596/219/5/052015/
  2. http://nebula.nasa.gov
  3. http://www.cloudave.com/2180/scientists-and-cloud-computing-part-2/
  4. http://iopscience.iop.org/1742-6596/256/1/012003/
  5. http://www-public.slac.stanford.edu/babar/
  6. http://portal.acm.org/citation.cfm?id=1391157.1391203

Mark sheet maintenance using WCF

Before I start with this article, I'd like you to know certain things about WCF, the Windows Communication Foundation. We have all heard about servers and applications running at the server end (the back end), like a Java servlet. The basic idea of this infrastructure is that certain parts of an application are deployed as services across hosts; that is, the application is not running on one native machine but as a service across several machines connected and sharing data through a network.

WCF is a system for creating connections between applications using services and endpoints. More than anything, WCF is an infrastructure technology for messages. Just as roads carry cars, wires and cables carry electricity, and pipes convey water, WCF exists to transfer messages between any two endpoints. And it does so securely: messages can be encrypted to keep your information safe from tampering. A standard example would be a data-integration service for a Windows Forms or WPF application, or even a Silverlight-based RIA (Rich Internet Application).

So, before we start, let’s get the basics clear :

  • We have a DataContract where we add the data members.
  • We have a ServiceContract where we declare the operations to be performed on the data, i.e., the methods.
  • Finally, we have a ServiceBehavior where we specify how the service should be executed, i.e., how our WCF application should behave.

So, getting this correct we shall move on to our project.

  1. First create a WCF Service Library in Visual Studio :

2. Add a Data.cs class to the project and enter the following code:

(Screenshot: Data.cs)
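The original post shows Data.cs only as a screenshot. As a stand-in, here is a hedged sketch of what it contains: the roll member is required by the DataService class shown earlier, while name and marks are assumed mark-sheet fields added for illustration.

using System.Runtime.Serialization;

namespace WcfServiceLibrary1
{
    [DataContract]
    public class Data
    {
        // Unique roll number; DataService assigns this a GUID on submit.
        [DataMember]
        public string roll { get; set; }

        // Assumed mark-sheet fields, added for illustration.
        [DataMember]
        public string name { get; set; }

        [DataMember]
        public int marks { get; set; }
    }
}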

3. Now add another class to the project named IDataService.cs, change it to an interface (instead of a class), and enter the code:

(Screenshot: IDataService.cs)
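The IDataService code is also shown only as a screenshot, but the contract can be reconstructed from the members DataService implements. A sketch:

using System.Collections.Generic;
using System.ServiceModel;

namespace WcfServiceLibrary1
{
    [ServiceContract]
    public interface IDataService
    {
        // Add a record; the service assigns the roll number.
        [OperationContract]
        void submit_data(Data data);

        // Return every stored record.
        [OperationContract]
        List<Data> GetData();

        // Delete the record with the given roll number.
        [OperationContract]
        void remove_data(string roll);
    }
}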

4. Now add another class named DataService.cs and add the following code (this is the DataService class listed at the end of the previous article):

(Screenshot: DataService.cs)

5. Now build the project.

6. Now that our model is ready, we need to make certain changes to the App.config file so that our application works as we need in the host.

First, edit the WCF configuration of App.config by right-clicking it and checking the Services tab; in the browse window, go to e:\programming\c#\marksheet\marksheet\bin\Debug (or your project's output location) and choose the appropriate service DLL.

(Screenshot: WCF configuration editor)

Next, set up the endpoints: choose the empty endpoint name, and in the service endpoint window select Contract and browse to the appropriate service as above.

(Screenshot: service endpoint window)

Now close, save all when prompted, and deploy (Ctrl+F5).

(Screenshot: WCF Test Client)

There you have your methods. Let's use the submit_data() method to add an entry to our database and then invoke it:

(Screenshot: invoking submit_data())

Continue the same way for the other methods, and that's it. Simple and easy.

Virtualization with XEN : The backend of the Cloud

What is Xen?

Xen is the most popular open-source virtualization software; it allows multiple operating systems to run concurrently on the same computer hardware, improving the effective usage and efficiency of the underlying hardware. It benefits enterprises with the power of consolidation, increased utilization, and rapid provisioning.

The back end of our cloud setup runs the Xen hypervisor to support virtualization of instances or nodes. The eucalyptus-nc package is installed on the node controller(s) running our back end.

Steps for the back-end setup:

  1. Prepare a fresh Ubuntu 12.04 system, preferably the server edition.
  2. Install the Xen hypervisor:

    sudo sed -i 's/GRUB_DEFAULT=.*\+/GRUB_DEFAULT="Xen 4.1-amd64"/' /etc/default/grub
    sudo update-grub
    sudo sed -i 's/TOOLSTACK=.*\+/TOOLSTACK="xm"/' /etc/default/xen
    sudo reboot

  3. Check that the hypervisor is running:

    sudo xm list

    The following output is obtained:

    Name                      ID  Mem  VCPUs   State       Time(s)
    Domain-0                   0  945     1    r-----      11.3

Look before you leap – Linux or Windows

So, what do you mean by the term Operating System?
Is it Microsoft Windows XP, Vista, 7 or 8? Or are you a newbie who has just heard the terms Linux, Ubuntu or Fedora? Are you confused about what to select, or are you one of those who think "Windows is a necessity"? Then I must say you are wrong, because YES, there are options, but you need to select your operating system wisely, according to your needs.

Now the different operating systems used are –>

  • Microsoft Windows – The most popular of all operating systems, because it has supported, and been supported by, every type of application you will ever need since 1985.
  • Mac OS – The proprietary operating system of the Apple Macintosh and MacBook. So if you want Mac OS, buy a Macintosh or a MacBook.
  • Linux-based – Based on the Linux kernel developed by Linus Torvalds; many organizations like Canonical, Red Hat and others have developed distributions (or distros), many of which differ only slightly from each other.
  • BSD & Solaris – Aimed mainly at developers.

Now there are basically four types of users –>

  1. Developers – Depending on what they work on, they choose their operating systems.
  2. Gamers – These guys don't really have any option except MS Windows.
  3. Mac users – They never think of anything other than Mac OS.
  4. Home users – These users mostly use their computers for media entertainment and web browsing, which leaves them a lot of available options.

Now, as most home users just use their computers for listening to their favorite tracks while surfing the web or watching a movie, they don't actually need to buy MS Windows or any other paid operating system when there are Linux-based operating systems available for free, because Linux is open source.

A daily home user can go for a full Linux installation in place of buying MS Windows. Linux is more resource-friendly than Windows, i.e. it uses less RAM and is compatible even with previous-generation processors, while being no less eye candy than Windows, and it is unaffected by Windows viruses. Thus you can save a lot of money on hardware (whether you invest in new hardware or salvage an old computer) as well as on the operating system, because you can get the OS and all the applications you need free of cost.

Now, just to make it easy for you to select the most suitable Linux distro as a newbie: I would advise Ubuntu if you have at least a moderate-speed internet connection; if your connection is slow, or you have no internet at all, go for Linux Mint or Sabayon.
Now, where do you get a Linux distro? You can download one from the distribution's website.

You can also order Ubuntu CDs and DVDs here for just the shipping cost, or follow magazines like Digit or Linux For You, which bundle free Linux distros every month.

So, have a happy Linux experience as a newbie!

For any further assistance regarding Linux installations, comment here…

A simple Mail client using C#

Every now and then we need to e-mail our friends, family and more. But the available suites are not only complicated but also time-consuming. They are mainly targeted at business use, like Microsoft Office Outlook or Windows Live Mail. Those of you who use such mail clients know that these programs can feel jerky and keep you waiting a minute or so just to synchronize (send/receive) mail from your accounts. What if I just need to send a mail in a couple of seconds, to a friend or to a mailbox for a subscription closing in a minute? We need faster client-side apps for mailing. I saw the need for such a program, and I'm glad to present it to you.

You can download it from here

(Screenshot of Mammail)

Building the Sample

We need to understand the parts of the program. First, develop a UI suitable for a mail client, i.e., it must contain:

i. A sender field and a receiver field.

ii. A credential panel to sign in with your Gmail account credentials (username & password).

iii. A subject box to type the subject.

iv. An attachment box to attach files.

v. A message box to fill in the mail.

Description

The UI would look something like this:

(Screenshot: UI)

Here’s the code for all developers :

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Net.Mail;
using System.Net.Mime;

namespace mail_client
{
    public partial class Form1 : Form
    {
        String path;
        MailMessage mail = new MailMessage();

        public Form1()
        {
            InitializeComponent();
        }

        // 'Login' button: a minimal credential check.
        private void button5_Click(object sender, EventArgs e)
        {
            if (textBox4.Text == "" || textBox5.Text == "")
            {
                MessageBox.Show("Please enter proper credentials\n");
            }
            else
            {
                MessageBox.Show("Successfully logged in");
            }
        }

        // 'Send' button: build the message and send it through Gmail's SMTP server.
        private void button3_Click(object sender, EventArgs e)
        {
            SmtpClient SmtpServer = new SmtpClient();
            SmtpServer.Credentials = new System.Net.NetworkCredential(textBox4.Text, textBox5.Text);
            SmtpServer.Port = 587;
            SmtpServer.Host = "smtp.gmail.com";
            SmtpServer.EnableSsl = true;
            mail = new MailMessage();

            // Recipient addresses, comma-separated.
            String[] send_from = textBox1.Text.Split(',');
            try
            {
                mail.From = new MailAddress(textBox4.Text, textBox4.Text, System.Text.Encoding.UTF8);
                for (int i = 0; i < send_from.Length; i++)
                    mail.To.Add(send_from[i]);
                mail.Subject = textBox3.Text;
                mail.Body = richTextBox1.Text;

                // Attach every file listed in the attachment box.
                for (int i = 0; i < listBox1.Items.Count; i++)
                    mail.Attachments.Add(new Attachment(listBox1.Items[i].ToString()));

                // Wrap the message body in a simple HTML page.
                string page = "<html><body><table border=2><tr width=100%><td></body></html>";
                AlternateView aview1 = AlternateView.CreateAlternateViewFromString(page + richTextBox1.Text, null, MediaTypeNames.Text.Html);
                mail.AlternateViews.Add(aview1);
                mail.IsBodyHtml = true;

                // Request a delivery-status notification e-mail on success.
                mail.DeliveryNotificationOptions = DeliveryNotificationOptions.OnSuccess;
                mail.ReplyTo = new MailAddress(textBox1.Text);
                SmtpServer.Send(mail);
                MessageBox.Show("Mail has been sent to: " + textBox1.Text);
            }
            catch (Exception x)
            {
                MessageBox.Show(x.ToString());
            }
        }

        // 'Attach' button: pick a file and add it to the attachment list.
        private void button1_Click(object sender, EventArgs e)
        {
            OpenFileDialog dialogue1 = new OpenFileDialog();
            if (dialogue1.ShowDialog() == DialogResult.OK)
            {
                listBox1.Items.Add(dialogue1.FileName);
            }
        }

        // 'Exit' button.
        private void button4_Click(object sender, EventArgs e)
        {
            Application.Exit();
        }

        private void textBox3_MouseEnter(object sender, EventArgs e)
        {
        }

        // Simple hover highlighting for the buttons.
        private void button1_MouseEnter(object sender, EventArgs e)
        {
            button1.BackColor = Color.Aqua;
        }
        private void button1_MouseLeave(object sender, EventArgs e)
        {
            button1.BackColor = Control.DefaultBackColor;
        }
        private void button2_MouseEnter(object sender, EventArgs e)
        {
            button2.BackColor = Color.Aqua;
        }
        private void button2_MouseLeave(object sender, EventArgs e)
        {
            button2.BackColor = Control.DefaultBackColor;
        }
        private void button3_MouseEnter(object sender, EventArgs e)
        {
            button3.BackColor = Color.Aqua;
        }
        private void button3_MouseLeave(object sender, EventArgs e)
        {
            button3.BackColor = Control.DefaultBackColor;
        }
        private void button4_MouseEnter(object sender, EventArgs e)
        {
            button4.BackColor = Color.Aqua;
        }
        private void button4_MouseLeave(object sender, EventArgs e)
        {
            button4.BackColor = Control.DefaultBackColor;
        }
        private void button5_MouseEnter(object sender, EventArgs e)
        {
            button5.BackColor = Color.Aqua;
        }
        private void button5_MouseLeave(object sender, EventArgs e)
        {
            button5.BackColor = Control.DefaultBackColor;
        }

        private void button1_MouseClick(object sender, EventArgs e)
        {
            button1.BackColor = Color.Gold;
        }

        // Mirror the username into the 'From' display field.
        private void textBox4_TextChanged(object sender, EventArgs e)
        {
            textBox2.Text = textBox4.Text;
        }
    }
}

Happy coding and development! :)

Surviving 15 days with(or without) SMS

Time and again, TRAI has imposed bans and limits on telecommunications, covering calls, SMS, MMS and even data. Now, backed by the Prime Minister, they have imposed a 15-day regulation under which an individual can send only 5 SMS per day and 20 KB of data per SMS. It has been imposed to curb the bulk text messaging that fuelled the exodus of North-East Indians from southern India, especially Karnataka and Maharashtra. You can read the full TOI (Times of India) article here.

Now the big question: How shall you survive?

Without a text message, without our cell phones ringing or vibrating or beeping every few minutes with an incoming text, it's a really tough time out here. So I shall recommend some smart moves:

1. Nokia Series 40 (S40) users:

Switch to Nimbuzz, eBuddy, Google Talk, or any messaging service built into your phone. You can download:

Nimbuzz.

eBuddy (for Nokia customers) or for others.

WhatsApp.

2. Symbian S60 smartphone users:

Switch to Skype, Google Talk, or WhatsApp Messenger. You can download:

Skype.

WhatsApp.

3. Android users:

Switch to Viber or Skype or WhatsApp. You can download:

Viber.

WhatsApp.

Skype.

4. iPhone users:

Switch to Skype or WhatsApp. You can download:

Skype.

WhatsApp.

5. Windows Phone users:

Switch to Skype or WhatsApp. You can download:

Skype.

WhatsApp.

6. BlackBerry boys:

Switch to WhatsApp, Google Talk, or Windows Live Messenger. You can download:

WhatsApp.

Google Talk.

Windows Live Messenger.

Enjoy these 15 days with the freedom of the internet!

How to Bypass Key Loggers?

Are you worried that your PC is infected with some kind of key logger? Do you think your boss is spying on your social accounts? Fear not! There is a simple yet effective way to beat key loggers. The catch is that most key loggers record what is typed on the keyboard, not mouse clicks. So we are going to exploit exactly that.

Just follow these steps right away.

1. Go to the Login/Sign-in page.

2. Press the Windows key and R together on your keyboard.
The Run dialogue box will open up.

3. Type in “osk” and press ENTER.

4. The on-screen keyboard will open up.

5. Enter your username and password by clicking the on-screen keys.

That’s it.

(Note: the process above works only on Windows.)