Monthly Archives: June 2013

How to Start / Create Your Own Website: The Beginner’s A-Z Guide


This tutorial shows you how to make or create a website. It is intended for the beginner and layperson, taking you step by step through the whole process from the very beginning. It makes very few assumptions about what you know (other than the fact that you know how to surf the Internet, since you’re already reading this article on the Internet). As some steps are more involved, this guide also links to selected relevant articles that you will need to click through to read for more information.

The Essential Step-by-Step Guide to Making Your Own Website

  1. Get Your Domain Name

    The first thing you need to do before anything else is to get yourself a domain name. A domain name is the name you want to give to your website. For example, the domain name of the website you’re reading is “”. To get a domain name, you have to pay an annual fee to a registrar for the right to use that name. Getting a name does not get you a website or anything like that. It’s just a name. It’s sort of like registering a business name in the brick-and-mortar world; having that business name does not mean that you also have the shop premises to go with the name.

  2. Choose a Web Host and Sign Up for an Account

    A web host is basically a company that has many computers connected to the Internet. When you place your web pages on their computers, everyone in the world will be able to connect to it and view them. You will need to sign up for an account with a web host so that your website has a home. If getting a domain name is analogous to getting a business name in the brick-and-mortar world, getting a web hosting account is analogous to renting office or shop premises for your business.

    • There are many issues involved in finding a good web host. Read up on the various things you need to look for in searching for a good web host in the article How to Choose a Web Host.
    • After you have an idea of what to look for, you can search for one from the Budget Web Hosting page. You can also find out which web host I’m currently using from the Which Web Host Do You Recommend? page.

    After you sign up for a web hosting account, you will need to point your domain to that account on your web host. Information on how to do this can be found in the guide How to Point a Domain Name to Your Website (Or What to Do After Buying Your Domain Name).

  3. Designing your Web Pages

    Once you have settled your domain name and web host, the next step is to design the web site itself. In this article, I will assume that you will be doing this yourself. If you are hiring a web designer to do it for you, you can probably skip this step, since that person will handle it on your behalf.

    • Although there are many considerations in web design, as a beginner, your first step is to actually get something out onto the web. The fine-tuning can come after you’ve figured out how to get a basic web page onto your site. One way is to use a WYSIWYG (“What You See Is What You Get”) web editor to do it. Such editors allow you to design your site visually, without having to muck around with the technical details. They work just like a normal word processor.
      There are many commercial and free web editors around. For those who don’t mind spending money on a good commercial web editor, one of the most highly-regarded WYSIWYG web editors is Dreamweaver. If you are planning to use this editor, there is an online tutorial called Dreamweaver CS5.5 Tutorial: How to Design a Website with Dreamweaver CS5.5. The tutorial takes you through all the steps of creating a fully-functional website with multiple pages and a feedback form, and provides you with the theoretical and practical foundation that will help you create and maintain your site.
      If you prefer to use free software, you can find a complete tutorial on using KompoZer, a free WYSIWYG web editor, in the article How to Design and Publish Your Website with KompoZer. Like my Dreamweaver tutorial, this one also guides you through the process of creating a website that has a home page, an about page, a site map, a links page and a feedback form. It also shows you some of the main features of the KompoZer software so that you can go on improving and updating your website on your own.
      There are many other web design software around. If you prefer not to use either of the above, you can find some others listed on the Free HTML Editors and WYSIWYG Web Editors page. I also have tutorials for a few other WYSIWYG web editors on this site.
    • After you have followed my tutorial, and are on the way to designing your website, you might want to read the article Appearance, Usability and Search Engine Visibility in Web Design as well. The article takes a brief look at some of the real world issues that every web designer must deal with.
    • An integral part of web design is search engine readiness. Search engine promotion does not start after the web site is made. It starts at the web design stage. The article 6 Tips on How to Create a Search Engine Friendly Website is a must-read. My article on How to Improve Your Search Engine Ranking on Google is also important for the simple reason that Google is the most popular search engine around, at least at the time this article was written.
    • There are many other issues regarding the design of web pages. The above will get you started. However, if you have the time after you get something out onto the web, you may want to read my other articles on Web Design and Website Promotion and Search Engine Ranking.
  4. Testing Your Website

    Although I list this step separately, this should be done throughout your web design cycle. I list it separately to give it a little more prominence, since too few new webmasters actually perform this step adequately.
    You will need to test your web pages as you design them in the major browsers: the latest versions of Internet Explorer (version 9 at the time of this writing), Firefox, Opera, Safari and Chrome. All these browsers can be obtained free of charge, so it should be no hardship to get them. Unfortunately, directly testing your site in all these browsers is the only way you can really be sure that it works the way you want it to on your visitors’ machines.

    Optional: If you have the time, you may want to read my article on how to test your website in multiple versions of Internet Explorer and check your site under earlier versions of Internet Explorer (“IE”), namely IE 8, IE 7, and IE 6. This is not strictly necessary nowadays, since the main culprit causing website problems, IE 6, is slowly disappearing from the Internet, with IE 7 following on its heels.

    If you want to improve the chances that your website will work in future versions of all web browsers, consider validating the code for your web pages. In layman’s language, this means that you should check that the underlying code of your web page, written in languages called “HTML” and “CSS”, has no syntax errors. You don’t actually need technical knowledge of HTML and CSS to validate the page, since you can use one of the numerous free web page validators around to do the hard work. On the other hand, if the validator tells you that your page has errors, it may sometimes be hard to figure out what’s wrong (and whether the error is actually a serious one) if you don’t have the requisite knowledge. Having said that, some validators actually give concrete suggestions on how to fix your code, and one of them, called “HTML Tidy”, is even supposed to be able to fix your code for you.

  5. Collecting Credit Card Information, Making Money

    If you are selling products or services, you will need some way to collect credit card information. You should read up on How to Accept Credit Cards on Your Website. I also have a step by step guide on How to Add an Order Form or a “Buy Now” button using PayPal to a Website for those using PayPal.
    If you need advertisers for your website, you might want to read How to Make Money From Your Website and the follow-up article How to Increase Your Website Revenue from Affiliate Programs. A list of advertisers and affiliate programs can be found on Affiliate Programs: Free Sponsors and Advertisers. Those companies are on the constant lookout for new web publishers to display their advertisements.

  6. Getting Your Site Noticed

    When your site is ready, you will need to submit it to search engines like Google and Bing. You can do this via each search engine’s own site submission page.

    In general, if your site is already linked to by other websites, you may not even need to submit it to these search engines. They will probably find it themselves by following the links on those websites.
    Apart from submitting your site to the search engines, you may also want to consider promoting it in other ways, such as the usual way people did things before the creation of the Internet: advertisements in the newspapers, word-of-mouth, etc. There are even companies on the Internet, like PRWeb, that can help you create press releases, which may get your site noticed by news sites and blogs. As mentioned in my article on More Tips on Google Search Engine Results Placement, you can also advertise in the various search engines. Although I only mentioned Google in that article, since that was the topic of that discussion, you can also advertise in other search engines like Bing and Yahoo!. This has the potential of putting your advertisement near the top of the search engine results page, and possibly even on other websites.
    There are also less obvious ways of promoting your website, which you might want to look into.


Naturally the above guide is not exhaustive. It is a distillation of some of the essential steps in getting started with your site. If you want more information, you should read the other articles on this site. However, the above tutorial should be enough to help you put your website on the Internet.
Copyright © 2006-2013 Christopher Heng. All rights reserved.
Get more free tips and articles like this, on web design, promotion, revenue and scripting.

JavaScript motion detection



Prerequisite knowledge

JavaScript beginner/intermediate, HTML5 beginner/intermediate and a basic knowledge of jQuery. Due to its use of the getUserMedia and audio APIs, the demo requires Chrome Canary and must be run via a local web server.


In this article, I discuss how to detect a user’s movement using a webcam stream in JavaScript. The demo shows a video gathered from the user’s webcam. The user can play notes on a xylophone built in HTML, using their own physical movements, in real time. The demo is based on two HTML5 “works in progress”: the getUserMedia API displays the user’s webcam, and the Audio API plays the xylophone notes.
I also show how to use a blend mode to capture the user’s movement. Blend modes are a common feature in languages that have a graphics API and in almost any graphics software. To quote Wikipedia about blend modes, “Blend modes in digital image editing are used to determine how two layers are blended into each other.” Blend modes are not natively supported in JavaScript but, as a blend mode is nothing more than a mathematical operation between pixels, I create the “difference” blend mode myself.
I made a demo to show where web technologies are heading. JavaScript and HTML5 will provide tools for new types of interactive web applications. Exciting!

HTML5 getUserMedia API

As of the writing of this article, the getUserMedia API is still a work in progress. With the fifth major release of HTML by the W3C, there has been a surge of new APIs offering access to native hardware devices. The navigator.getUserMedia() API provides the tools to enable a website to capture audio and video.
Many articles on different blogs cover the basics of how to use the API. For that reason, I do not explain it too much in detail. At the end of this article is a list of useful links about getUserMedia if you wish to learn more about it. Here, I show you how I enabled it for this demo.
Today, the getUserMedia API is usable only in Opera 12 and Chrome Canary, neither of which is a public release yet. For this demo, you must use Chrome Canary because it supports the AudioContext needed to play the xylophone notes; AudioContext is not supported by Opera.
Once Chrome Canary is installed and launched, enable the API. In the address bar, type: about:flags. Under the Enable MediaStream option, click the toggle to enable the API.

Figure 1. Enable MediaStream.

Lastly, due to security restrictions, you must run the sample files from a local web server, as video camera access is denied for local file:/// access.
Now that everything is ready and enabled, add a video tag to play the webcam stream. The received stream is attached to the src property of the video tag using JavaScript.
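The demo’s exact markup is not reproduced in this article; a minimal sketch of such a video tag might look like the following (the id webcam matches the jQuery selector used in the JavaScript below, and the 640×480 size matches the stream dimensions mentioned later; both are otherwise assumptions):

```html
<!-- Sketch of a video element to receive the webcam stream.
     Dimensions and id are assumptions consistent with the article's code. -->
<video id="webcam" width="640" height="480" autoplay></video>
```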

In the JavaScript, you go through two steps. The first one is to find out if the user’s browser can use the getUserMedia API.

function hasGetUserMedia() {
    return !!(navigator.getUserMedia || navigator.webkitGetUserMedia ||
              navigator.mozGetUserMedia || navigator.msGetUserMedia);
}

The second step is to try to get the stream from the user’s webcam.
Note: I’m using jQuery in this article and in the demo, but feel free to use any selector you like, such as a native document.querySelector.

var webcamError = function(e) {
    alert('Webcam error!', e);
};
var video = $('#webcam')[0];

if (navigator.getUserMedia) {
    navigator.getUserMedia({audio: true, video: true}, function(stream) {
        video.src = stream;
    }, webcamError);
} else if (navigator.webkitGetUserMedia) {
    navigator.webkitGetUserMedia({audio: true, video: true}, function(stream) {
        video.src = window.webkitURL.createObjectURL(stream);
    }, webcamError);
} else {
    //video.src = 'video.webm'; // fallback.
}

You are now ready to display a webcam stream in an HTML page. The next section provides an overview of the structure and assets used in the demo.
If you hope to use getUserMedia in production, I recommend following Addy Osmani’s work. Addy has created a shim for the getUserMedia API with a Flash fallback.
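The prefixed calls above reflect the state of browsers at the time of writing; the API has since been standardized as the promise-based navigator.mediaDevices.getUserMedia. A sketch of the equivalent feature detection, written as a pure function (an assumption of mine, not code from the demo) so it takes the navigator object as a parameter:

```javascript
// Detect the modern, promise-based getUserMedia API.
// Pass in the navigator object (e.g. hasModernGetUserMedia(window.navigator))
// so the check can also be exercised outside a browser.
function hasModernGetUserMedia(nav) {
    return !!(nav && nav.mediaDevices &&
              typeof nav.mediaDevices.getUserMedia === 'function');
}
```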

Blend mode difference

The detection of the user’s movement is performed using a blend mode difference.
Blending two images using difference is nothing more than subtracting pixel values. In the Wikipedia article about blend modes is a description of difference: “Difference subtracts the top layer from the bottom layer or the other way round, to always get a positive value. Blending with black produces no change, as values for all colors are 0.”
To perform this operation, you need two images. The code loops over each pixel and subtracts each color channel of the second image from the corresponding color channel of the first image.
For example, with two red images, the color channels at every pixel in both images are:
– red: 255 (0xFF)
– green: 0
– blue: 0
The following operations subtract the color values from these images:
– red: 255 – 255 = 0
– green: 0 – 0 = 0
– blue: 0 – 0 = 0
In other words, applying a blend mode difference to two identical images produces a black image. Let’s discuss how that is useful and where those images come from.
Taking the process step by step, first draw an image from the webcam stream in a canvas, at a certain interval. In the demo, I draw 60 images per second, which is more than you need. The current image displayed in the webcam stream is the first image that you blend into another.
The second image is another capture from the webcam, but from the previous time interval. Now that you have two images, subtract their pixel values. This means that if the images are identical – in other words, if the user is not moving at all – the operation produces a black picture.
The magic happens when the user starts to move. The image taken at the current time interval will be slightly different from the image of the previous time interval. If you subtract different values, some colors start to appear. This means that something moved between these two frames!

Figure 2. Example of a blended image from a webcam stream.

The motion detection process is now almost finished. The last step is looping over all of the pixels in the blended image to determine whether any pixels are not black.

Assets preparation

To make this demo work, you need several things that are pretty common to all websites. You need, of course, an HTML page that contains some canvas and video tags, and some JavaScript to make the application run.
Then, you need an image of the xylophone, ideally with a transparent background (PNG). In addition, you need an image for each key on the xylophone, as a separate image with a small brightness change. Used as a rollover effect, these images give the user visual feedback, highlighting the triggered note.
You need an audio file for each note’s sound; MP3 files will do. Playing a xylophone without sound wouldn’t be much fun.
Finally, I think it is important to show a video fallback of the demo in case the user can’t or doesn’t want to use their webcam. You need to create a video and encode it to mp4, ogg, and webm to cover all browsers. The next section provides a list of the tools used to encode the videos. You can find all the assets in the demo zip file.

Encode HTML5 videos

I encoded the video demo fallback into the three common formats needed to display HTML5 videos.
I had some trouble with the webm format, as most of the tools I found for encoding would not give me the option to choose a bitrate. The bitrate sets both the video file size and quality, and is usually written as kbps (kilobits per second). I ended up using the Online ConVert video converter to convert to the WebM format (VP8), which worked well.
For the mp4 version, you can use tools from the Adobe Creative Suite, such as Adobe Media Encoder or After Effects, or a free alternative such as Handbrake.
For the ogg format, I used some tools that encode to all formats at once. I wasn’t happy with the quality of their webm and mp4 output – although I couldn’t seem to change the quality settings, the ogg video was fine. You can use either Easy HTML5 Video or Miro.

Prepare the HTML

This is the last step before starting to code some JavaScript. Here are the required HTML tags. (I won’t list everything I used, such as the video demo fallback code, which you can download with the source.)
You need a simple video tag to receive the user’s webcam feed. Don’t forget to set the autoplay property. Without it, the stream pauses on the first frame received.

The video tag will actually not be displayed; its CSS display style is set to none. Instead, the received stream is drawn into a canvas so it can be used for the motion detection. Create the canvas to draw the webcam stream:

You also need another canvas to show what’s happening in real time during the motion detection:

Create a div that contains the xylophone images. Place the xylophone on top of the webcam so the user can virtually use their hand to play with it. On top of the xylophone sit the note images, hidden, which are displayed as rollovers when a note is triggered.
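The demo’s exact markup for these tags is not reproduced in this article. A minimal sketch might look like the following – the video and canvas ids match the selectors used in the JavaScript later on, while the note file names and classes are assumptions of mine, not the demo’s actual markup:

```html
<!-- Hidden video element that receives the webcam stream -->
<video id="webcam" width="640" height="480" autoplay style="display: none;"></video>

<!-- Canvas the webcam stream is drawn onto -->
<canvas id="canvas-source" width="640" height="480"></canvas>

<!-- Canvas showing the blended (motion) image in real time -->
<canvas id="canvas-blended" width="640" height="480"></canvas>

<!-- Container for the xylophone and its hidden note rollovers
     (file names and structure are assumptions) -->
<div id="xylophone">
    <img src="xylophone.png" alt="Xylophone" />
    <img class="note" src="note1.png" style="display: none;" alt="Note 1" />
    <!-- ...one image per note... -->
</div>
```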

JavaScript motion detection

The JavaScript steps to make this demo work are as follows:
  • detect if the application can use the getUserMedia API
  • detect if the webcam stream is being received
  • load the sounds of the xylophone notes
  • start a time interval and call an update function
  • at each time interval, draw the webcam feed onto a canvas
  • at each time interval, blend the current webcam image into the previous one
  • at each time interval, draw the blended image onto a canvas
  • at each time interval, check the pixel color values in the areas of the xylophone notes
  • at each time interval, play a specific xylophone note if a motion detection is found

First step: prepare variables

Prepare some variables to store what you need for the drawing and motion detection. You need references to the two canvases, a variable to store the context of each canvas, and a variable to store the last drawn webcam image. Also store the x-axis position of each xylophone note, the note sounds to play, and some sound-related variables.

var notesPos = [0, 82, 159, 238, 313, 390, 468, 544];
var timeOut, lastImageData;
var canvasSource = $("#canvas-source")[0];
var canvasBlended = $("#canvas-blended")[0];
var contextSource = canvasSource.getContext('2d');
var contextBlended = canvasBlended.getContext('2d');
var soundContext, bufferLoader;
var notes = [];

Also invert the x axis of the webcam stream so the user feels like they are in front of a mirror. This makes it a bit easier for them to reach the notes. Here is how to do that:

contextSource.translate(canvasSource.width, 0);
contextSource.scale(-1, 1);

Second step: update and draw video

Create a function called update that will be executed 60 times per second and will call other functions that draw the webcam stream onto a canvas, blend the images, and detect the motion.

function update() {
    drawVideo();
    blend();
    checkAreas();
    timeOut = setTimeout(update, 1000/60);
}

Drawing the video onto a canvas is quite easy and it takes only one line:

function drawVideo() {
    contextSource.drawImage(video, 0, 0, video.width, video.height);
}

Third step: build the blend mode difference

Create a helper function to ensure that the result of the subtraction is always positive. You could use the built-in function Math.abs, but I wrote an equivalent with bitwise operators. Most of the time, using bitwise operators results in better performance. You don’t have to understand it exactly; just use it as it is:

function fastAbs(value) {
    // equivalent to Math.abs()
    return (value ^ (value >> 31)) - (value >> 31);
}

Now write the blend mode difference. The function receives three parameters:
  • a flat array of pixels to store the result of the subtraction
  • a flat array of pixels of the current webcam stream image
  • a flat array of pixels of the previous webcam stream image
The arrays of pixels are flattened and contain the color channel values of red, green, blue and alpha:
  • pixels[0] = red value
  • pixels[1] = green value
  • pixels[2] = blue value
  • pixels[3] = alpha value
  • pixels[4] = red value
  • pixels[5] = green value
  • and so on…
In the demo, the webcam stream has a width of 640 pixels and a height of 480 pixels, so the size of the array is 640 * 480 * 4 = 1,228,800.
The best way to loop over the array of pixels is to advance the index by 4 (red, green, blue, alpha) per iteration, meaning there are only 307,200 iterations – much better.

function difference(target, data1, data2) {
    var i = 0;
    while (i < (data1.length / 4)) {
        // the color channels of the current pixel
        var red = data1[i*4];
        var green = data1[i*4+1];
        var blue = data1[i*4+2];
        var alpha = data1[i*4+3];
        ++i;
    }
}
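As a quick sanity check on the array arithmetic above, here is a small hypothetical helper (not part of the demo) that computes the flat-array index of a given channel of a given pixel:

```javascript
// Index of a color channel in a flat RGBA pixel array.
// channel: 0 = red, 1 = green, 2 = blue, 3 = alpha
function pixelIndex(x, y, width, channel) {
    return (y * width + x) * 4 + channel;
}
```

For a 640×480 stream, the alpha channel of the bottom-right pixel lands at index 1,228,799, the last slot of the 1,228,800-element array.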

You can now subtract the pixel values of the two images. For performance – and it seems to make a big difference – do not perform the subtraction if the color channel is already 0, and set the alpha to 255 (0xFF) directly. Here is the finished blend mode difference function (feel free to optimize!):

function difference(target, data1, data2) {
    // blend mode difference
    if (data1.length != data2.length) return null;
    var i = 0;
    while (i < (data1.length * 0.25)) {
        target[4*i] = data1[4*i] == 0 ? 0 : fastAbs(data1[4*i] - data2[4*i]);
        target[4*i+1] = data1[4*i+1] == 0 ? 0 : fastAbs(data1[4*i+1] - data2[4*i+1]);
        target[4*i+2] = data1[4*i+2] == 0 ? 0 : fastAbs(data1[4*i+2] - data2[4*i+2]);
        target[4*i+3] = 0xFF;
        ++i;
    }
}

I used a slightly different version in the demo to get better accuracy. I created a threshold function to apply to the color values. The function changes a pixel’s color value to black below a certain limit, and to white above it. You can use it as is:

function threshold(value) {
    return (value > 0x15) ? 0xFF : 0;
}

I also average the three color channels, which results in an image whose pixels are either black or white.

function differenceAccuracy(target, data1, data2) {
    if (data1.length != data2.length) return null;
    var i = 0;
    while (i < (data1.length * 0.25)) {
        var average1 = (data1[4*i] + data1[4*i+1] + data1[4*i+2]) / 3;
        var average2 = (data2[4*i] + data2[4*i+1] + data2[4*i+2]) / 3;
        var diff = threshold(fastAbs(average1 - average2));
        target[4*i] = diff;
        target[4*i+1] = diff;
        target[4*i+2] = diff;
        target[4*i+3] = 0xFF;
        ++i;
    }
}

The result is something like this black and white image.

Figure 3. The blend mode difference results in a black and white image.

Fourth step: blend canvas

The function to blend the images is now ready; you just need to send the right values to it: the arrays of pixels.
The JavaScript drawing API provides a method to retrieve an instance of an ImageData object. This object contains useful properties, such as width and height, and also the data property that is the array of pixels you need. Also, create an empty ImageData instance to receive the result, and store the current webcam image for the next iteration of the time interval. Here is the function that blends and draws the result in a canvas:

function blend() {
    var width = canvasSource.width;
    var height = canvasSource.height;
    // get webcam image data
    var sourceData = contextSource.getImageData(0, 0, width, height);
    // create an image if the previous image doesn't exist
    if (!lastImageData) lastImageData = contextSource.getImageData(0, 0, width, height);
    // create an ImageData instance to receive the blended result
    var blendedData = contextSource.createImageData(width, height);
    // blend the 2 images
    differenceAccuracy(blendedData.data, sourceData.data, lastImageData.data);
    // draw the result in a canvas
    contextBlended.putImageData(blendedData, 0, 0);
    // store the current webcam image
    lastImageData = sourceData;
}

Fifth step: search for pixels

The final step for this demo is the motion detection using the blended images we created in the previous section.
When preparing the assets, you placed eight images of the notes of the xylophone to use as rollovers. Use these note positions and sizes as rectangular areas to retrieve the pixels from the blended image. Then, loop over them to find white pixels.
In the loop, average the color channels and add the result to a variable. After the loop finishes, compute a global average of all the pixels in this area.
Avoid noise and small motions by setting a limit of 10. If you find an average value of more than 10, consider that something has moved since the last frame. This is the motion detection!
Then, play the corresponding sound and show the note rollover. The function looks like this:

function checkAreas() {
    // loop over the note areas
    for (var r = 0; r < 8; ++r) {
        // get the pixels in a note area from the blended image
        var blendedData = contextBlended.getImageData(
            notes[r].area.x, notes[r].area.y,
            notes[r].area.width, notes[r].area.height);
        var i = 0;
        var average = 0;
        // loop over the pixels
        while (i < (blendedData.data.length / 4)) {
            // make an average between the color channels
            average += (blendedData.data[i*4] + blendedData.data[i*4+1] + blendedData.data[i*4+2]) / 3;
            ++i;
        }
        // calculate an average of the color values of the note area
        average = Math.round(average / (blendedData.data.length / 4));
        if (average > 10) {
            // over a small limit, consider that a movement is detected
            // play a note and show a visual feedback to the user
            playSound(notes[r]);
            notes[r].visual.style.display = "block";
            $(notes[r].visual).fadeOut();
        }
    }
}

Where to go from here

I hope this brings you some new ideas to create video-based interactions, or to step into territory that has been owned by Flash developers such as myself for years: video-interactive campaigns based on either pre-rendered videos or webcam streams.
You can learn more about the getUserMedia API by following the development of the Chrome and Opera browsers, and the draft of the API on the W3C website.
Creatively, if you are looking for ideas for motion detection or video-based applications, I recommend that you follow the experiments made by Flash developers and motion designers using After Effects. Now that JavaScript and HTML are gaining momentum and new capabilities, there is a lot to learn, try and experiment with.

Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License+Adobe Commercial Rights
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License. Permissions beyond the scope of this license, pertaining to the examples of code included within this work are available at Adobe.

How to set username and password to Tomcat Manager ?

It’s very easy to set up a username and password for the Tomcat Manager. Add the role and user lines below at the end of the <tomcat-users> tag in conf/tomcat-users.xml.

<?xml version='1.0' encoding='utf-8'?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<!--
  NOTE:  By default, no user is included in the "manager-gui" role required
  to operate the "/manager/html" web application.  If you wish to use this app,
  you must define such a user - the username and password are arbitrary.

  NOTE:  The sample user and role entries below are wrapped in a comment
  and thus are ignored when reading this file. Do not forget to remove
  the <!.. ..> that surrounds them.
-->

  <role rolename="manager-gui"/>
  <role rolename="role1"/>
  <user username="tomcat" password="tomcat" roles="manager-gui"/>
  <user username="both" password="tomcat" roles="manager-gui,role1"/>
  <user username="role1" password="tomcat" roles="manager-gui"/>


Crop image on the client side with JCrop and HTML5 canvas element



Suppose you are working on a nice web application where the user can upload images to, for example, a shop catalogue (mmm… that makes me think of something :p ) but wait… you don’t want the catalogue to use the whole image you upload, only a piece of it. So, we need to crop the image.

Click for demo
I won’t go in depth on how to create the whole process; in summary:
  1. The user chooses an image and the app uploads it to the server.
  2. The web app shows the image to the user and allows him/her to crop the desired piece of the image.
  3. The crop parameters (x, y, width, height) are sent to the server, which is responsible for really cropping the image.
Here we talk about step 2: the web page that allows the user to crop (or choose some image area) and send the parameters to the server, which makes the real crop (for example using GD or ImageMagick).
The real magic comes from JCrop, a jQuery plugin to select areas within an image. As I note in the demo, in the JCrop project page there is a demo that makes a preview of the selection using two elements.
Here the we are going to make something very similar but using the HTML5 canvas element (to practice a bit :p).
The first step is to add one img element pointing to the image and one canvas element which will act as the preview:
<img src="./sago.jpg" id="target" alt="Flowers" />
<canvas id="preview" style="width:150px;height:150px;overflow:hidden;"></canvas>
Now the code that previews the selected cropped image every time the user updates the selected area:
$('#target').Jcrop({
    onChange : updatePreview,
    onSelect : updatePreview,
    aspectRatio : 1
});

function updatePreview(c) {
    if (parseInt(c.w) > 0) {
        // Show image preview
        var imageObj = $("#target")[0];
        var canvas = $("#preview")[0];
        var context = canvas.getContext("2d");
        context.drawImage(imageObj, c.x, c.y, c.w, c.h, 0, 0, canvas.width, canvas.height);
    }
}
As you can see, the main parameters are the coordinate fields (the variable ‘c’). One way to send them to the server is to store them in four hidden input fields in a form, and send them on submit.
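For example, a small helper (hypothetical, not part of JCrop) could round JCrop’s floating-point coordinates into the four integer crop parameters before placing them into those hidden fields:

```javascript
// Hypothetical helper: turn JCrop's coordinate object into the four
// integer crop parameters (x, y, w, h) to send to the server.
function cropParams(c) {
    return {
        x: Math.round(c.x),
        y: Math.round(c.y),
        w: Math.round(c.w),
        h: Math.round(c.h)
    };
}
```

With jQuery you could then copy these values into the hidden inputs, e.g. $('#x').val(cropParams(c).x), before the form is submitted.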
Looking again at the post, I think there are too many words to simply show an easy HTML5 canvas example :)

How to read a file using JavaScript?


Sample code :

    reading file
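As a minimal sketch of the idea: one way to read the text of a file the user selects in an <input type="file"> element is the File/Blob text() method, which returns a Promise (the older FileReader API with an onload callback works similarly). The #file id and the wiring below are assumptions for illustration:

```javascript
// Read the text content of a File or Blob.
// Works on any Blob, so it can also be exercised outside a file input.
function readTextFile(blob) {
    return blob.text(); // returns a Promise that resolves with the text
}

// In a page you might wire it up like this (the #file id is an assumption):
// document.querySelector('#file').addEventListener('change', function (e) {
//     readTextFile(e.target.files[0]).then(function (text) {
//         console.log(text);
//     });
// });
```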