Contents

- 1 IMAGE BASED STEGANOGRAPHY USING LSB INSERTION TECHNIQUE
- 2 ABSTRACT
- 3 INTRODUCTION:
- 4 BACKGROUND HISTORY:
- 5 THEORY:
- 6 LEAST SIGNIFICANT BIT INSERTION
- 7 1. Confidential communication and secret data storing:
- 8 2. Protection of data alteration:
- 9 3. Access control system for digital content distribution:
- 10 4. Media Database systems:
- 11 DIGITAL IMAGE PROCESSING
- 12 BACKGROUND:
- 13 What is DIP?
- 14 What is an image?
- 15 Gray scale image:
- 16 Color image:
- 17 Coordinate convention:
- 18 Image as Matrices:
- 19 Reading Images:
- 20 WRITING IMAGES:
- 21 print -fno -dfileformat -rresno filename
- 22 Data Classes:
- 23 Name Description
- 24 Image Types:
- 25 Intensity Images:
- 26 Binary Images:
- 27 Indexed Images:
- 28 RGB Image:
- 29 INTRODUCTION TO WAVELETS
- 30 Fourier Analysis
- 31 Short-Time Fourier analysis:
- 32 Wavelet Analysis
- 33 What Can Wavelet Analysis Do?
- 34 What Is Wavelet Analysis?
- 35 Number of Dimensions:
- 36 The Continuous Wavelet Transform:
- 37 The Discrete Wavelet Transform:
- 38 One-Stage Filtering: Approximations and Details:
- 39 Multiple-Level Decomposition:
- 40 Number of Levels:
- 41 Wavelet Reconstruction:
- 42 Reconstruction Filters:
- 43 Relationship of Filters to Wavelet Shapes:
- 44 Wavelet function position
- 45 The Scaling Function:
- 46 INTRODUCTION TO MATLAB
- 47 What Is MATLAB?
- 48 The MATLAB System:
- 49 Development Environment:
- 50 The MATLAB Mathematical Function:
- 51 The MATLAB Language:
- 52 Graphics:
- 53 MATLAB WORKING ENVIRONMENT:
- 54 MATLAB DESKTOP :-
- 55 GETTING HELP:
- 56 RESULTS:
- 57 CONCLUSIONS:

Steganography is a technique used to hide a message in vessel data by embedding it. The vessel data, which is visible, is known as external information, and the data which is embedded is called internal information. The external information is not of much use to the data owner.

The techniques used in steganography make it hard to detect a hidden message within an image file. With this technique we are not only sending a message but also hiding it. The steganography system is designed to encode and decode a secret file embedded in an image file with a random Least Significant Bit (LSB) insertion technique. Using this technique, the secret data are spread out among the image data in a random manner with the help of a secret key. The key generates pseudorandom numbers and identifies where, and in which order, the hidden message is laid out. The advantage of this method is that it incorporates an idea from cryptography: diffusion is applied to the secret message.
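The keyed, pseudorandom LSB insertion described above can be sketched in a few lines. This is an illustrative Python sketch, not the project's actual implementation; the function names, and the use of Python's random module as the keyed pseudorandom generator, are our own assumptions.

```python
import random

def embed_bits_keyed(cover, secret_bits, key):
    """Embed secret bits into the LSBs of the cover bytes, visiting
    byte positions in a pseudorandom order derived from the key."""
    stego = bytearray(cover)
    # The key seeds the PRNG; the same key reproduces the same order.
    positions = random.Random(key).sample(range(len(cover)), len(secret_bits))
    for bit, pos in zip(secret_bits, positions):
        stego[pos] = (stego[pos] & 0xFE) | bit   # replace the LSB only
    return bytes(stego)

def extract_bits_keyed(stego, n_bits, key):
    """Recover the embedded bits; only the key holder knows where
    (and in which order) the message bits were laid out."""
    positions = random.Random(key).sample(range(len(stego)), n_bits)
    return [stego[pos] & 1 for pos in positions]

cover = bytes(range(64))                  # stand-in for image data
message = [0, 1, 0, 0, 0, 0, 0, 1]        # the letter 'A'
stego = embed_bits_keyed(cover, message, key=2718)
recovered = extract_bits_keyed(stego, len(message), key=2718)
```

Because the embedding order depends on the key, an observer who lacks the key does not know which bytes carry message bits or in what sequence to read them.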

The information communicated comes in many forms and is used in a wide variety of applications. In a large number of these applications, it is desired that the communication be done in secret. Such secret communication ranges from the obvious cases of bank transfers, corporate communications, and credit card purchases to a large percentage of everyday e-mail. Steganography is an ancient art of embedding a message in such a way that no one, except the sender and the recipient, suspects the existence of the message. Most of the newer applications use steganography as a watermark, to protect a copyright on information. The forms of steganography vary, but unsurprisingly, innocuous spam messages are turning up more often containing embedded text. A new transform-domain technique for embedding the secret information in the integer wavelet transform of a cover image is implemented here.

A technique used to scramble a secret or confidential message in order to make it unreadable for a third party is known as cryptography. Nowadays it is commonly used in Internet communications. Cryptography can hide the content of a message, but it cannot hide the location of the secret message; this is how attackers can target even an encrypted message. Watermarking is another form of information hiding, applied to digital data such as a picture or a sound recording. The main purpose of watermarking is to protect the copyright or ownership of the data. In this technique the robustness of the embedded evidence, which can be very small, is the most important property. In watermarking, the external information, which is visible, is the valuable information.

Steganography is a technique used to make confidential information imperceptible to the human eye by embedding the message in some dummy data such as a digital image or a speech sound. There is a related research topic known as steganalysis, whose main objective is to find the stego file among a given set of files. It is a technique used to detect a suspicious image or sound file that has been embedded with crime-related information. So, one would need a "sniffer-dog program" to break steganography; however, it is very difficult to make a program that really works.

All the traditional steganography techniques have very limited information-hiding capacity: they can hide only about 10% (or less) of the data volume of the vessel. This is because those techniques either replace a special part of the frequency components of the vessel image, or replace all the least significant bits of a multivalued image with the secret information. The new steganography we are using takes an image as the vessel data, and we embed the secret information into the bit planes of the vessel. The information-hiding capacity of a true-color image is around 50%. All the "noise-like" regions in the bit planes of the vessel image can be replaced with secret data without deteriorating the quality of the image. This is known as BPCS-Steganography, which stands for Bit-Plane Complexity Segmentation Steganography.

The word steganography is of Greek origin and means "covered, or hidden writing". Its ancient origins can be traced back to 440 BC.

Steganography is a technique used nowadays to make confidential information imperceptible to the human eye by embedding it into some innocent-looking "vessel" or "dummy" data such as a digital image or a speech sound. A typical vessel is a multi-bit data structure such as a color image having red, green, and blue components. The embedded information can be extracted only by using a special extracting program and a key. The technique of steganography is totally different from "file deception" or "file camouflage" techniques.

A technique that hides secret data in a computer file, and which superficially resembles steganography, is known as "file deception" or "file camouflage". It is actually just a trick to disguise a secret-data-added file as a normal file, and it works because most computer file formats have some "don't-care" portion in the file. For instance, a JPEG, MP3, or Word file altered this way still looks like the original image, sound, or document on the computer. Some people have mistaken such a trick for a type of steganography. However, such files have conspicuously large file sizes and can easily be detected by most computer engineers. So file deception is totally different from the steganographic technique we are discussing here.

Much of the "steganography software" on the market today is based on file deception. If we find a steganography program that increases the output file size by exactly the amount we have embedded, the program is obviously file deception. If we have secret data, we should encrypt it so that it is unreadable by a third party. A well-known way to keep secret information safe is data encryption, which is based on scrambling the data using some type of secret key.

However, encrypting the data draws attention to it: it is easy for an observer to tell that the data is encrypted, and therefore that it contains something secret. From this we can see that encryption alone is not enough. There is another solution, known as steganography.

There are two types of data in steganography: one is the secret data, which is very valuable, and the other is a type of media data called "vessel", "carrier", or "dummy" data. The vessel data is essential, but it is not so valuable; it is the data in which the valuable data is "embedded". The vessel data with the secret data already embedded in it is called "stego data". From the stego data we can extract the secret, valuable data. Embedding and extracting the data require a special program and a key.

A typical vessel is image data with red, green, and blue color components in a 24-bit-per-pixel structure. The illustration below shows a general scheme of steganography.

Steganography hides secret data by embedding it in innocent-looking media data, like the Mona Lisa in the picture above. The embedded data is very safe because steganography hides both the content and the location of the secret information. There are many different methods of embedding data in the media data, and it is practically impossible to detect which method was used. Steganography can also cooperate with cryptography, in the sense that it can embed encrypted secret data and make it even safer.

The most important requirement of a steganographic technique is that the stego data carry no evidence that extra data is embedded there. In other words, the vessel data and the stego data must be very similar. The user of steganography should discard the original vessel data after embedding, so that no one can compare the stego data with the original.

It is also important that the capacity for embedding data be large: the larger, the better. Of the currently available steganography methods, the BPCS method is the best in this respect.

One of the most common techniques used in steganography today is called least significant bit (LSB) insertion. This method is exactly what it sounds like: the least significant bits of the cover image are altered so that they encode the embedded information. The following example shows how the letter A can be hidden in the first eight bytes of three pixels in a 24-bit image.

Pixels: (00100111 11101001 11001000)

(00100111 11001000 11101001)

(11001000 00100111 11101001)

A: 01000001

Result: (00100110 11101001 11001000)

(00100110 11001000 11101000)

(11001000 00100111 11101001)

Only three bits (the last bits of the first, fourth, and sixth bytes) were actually altered. LSB insertion requires, on average, that only half the LSBs in an image be changed. Since the 8-bit letter A requires only eight bytes to hide it, the ninth byte of the three pixels can be used to begin hiding the next character of the hidden message.

A slight variation of this technique embeds the message in two or more of the least significant bits per byte. This increases the hidden-information capacity of the cover object, but the cover object is degraded more, and the embedding is therefore more detectable. Other variations ensure that statistical changes in the image do not occur. Some intelligent software also checks for areas made up of one solid color; changes in these pixels are avoided, because even slight changes would cause noticeable variations in the area. While LSB insertion is easy to implement, it is also easily attacked: slight modifications to the color palette and simple image manipulations will destroy the entire hidden message.

Some examples of these simple image manipulations include image resizing and cropping.
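The worked example above (the letter A in the first eight bytes of three 24-bit pixels) can be reproduced with a short sketch. This is an illustrative Python sketch of sequential LSB embedding, not the report's MATLAB implementation; the helper name lsb_embed is our own.

```python
def lsb_embed(cover, message_bits):
    """Replace the least significant bit of each successive cover
    byte with the next message bit."""
    stego = bytearray(cover)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & 0xFE) | bit   # clear LSB, then set it
    return bytes(stego)

# The nine cover bytes from the example (three 24-bit pixels).
cover = bytes(int(b, 2) for b in
              ("00100111 11101001 11001000 "
               "00100111 11001000 11101001 "
               "11001000 00100111 11101001").split())
a_bits = [int(b) for b in format(ord("A"), "08b")]   # 01000001
stego = lsb_embed(cover, a_bits)
changed = sum(c != s for c, s in zip(cover, stego))  # bytes altered
recovered = [s & 1 for s in stego[:8]]               # read LSBs back
```

Running this confirms the claim in the text: only three of the nine bytes change, and reading the LSBs back in order recovers the bits of A.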

Applications of Steganography:

Steganography is applicable to, but not limited to, the following areas.

- Confidential communication and secret data storing.
- Protection of data alteration
- Access control system for digital content distribution.
- Media Database systems.

These areas differ in which feature of steganography each system exploits.

The “secrecy” of the embedded data is essential in this area.

Historically, steganography has been approached in this area. Steganography provides us with:

(A) Potential capacity to hide the existence of confidential data.

(B) Hardness of detecting the hidden (i.e., embedded) data.

(C) Strengthening of the secrecy of the encrypted data.

In practice, when you use steganography you must first select vessel data according to the size of the data to be embedded. The vessel should be innocuous. Then you embed the confidential data using an embedding program (one component of the steganography software) together with some key. When extracting, you (or your party) use an extracting program (another component) to recover the embedded data with the same key (a "common key" in terms of cryptography). In this case you need a "key negotiation" before you start communication.

We take advantage of the fragility of the embedded data in this application area.

The embedded data can be fragile rather than very robust; in fact, embedded data are fragile in most steganography programs.

However, this fragility opens a new direction toward an information-alteration protective system such as a "Digital Certificate Document System." The most novel point is that no authentication bureau is needed. If such a system were implemented, people could send their "digital certificate data" anywhere in the world through the Internet. No one can forge, alter, or tamper with such certificate data, and if it is forged, altered, or tampered with, this is easily detected by the extraction program.

In this area the embedded data is "hidden", but it is "explained" in order to publicize the content.

Today, digital content is more commonly distributed over the Internet than ever before. For example, music companies release new albums on their web pages, free or charged. However, in this case all the content is equally distributed to everyone who accesses the page, so an ordinary web distribution scheme is not suited for "case-by-case" and "selective" distribution. Of course, it is always possible to attach digital content to e-mail messages and send it to customers, but this costs a great deal in time and labor.

If you have some valuable content that you are willing to provide to others if they really need it, if it is possible to upload that content to the Web in some covert manner, and if you can issue a special "access key" to extract the content selectively, you will be very happy about it. A steganographic scheme can help realize this type of system.

We have developed a prototype of an "Access Control System" for digital content distribution through Internet. The following steps explain the scheme.

(1) A content owner classifies his/her digital content in a folder-by-folder manner, embeds the whole set of folders in some large vessel according to a steganographic method using folder access keys, and uploads the embedded vessel (stego data) to his/her own web page.

(2) On that web page the owner explains the content in depth and publicizes it worldwide. The owner's contact information (postal address, e-mail address, phone number, etc.) is also posted there.

(3) The owner may receive an access request from a customer who saw the web page. In that case, the owner may (or may not) create an access key and provide it to the customer (free or charged).

The most important point of this mechanism is whether or not a "selective extraction" is possible.

In this application area of steganography, secrecy is not important; what matters most is unifying two types of data into one.

Media data (photo picture, movie, music, etc.) have some association with other information. A photo picture, for instance, may have the following.

- The title of the picture and some physical object information.
- The date and the time when the picture was taken.
- The camera and the photographer’s information.

Digital image processing is an area characterized by the need for extensive experimental work to establish the viability of proposed solutions to a given problem. An important characteristic underlying the design of image processing systems is the significant level of testing and experimentation normally required before arriving at an acceptable solution. This characteristic implies that the ability to formulate approaches and quickly prototype candidate solutions plays a major role in reducing the cost and time required to arrive at a viable system implementation.

An image is defined as a two-dimensional function f(x, y), where x and y are the spatial coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When the coordinates x and y and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of DIP refers to processing a digital image by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location and value; these elements are called pixels.

Vision is the most advanced of our senses, so images play the single most important role in human perception. However, humans are limited to the visual band of the electromagnetic (EM) spectrum, whereas imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can also operate on images generated by sources that humans are not accustomed to associating with images.

There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and the output of a process are images. This is a limiting and somewhat artificial boundary. The area that lies between image processing and computer vision is image analysis (image understanding).

There are no clear-cut boundaries in the continuum from image processing at one end to complete vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-level, mid-level, and high-level processes. Low-level processes involve primitive operations, such as image preprocessing to reduce noise, contrast enhancement, and image sharpening; a low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processes involve tasks such as segmentation, description of objects to reduce them to a form suitable for computer processing, and classification of individual objects; a mid-level process is characterized by the fact that its inputs are generally images but its outputs are attributes extracted from those images. Finally, high-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with human vision.

As already noted, digital image processing is used successfully in a broad range of areas of exceptional social and economic value.

An image is defined as a two-dimensional function f(x, y), where x and y are the spatial coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.

A grayscale image can be defined as a function I(x, y) of the two spatial coordinates of the image plane.

Assume I(x, y) is the intensity of the image at the point (x, y) on the image plane.

I(x, y) takes all non-negative values. We assume that the image is bounded by a rectangle [0, a] × [0, b], that is, I: [0, a] × [0, b] → [0, ∞).

A color image can be represented by three such functions: R(x, y) for red, G(x, y) for green, and B(x, y) for blue.

An image may be continuous with respect to the x and y coordinates of the plane and also in amplitude. Converting such an image into digital form requires both the coordinates and the amplitude to be digitized. Digitizing the coordinate values is called sampling; digitizing the amplitude values is called quantization.
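The sampling and quantization steps can be illustrated with a toy sketch. This is a Python illustration of the two operations, not toolbox machinery; the helper name and the assumption that the continuous image maps into [0, 1) are our own.

```python
def sample_and_quantize(f, a, b, M, N, levels):
    """Sample the continuous image f(x, y) on an M-by-N grid over the
    rectangle [0, a] x [0, b], then quantize each sampled amplitude
    (assumed to lie in [0, 1)) to one of `levels` integer gray levels."""
    digital = []
    for i in range(M):
        row = []
        for j in range(N):
            x, y = i * a / M, j * b / N          # sampling: pick discrete coordinates
            row.append(int(f(x, y) * levels))    # quantization: discrete amplitude
        digital.append(row)
    return digital

# A synthetic "continuous" image: a simple diagonal ramp in [0, 1).
ramp = lambda x, y: (x + y) / 2.0
img = sample_and_quantize(ramp, a=1.0, b=1.0, M=4, N=4, levels=256)
```

The result is exactly the matrix of numbers described in the next paragraph: M rows, N columns, each entry an integer gray level.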

The result of sampling and quantization is a matrix of real numbers. There are two principal ways to represent digital images. Assume that an image f(x, y) is sampled so that the resulting image has M rows and N columns; the size of the image is then M×N. The values of the coordinates (x, y) are discrete quantities. For notational clarity and convenience, we use integer values for these discrete coordinates. In many image processing books, the image origin is defined to be at (x, y) = (0, 0), and the next coordinate value along the first row of the image is (x, y) = (0, 1). It is very important to keep in mind that the notation (0, 1) signifies the second sample along the first row; it does not mean that these were the actual values of the physical coordinates when the image was sampled. The figure below shows this coordinate convention. Note that x ranges from 0 to M-1 and y ranges from 0 to N-1 in integer increments.

The coordinate convention used in the toolbox to denote arrays differs from that of the preceding paragraph in two minor ways. First, instead of (x, y), the toolbox uses the notation (r, c) to indicate rows and columns. Note, however, that the order of the coordinates is the same as in the previous paragraph, in the sense that the first element of a coordinate tuple, (a, b), refers to a row and the second to a column. The other difference is that the origin of the coordinate system is at (r, c) = (1, 1); r ranges from 1 to M and c from 1 to N, in integer increments. The IPT documentation refers to these as pixel coordinates. Less frequently, the toolbox also employs another convention, called spatial coordinates, which uses x to refer to columns and y to refer to rows. This is the opposite of our use of the variables x and y.

The preceding discussion leads to the following representation for a digitized image function:

          f(0,0)     f(0,1)    ...   f(0,N-1)
          f(1,0)     f(1,1)    ...   f(1,N-1)
f(x,y) =    .          .                .
            .          .                .
          f(M-1,0)   f(M-1,1)  ...   f(M-1,N-1)

The right side of this equation represents a digital image by definition. Each element of this array is called an image element, picture element, pixel, or pel. The terms image and pixel are used throughout the rest of our discussion to denote a digital image and its elements.

A digital image can be represented naturally as a MATLAB matrix:

      f(1,1)   f(1,2)   ...   f(1,N)
      f(2,1)   f(2,2)   ...   f(2,N)
f =     .        .              .
        .        .              .
      f(M,1)   f(M,2)   ...   f(M,N)

where f(1,1) = f(0,0) (note the use of a monospace font to denote MATLAB quantities). The two representations are clearly identical, except for the shift in origin. The notation f(p, q) denotes the element located in row p and column q; for example, f(6,2) is the element in the sixth row and second column of the matrix f. Typically we use the letters M and N to denote the number of rows and columns, respectively, of a matrix. A 1×N matrix is called a row vector, an M×1 matrix is called a column vector, and a 1×1 matrix is a scalar.

Matrices in MATLAB are stored in variables with names such as A, a, RGB, real_array, and so on. All variables in MATLAB must begin with a letter and can contain only letters, numerals, and underscores. As noted previously, all MATLAB quantities are written using monospace characters. We use conventional Roman or italic notation, such as f(x, y), for mathematical expressions.

Images are read into the MATLAB environment using the function imread, whose syntax is:

imread('filename')

Format name   Description                        Recognized extensions

TIFF          Tagged Image File Format           .tif, .tiff

JPEG          Joint Photographic Experts Group   .jpg, .jpeg

GIF           Graphics Interchange Format        .gif

BMP           Windows Bitmap                     .bmp

PNG           Portable Network Graphics          .png

XWD           X Window Dump                      .xwd

Here filename is a string containing the complete name of the image file (including any applicable extension). For example, the command line

>> f = imread('chestxray.jpg');

reads the JPEG image chestxray into image array f. Note that single quotes (') are used to delimit the string filename. The semicolon at the end of a command line is used to suppress output in MATLAB; if the semicolon is not included, MATLAB displays the results of the operation(s) specified on that line. The prompt symbol (>>) marks the beginning of the command line, as it appears in the MATLAB command window.

When no path is included in the filename, imread reads the file from the current directory; if that fails, it tries to find the file on the MATLAB search path. An easy way to read an image from a specified directory is to include a full or relative path to that directory in the filename.

For example,

>> f = imread('E:\myimages\chestxray.jpg');

This reads an image from a folder called myimages on the E: drive, whereas

>> f = imread('.\myimages\chestxray.jpg');

reads an image from the myimages subdirectory of the current working directory. The Current Directory window on the MATLAB desktop toolbar displays MATLAB's current working directory and provides a simple, manual way to change it. The table above lists some of the most popular image/graphics formats supported by imread and imwrite.

The function size gives the row and column dimensions of an image:

>> size (f)

ans = 1024 1024

This function is particularly useful in programming, when used in the following form to determine the size of an image automatically:

>>[M,N]=size(f);

This syntax returns the number of rows (M) and columns (N) in the image.

The whos function displays additional information about an array. For instance, the statement

>> whos f

gives

Name       Size            Bytes      Class

f          1024x1024       1048576    uint8 array

Grand total is 1048576 elements using 1048576 bytes

The uint8 entry refers to one of several MATLAB data classes. A semicolon at the end of a whos line has no effect, so normally one is not used.

Displaying Images:

To display images on the MATLAB desktop we use the function imshow, which has the basic syntax:

imshow(f,g)

where f is an image array and g is the number of intensity levels used to display it. If g is omitted, it defaults to 256 levels. Using the syntax

imshow(f, [low high])

displays as black all values less than or equal to low, and as white all values greater than or equal to high. Values in between are displayed as intermediate intensity values using the default number of levels. The final syntax is

imshow(f, [ ])

which sets the variable low to the minimum value of array f and high to its maximum value. This form of imshow is useful for displaying images that have a low dynamic range or that have positive and negative values.
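The display scaling performed by imshow(f, [low high]) and imshow(f, [ ]) can be mimicked with a small sketch. This is a hypothetical Python helper illustrating the mapping, under the assumption (ours) that values are scaled linearly to the default 256 display levels; it is not the toolbox's code.

```python
def scale_for_display(img, low=None, high=None, levels=256):
    """Map pixel values for display: values <= low become black (0),
    values >= high become white (levels - 1), and values in between
    are mapped linearly. With low/high omitted, the image minimum and
    maximum are used, mirroring the behavior of imshow(f, [ ])."""
    flat = [v for row in img for v in row]
    if low is None:
        low = min(flat)
    if high is None:
        high = max(flat)
    span = float(high - low) or 1.0     # avoid dividing by zero
    return [[round(min(max((v - low) / span, 0.0), 1.0) * (levels - 1))
             for v in row] for row in img]

# A low-dynamic-range image: all values crammed into [10, 16].
dull = [[10, 12], [14, 16]]
bright = scale_for_display(dull)        # stretched to the full 0..255
```

A 2×2 image whose values span only 10..16 is stretched to span the full 0..255 display range, which is why this form is useful for low-dynamic-range images.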

The function pixval is used frequently to display the intensity values of individual pixels interactively. This function displays a cursor overlaid on an image. As the cursor is moved over the image with the mouse, the coordinates of the cursor position and the corresponding intensity values are shown on a display that appears below the figure window. When working with color images, the coordinates as well as the red, green, and blue components are displayed. If the left mouse button is clicked and held, pixval displays the Euclidean distance between the initial and current cursor locations.

The syntax form of interest here is pixval, which shows a cursor on the last image displayed. Clicking the X button on the cursor window turns it off.

The following statements read from disk an image called rose_512.tif, extract basic information about the image, and display it using imshow:

>> f = imread('rose_512.tif');

>>whos f

Name       Size          Bytes      Class

f          512x512       262144     uint8 array

Grand total is 262144 elements using 262144 bytes

>>imshow(f)

A semicolon at the end of an imshow line has no effect, so normally one is not used. If another image, g, is displayed using imshow, MATLAB replaces the image on the screen with the new image. To keep the first image and output a second one, we use the function figure, as follows:

>>figure ,imshow(g)

Using the statement

>> imshow(f), figure, imshow(g)

displays both images.

Keep in mind that more than one command can be written on a line, as long as the commands are properly delimited by commas or semicolons. As mentioned, a semicolon is normally used whenever it is desired to suppress screen output from a command line.

Suppose that we have just read an image h and find that using imshow produces a dull, low-contrast result. This indicates that the image has a low dynamic range, which can be remedied for display purposes by using the statement

>>imshow(h,[ ])

Images are written to disk using the function imwrite, which has the following basic syntax:

imwrite(f, 'filename')

With this syntax, the string contained in filename must include a recognized file format extension. Alternatively, the desired format can be specified explicitly with a third input argument:

>> imwrite(f, 'patient10_run1', 'tif')

Or

>> imwrite(f, 'patient10_run1.tif')

In this example the command writes f to a TIFF file named patient10_run1.

If filename contains no information on the path of the file, then imwrite saves the file in the current working directory.

The imwrite function can have other parameters, depending on the file format selected. Most of the work in the following chapter deals with either JPEG or TIFF images, so we focus attention here on these two formats.

A more general imwrite syntax, applicable only to JPEG images, is

imwrite(f, 'filename.jpg', 'quality', q)

where q is an integer between 0 and 100 (the lower the number, the higher the degradation due to JPEG compression).

For example, for q=25 the applicable syntax is

>> imwrite(f, 'bubbles25.jpg', 'quality', 25)

The image for q=15 has false contouring that is barely visible, but this effect becomes quite pronounced for q=5 and q=0. Thus, an acceptable solution with some margin for error is to compress the images with q=25. In order to get an idea of the compression achieved and to obtain other image file details, we can use the function imfinfo, which has the syntax

imfinfo filename

where filename is the complete file name of the image stored on disk.

For example,

>> imfinfo bubbles25.jpg

outputs the following information (note that some fields contain no information in this case):

Filename: 'bubbles25.jpg'

FileModDate: '04-jan-2003 12:31:26'

FileSize: 13849

Format: 'jpg'

FormatVersion: ''

Width: 714

Height: 682

BitDepth: 8

ColorType: 'grayscale'

FormatSignature: ''

Comment: { }

where the file size is in bytes. The number of bytes in the original image is computed by multiplying width by height by bit depth and dividing the result by 8; the result is 486948. Dividing this by the file size gives the compression ratio: 486948/13849 = 35.16. This compression ratio was achieved while maintaining image quality consistent with the requirements of the application. In addition to the obvious advantage in storage space, this reduction allows the transmission of approximately 35 times the amount of uncompressed data per unit time.
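The arithmetic above can be checked with a few lines (the numbers are taken from the imfinfo output quoted earlier):

```python
# Reproducing the compression-ratio computation for bubbles25.jpg.
width, height, bit_depth = 714, 682, 8     # from the imfinfo output
file_size = 13849                          # compressed size on disk (bytes)

image_bytes = width * height * bit_depth // 8   # uncompressed size: 486948
compression_ratio = image_bytes / file_size     # about 35.16
```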

The information fields displayed by imfinfo can be captured into a so-called structure variable that can be used for subsequent computations. Using the preceding example and assigning the name K to the structure variable,

we use the syntax >> K = imfinfo('bubbles25.jpg');

to store into variable K all the information generated by the command imfinfo. The information generated by imfinfo is appended to the structure variable by means of fields, separated from K by a dot. For example, the height and width of the image are now stored in the structure fields K.Height and K.Width.

As an illustration, consider the following use of the structure variable K to compute the compression ratio for bubbles25.jpg:

>> K = imfinfo('bubbles25.jpg');

>> image_bytes = K.Width * K.Height * K.BitDepth / 8;

>> compressed_bytes = K.FileSize;

>> compression_ratio = image_bytes / compressed_bytes

compression_ratio = 35.162

Note that the function imfinfo was used in two different ways. The first was to type imfinfo bubbles25.jpg at the prompt, which resulted in the information being displayed on the screen. The second was to type K = imfinfo('bubbles25.jpg'), which resulted in the information generated by imfinfo being stored in K. These two different ways of calling imfinfo are an example of command-function duality, an important concept that is explained in more detail in the MATLAB online documentation.

A more general imwrite syntax, applicable only to TIFF images, has the form

imwrite(g, 'filename.tif', 'compression', 'parameter', 'resolution', [colres rowres])

where 'parameter' can have one of the following principal values: 'none' indicates no compression; 'packbits' indicates packbits compression (the default for nonbinary images); and 'ccitt' indicates ccitt compression (the default for binary images).

The 1*2 array [colres rowres] contains two integers that give the column resolution and row resolution in dots per unit (the default values are in dots per inch). For example, if the dimensions of the image are in inches, colres is the number of dots (pixels) per inch (dpi) in the vertical direction, and similarly for rowres in the horizontal direction. Specifying the resolution by a single scalar, res, is equivalent to writing [res res].

>> imwrite(f, 'sf.tif', 'compression', 'none', 'resolution', [300 300])

Here the values of the vector [colres rowres] were determined by multiplying 200 dpi by the ratio 2.25/1.5, which gives 300 dpi. Rather than doing the computation manually, we could write

>> res=round(200*2.25/1.5);

>> imwrite(f, 'sf.tif', 'compression', 'none', 'resolution', res)

where the function round rounds its argument to the nearest integer. It is important to note that the number of pixels was not changed by these commands; only the scale of the image changed. The original 450*450 image at 200 dpi is of size 2.25*2.25 inches. The new 300-dpi image is identical, except that its 450*450 pixels are distributed over a 1.5*1.5-inch area. Processes such as this are useful for controlling the size of an image in a printed document without sacrificing resolution.

Often it is necessary to export images to disk the way they appear on the MATLAB desktop. This is especially true with plots. The contents of a figure window can be exported to disk in two ways. The first is to use the File pull-down menu in the figure window and then choose Export. With this option the user can select a location, file name, and format. More control over export parameters is obtained by using the print command:

print -fno -dfileformat -rresno filename

where 'no' refers to the number of the figure window of interest, 'fileformat' refers to one of the file formats in the table above, 'resno' is the resolution in dpi, and 'filename' is the name we wish to assign to the file.

If we simply type print at the prompt, MATLAB prints (to the default printer) the contents of the last figure window displayed. It is also possible to specify other options with print, such as a specific printing device.
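As a hedged sketch of the print command (the figure number 1 and the output name hi_res_plot are illustrative assumptions), exporting figure 1 as a 300-dpi TIFF file could look like:

>> print -f1 -dtiff -r300 hi_res_plot

This writes the contents of figure window 1 to the file hi_res_plot.tif in the current directory at the specified resolution.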

Although we work with integer coordinates, the values of the pixels themselves are not restricted to be integers in MATLAB. The table below lists the various data classes supported by MATLAB and IPT for representing pixel values. The first eight entries in the table are referred to as numeric data classes. The ninth entry is the char class and, as shown, the last entry is referred to as the logical data class.

All numeric computations in MATLAB are done using double quantities, so this is also a frequent data class encountered in image processing applications. Class uint8 also is encountered frequently, especially when reading data from storage devices, as 8-bit images are the most common representations found in practice. These two data classes, class logical, and, to a lesser degree, class uint16 constitute the primary data classes on which we focus. Many IPT functions, however, support all the data classes listed in the table. Data class double requires 8 bytes to represent a number, uint8 and int8 require one byte each, uint16 and int16 require 2 bytes each, and uint32, int32, and single require 4 bytes each.

Name      Description

double    Double-precision, floating-point numbers in the approximate range ±10^308 (8 bytes per element).

uint8     Unsigned 8-bit integers in the range [0, 255] (1 byte per element).

uint16    Unsigned 16-bit integers in the range [0, 65535] (2 bytes per element).

uint32    Unsigned 32-bit integers in the range [0, 4294967295] (4 bytes per element).

int8      Signed 8-bit integers in the range [-128, 127] (1 byte per element).

int16     Signed 16-bit integers in the range [-32768, 32767] (2 bytes per element).

int32     Signed 32-bit integers in the range [-2147483648, 2147483647] (4 bytes per element).

single    Single-precision, floating-point numbers in the approximate range ±10^38 (4 bytes per element).

char      Characters (2 bytes per element).

logical   Values are 0 or 1 (1 byte per element).

The char data class holds characters in Unicode representation. A character string is merely a 1*n array of characters. A logical array contains only the values 0 and 1, with each element stored in memory using one byte. Logical arrays are created using the function logical or by using relational operators.

The toolbox supports four types of images:

- Intensity images;
- Binary images;
- Indexed images;
- R G B images.

Most monochrome image processing operations are carried out using binary or intensity images, so our initial focus is on these two image types. Indexed and RGB color images are discussed later.

An intensity image is a data matrix whose values have been scaled to represent intensities. When the elements of an intensity image are of class uint8 or class uint16, they have integer values in the range [0, 255] or [0, 65535], respectively. If the image is of class double, the values are floating-point numbers. Values of scaled, class double intensity images are in the range [0, 1] by convention.

Binary images have a very specific meaning in MATLAB. A binary image is a logical array of 0s and 1s. Thus, an array of 0s and 1s whose values are of data class, say, uint8, is not considered a binary image in MATLAB. A numeric array is converted to binary using the function logical. Thus, if A is a numeric array consisting of 0s and 1s, we create a logical array B using the statement

B = logical(A)

If A contains elements other than 0s and 1s, use of the logical function converts all nonzero quantities to logical 1s and all entries with value 0 to logical 0s.

Using relational and logical operators also creates logical arrays.

To test whether an array is logical, we use the islogical function: islogical(C).

If C is a logical array, this function returns a 1; otherwise it returns a 0. Logical arrays can be converted to numeric arrays using the data class conversion functions.
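As a short sketch of this conversion and test (the array values here are illustrative):

>> A = [1 0 1; 0 1 0];   % numeric array of class double
>> B = logical(A);       % B is now a binary (logical) image
>> islogical(B)          % returns 1
>> islogical(A)          % returns 0, since A is numeric
>> C = double(B);        % convert the logical array back to a numeric class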

An indexed image has two components:

An integer data matrix, x.

A color map matrix, map.

Matrix map is an m*3 array of class double containing floating-point values in the range [0, 1]. The length m of the map is equal to the number of colors it defines. Each row of map specifies the red, green, and blue components of a single color. An indexed image uses "direct mapping" of pixel intensity values to color map values. The color of each pixel is determined by using the corresponding value of the integer matrix x as a pointer into map. If x is of class double, then all of its components with values less than or equal to 1 point to the first row in map, all components with value 2 point to the second row, and so on. If x is of class uint8 or uint16, then all components with value 0 point to the first row in map, all components with value 1 point to the second, and so on.
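A minimal sketch of direct mapping, using a hypothetical 2*2 matrix x and a three-color map:

>> x = [1 2; 3 1];               % class double, so values point to rows 1, 2, 3 of map
>> map = [1 0 0; 0 1 0; 0 0 1];  % rows define red, green and blue
>> imshow(x, map)                % display the indexed image
>> rgb = ind2rgb(x, map);        % expand to an M*N*3 RGB array

Here ind2rgb illustrates how an indexed image can be converted to the RGB representation discussed next.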

An RGB color image is an M*N*3 array of color pixels, where each color pixel is a triplet corresponding to the red, green, and blue components of an RGB image at a specific spatial location. An RGB image may be viewed as a "stack" of three gray-scale images that, when fed into the red, green, and blue inputs of a color monitor, produce a color image on the screen. By convention, the three images forming an RGB color image are referred to as the red, green, and blue component images. The data class of the component images determines their range of values. If an RGB image is of class double, the range of values is [0, 1].

Similarly, the range of values is [0, 255] or [0, 65535] for RGB images of class uint8 or uint16, respectively. The number of bits used to represent the pixel values of the component images determines the bit depth of an RGB image. For example, if each component image is an 8-bit image, the corresponding RGB image is said to be 24 bits deep.

Generally, the number of bits in all component images is the same. In this case the number of possible colors in an RGB image is (2^b)^3, where b is the number of bits in each component image. For the 8-bit case the number is 16,777,216 colors.
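Assuming an RGB array named rgb_image is already in the workspace (the name is illustrative), the three component images can be extracted by indexing the third dimension:

>> fR = rgb_image(:, :, 1);   % red component image
>> fG = rgb_image(:, :, 2);   % green component image
>> fB = rgb_image(:, :, 3);   % blue component image
>> (2^8)^3                    % number of colors for 8-bit components: 16777216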

Signal analysts already have at their disposal an impressive arsenal of tools. Perhaps the most well known of these is Fourier analysis, which breaks down a signal into constituent sinusoids of different frequencies. Another way to think of Fourier analysis is as a mathematical technique for transforming our view of the signal from time-based to frequency-based.

For many signals, Fourier analysis is extremely useful because the signal’s frequency content is of great importance. So why do we need other techniques, like wavelet analysis?

Fourier analysis has a serious drawback. In transforming to the frequency domain, time information is lost. When looking at a Fourier transform of a signal, it is impossible to tell when a particular event took place. If the signal properties do not change much over time (that is, if it is what is called a stationary signal) this drawback isn't very important. However, most interesting signals contain numerous nonstationary or transitory characteristics: drift, trends, abrupt changes, and beginnings and ends of events. These characteristics are often the most important part of the signal, and Fourier analysis is not suited to detecting them.

In an effort to correct this deficiency, Dennis Gabor (1946) adapted the Fourier transform to analyze only a small section of the signal at a time, a technique called windowing the signal. Gabor's adaptation, called the Short-Time Fourier Transform (STFT), maps a signal into a two-dimensional function of time and frequency.

The STFT represents a sort of compromise between the time- and frequency-based views of a signal. It provides some information about both when and at what frequencies a signal event occurs. However, you can only obtain this information with limited precision, and that precision is determined by the size of the window.

Wavelet analysis represents the next logical step: a windowing technique with variable-sized regions. Wavelet analysis allows the use of long time intervals where we want more precise low-frequency information, and shorter regions where we want high-frequency information.

Here’s what this looks like in contrast with the time-based, frequency-based, and STFT views of a signal:

You may have noticed that wavelet analysis does not use a time-frequency region, but rather a time-scale region. For more information about the concept of scale and the link between scale and frequency, see “How to Connect Scale to Frequency?”

One major advantage afforded by wavelets is the ability to perform local analysis, that is, to analyze a localized area of a larger signal. Consider a sinusoidal signal with a small discontinuity, one so tiny as to be barely visible. Such a signal easily could be generated in the real world, perhaps by a power fluctuation or a noisy switch.

A plot of the Fourier coefficients (as provided by the fft command) of this signal shows nothing particularly interesting: a flat spectrum with two peaks representing a single frequency. However, a plot of wavelet coefficients clearly shows the exact location in time of the discontinuity.

Wavelet analysis is capable of revealing aspects of data that other signal analysis techniques miss, aspects like trends, breakdown points, discontinuities in higher derivatives, and self-similarity. Furthermore, because it affords a different view of data than those presented by traditional techniques, wavelet analysis can often compress or de-noise a signal without appreciable degradation. Indeed, in their brief history within the signal processing field, wavelets have already proven themselves to be an indispensable addition to the analyst’s collection of tools and continue to enjoy a burgeoning popularity today.

Now that we know some situations when wavelet analysis is useful, it is worthwhile asking “What is wavelet analysis?” and even more fundamentally,

“What is a wavelet?”

A wavelet is a waveform of effectively limited duration that has an average value of zero.

Compare wavelets with sine waves, which are the basis of Fourier analysis.

Sinusoids do not have limited duration — they extend from minus to plus infinity. And where sinusoids are smooth and predictable, wavelets tend to be irregular and asymmetric.

Fourier analysis consists of breaking up a signal into sine waves of various frequencies. Similarly, wavelet analysis is the breaking up of a signal into shifted and scaled versions of the original (or mother) wavelet. Just looking at pictures of wavelets and sine waves, you can see intuitively that signals with sharp changes might be better analyzed with an irregular wavelet than with a smooth sinusoid, just as some foods are better handled with a fork than a spoon. It also makes sense that local features can be described better with wavelets that have local extent.

Thus far, we’ve discussed only one-dimensional data, which encompasses most ordinary signals. However, wavelet analysis can be applied to two-dimensional data (images) and, in principle, to higher dimensional data. This toolbox uses only one and two-dimensional analysis techniques.

Mathematically, the process of Fourier analysis is represented by the Fourier transform:

F(w) = integral over all time of f(t) e^(-jwt) dt,

which is the sum over all time of the signal f(t) multiplied by a complex exponential. (Recall that a complex exponential can be broken down into real and imaginary sinusoidal components.) The results of the transform are the Fourier coefficients F(w), which when multiplied by a sinusoid of frequency w yields the constituent sinusoidal components of the original signal. Graphically, the process looks like:

Similarly, the continuous wavelet transform (CWT) is defined as the sum over all time of the signal multiplied by scaled, shifted versions of the wavelet function ψ:

C(scale, position) = integral over all time of f(t) ψ(scale, position, t) dt.

The result of the CWT is many wavelet coefficients C, which are a function of scale and position.

Multiplying each coefficient by the appropriately scaled and shifted wavelet yields the constituent wavelets of the original signal:

Scaling

We’ve already alluded to the fact that wavelet analysis produces a time-scale view of a signal and now we’re talking about scaling and shifting wavelets.

What exactly do we mean by scale in this context?

Scaling a wavelet simply means stretching (or compressing) it.

To go beyond colloquial descriptions such as “stretching,” we introduce the scale factor, often denoted by the letter a.

If we’re talking about sinusoids, for example the effect of the scale factor is very easy to see:

The scale factor works exactly the same with wavelets. The smaller the scale factor, the more “compressed” the wavelet.

It is clear from the diagrams that for a sinusoid sin (wt) the scale factor ‘a’ is related (inversely) to the radian frequency ‘w’. Similarly, with wavelet analysis the scale is related to the frequency of the signal.

Shifting

Shifting a wavelet simply means delaying (or hastening) its onset. Mathematically, delaying a function f(t) by k is represented by f(t - k).

Calculating wavelet coefficients at every possible scale is a fair amount of work, and it generates an awful lot of data. What if we choose only a subset of scales and positions at which to make our calculations? It turns out rather remarkably that if we choose scales and positions based on powers of two—so-called dyadic scales and positions—then our analysis will be much more efficient and just as accurate. We obtain such an analysis from the discrete wavelet transform (DWT).

An efficient way to implement this scheme using filters was developed in 1988 by Mallat. The Mallat algorithm is in fact a classical scheme known in the signal processing community as a two-channel sub band coder. This very practical filtering algorithm yields a fast wavelet transform — a box into which a signal passes, and out of which wavelet coefficients quickly emerge. Let’s examine this in more depth.

For many signals, the low-frequency content is the most important part. It is what gives the signal its identity. The high-frequency content on the other hand imparts flavor or nuance. Consider the human voice. If you remove the high-frequency components, the voice sounds different but you can still tell what’s being said. However, if you remove enough of the low-frequency components, you hear gibberish. In wavelet analysis, we often speak of approximations and details. The approximations are the high-scale, low-frequency components of the signal. The details are the low-scale, high-frequency components.

The filtering process at its most basic level looks like this:

The original signal S passes through two complementary filters and emerges as two signals. Unfortunately, if we actually perform this operation on a real digital signal, we wind up with twice as much data as we started with. Suppose, for instance, that the original signal S consists of 1000 samples of data. Then the resulting signals will each have 1000 samples, for a total of 2000.

These signals A and D are interesting, but we get 2000 values instead of the 1000 we had. There exists a more subtle way to perform the decomposition using wavelets. By looking carefully at the computation, we may keep only one point out of two in each of the two 2000-sample sequences to get the complete information. This is the notion of down sampling. We produce two sequences called cA and cD.

The process on the right, which includes down sampling, produces DWT coefficients. To gain a better appreciation of this process, let's perform a one-stage discrete wavelet transform of a signal. Our signal will be a pure sinusoid with high-frequency noise added to it.

Here is our schematic diagram with real signals inserted into it:

Notice that the detail coefficients cD are small and consist mainly of high-frequency noise, while the approximation coefficients cA contain much less noise than does the original signal.
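Using the Wavelet Toolbox, the one-stage transform just described might be sketched as follows (the particular signal is illustrative):

>> t = linspace(0, 1, 1000);
>> s = sin(2*pi*5*t) + 0.2*randn(size(t));   % pure sinusoid plus high-frequency noise
>> [cA, cD] = dwt(s, 'db2');                 % one-stage decomposition with the db2 wavelet

The vectors cA and cD are the approximation and detail coefficients, each slightly more than half the length of s.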

You may observe that the actual lengths of the detail and the approximation coefficient vectors are slightly more than half of the length of the original signal. This has to do with the filtering process, which is implemented by convolving the signal with a filter. The convolution “smears” the signal, introducing several extra samples into the result.

The decomposition process can be iterated, with the successive approximations being decomposed in turn, so that a single signal is broken down into many lower resolution components. This is known as a wavelet decomposition tree.

Looking at a signal’s wavelet decomposition tree can yield valuable information.

Since the analysis process is iterative, in theory it can be continued indefinitely. In reality, the decomposition of a signal can proceed only until the individual details consist of a single sample or pixel. In practice, you’ll select a suitable number of levels based on the nature of a signal, or on a suitable criterion such as entropy.
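A multilevel decomposition can be sketched with wavedec, which returns the full coefficient vector C and a bookkeeping vector L (the signal s is assumed to exist in the workspace):

>> [C, L] = wavedec(s, 3, 'db1');   % three-level decomposition
>> cA3 = appcoef(C, L, 'db1', 3);   % level-3 approximation coefficients
>> cD2 = detcoef(C, L, 2);          % level-2 detail coefficients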

We've learned how the discrete wavelet transform can be used to analyze, or decompose, signals and images. This process is called decomposition or analysis. The other half of the story is how those components can be assembled back into the original signal without loss of information. This process is called reconstruction, or synthesis. The mathematical manipulation that effects synthesis is called the inverse discrete wavelet transform (IDWT). To synthesize a signal in the Wavelet Toolbox, we reconstruct it from the wavelet coefficients:

Where wavelet analysis involves filtering and down sampling, the wavelet reconstruction process consists of up sampling and filtering. Up sampling is the process of lengthening a signal component by inserting zeros between samples:

The Wavelet Toolbox includes commands, like idwt and waverec, that perform single-level or multilevel reconstruction, respectively, on the components of one-dimensional signals. These commands have their two-dimensional analogs, idwt2 and waverec2.
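As a sketch, assuming cA and cD come from a single-level dwt and C and L from wavedec as in the earlier examples:

>> s1 = idwt(cA, cD, 'db2');    % single-level reconstruction
>> s2 = waverec(C, L, 'db1');   % multilevel reconstruction from wavedec output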

The filtering part of the reconstruction process also bears some discussion, because it is the choice of filters that is crucial in achieving perfect reconstruction of the original signal. The down sampling of the signal components performed during the decomposition phase introduces a distortion called aliasing. It turns out that by carefully choosing filters for the decomposition and reconstruction phases that are closely related (but not identical), we can "cancel out" the effects of aliasing.

The low- and high-pass decomposition filters (L and H), together with their associated reconstruction filters (L' and H'), form a system of what are called quadrature mirror filters:

Reconstructing Approximations and Details:

We have seen that it is possible to reconstruct our original signal from the coefficients of the approximations and details.

It is also possible to reconstruct the approximations and details themselves from their coefficient vectors.

As an example, let's consider how we would reconstruct the first-level approximation A1 from the coefficient vector cA1. We pass the coefficient vector cA1 through the same process we used to reconstruct the original signal. However, instead of combining it with the level-one detail cD1, we feed in a vector of zeros in place of the detail coefficients vector:

This process yields a reconstructed approximation A1, which has the same length as the original signal S and which is a real approximation of it. Similarly, we can reconstruct the first-level detail D1, using the analogous process.

The reconstructed details and approximations are true constituents of the original signal. In fact, we find when we combine them that

A1 + D1 = S

Note that the coefficient vectors cA1 and cD1, because they were produced by down sampling and are only half the length of the original signal, cannot directly be combined to reproduce the signal.

It is necessary to reconstruct the approximations and details before combining them. Extending this technique to the components of a multilevel analysis, we find that similar relationships hold for all the reconstructed signal constituents.
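A hedged sketch of this idea: idwt accepts an empty matrix in place of one coefficient vector (equivalent to feeding in zeros), so each constituent can be reconstructed separately (cA1, cD1, and s are assumed to exist from a prior decomposition):

>> A1 = idwt(cA1, [], 'db2', length(s));   % approximation only; zeros replace cD1
>> D1 = idwt([], cD1, 'db2', length(s));   % detail only; zeros replace cA1
>> max(abs(A1 + D1 - s))                   % small: A1 + D1 recovers S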

That is, there are many different ways to reassemble the original signal:

In the section on "Reconstruction Filters," we spoke about the importance of choosing the right filters. In fact, the choice of filters not only determines whether perfect reconstruction is possible, it also determines the shape of the wavelet we use to perform the analysis. To construct a wavelet of some practical utility, we seldom start by drawing a waveform. Instead, it usually makes more sense to design the appropriate quadrature mirror filters, and then use them to create the waveform. Let's see how this is done by focusing on an example.

Consider the low pass reconstruction filter (L’) for the db2 wavelet.

The filter coefficients can be obtained from the dbaux command:

Lprime = dbaux(2)

Lprime = 0.3415 0.5915 0.1585 -0.0915

If we reverse the order of this vector (see wrev), and then multiply every even sample by -1, we obtain the high pass filter H’:

Hprime = -0.0915 -0.1585 0.5915 -0.3415

Next, up sample Hprime by two (see dyadup), inserting zeros in alternate positions:

HU =-0.0915 0 -0.1585 0 0.5915 0 -0.3415 0

Finally, convolve the up sampled vector with the original low pass filter:

H2 = conv(HU,Lprime);

plot(H2)

If we iterate the above process several more times, repeatedly up sampling and convolving the resultant vector with the four-element filter vector Lprime, a pattern begins to emerge:

The above curve begins to look progressively more like the db2 wavelet. This means that a wavelet’s shape is determined entirely by the coefficients of the reconstruction filters. This relationship has profound implications. It means that you cannot choose just any shape, call it a wavelet, and perform an analysis. At least, you can’t choose an arbitrary wavelet waveform if you want to be able to reconstruct the original signal accurately. You are compelled to choose a shape determined by quadrature mirror decomposition filters.

We've seen the interrelation of wavelets and quadrature mirror filters. The wavelet function is determined by the high-pass filter, which also produces the details of the wavelet decomposition.

There is an additional function associated with some wavelets, but not all of them. This is the so-called scaling function. The scaling function is very similar to the wavelet function. It is determined by the low-pass quadrature mirror filters, and thus is associated with the approximations of the wavelet decomposition. In the same way that iteratively up-sampling and convolving the high-pass filter produces a shape approximating the wavelet function, iteratively up-sampling and convolving the low-pass filter produces a shape approximating the scaling function.

Multi-step Decomposition and Reconstruction:

A multi step analysis-synthesis process can be represented as:

This process involves two aspects: breaking up the signal to obtain the wavelet coefficients, and reassembling the signal from the coefficients. We have already discussed decomposition and reconstruction at some length. Of course, there is no point in breaking up a signal merely to have the satisfaction of immediately reconstructing it. We may modify the wavelet coefficients before performing the reconstruction step. We perform wavelet analysis because the coefficients thus obtained have many known uses, de-noising and compression being foremost among them. But wavelet analysis is still a new and emerging field. No doubt, many uncharted uses of the wavelet coefficients lie in wait. The Wavelet Toolbox can be a means of exploring possible uses and hitherto unknown applications of wavelet analysis. Explore the toolbox functions and see what you discover.

MATLAB® is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation.

Typical uses include: Math and computation,

Algorithm development

Data acquisition

Modeling, simulation, and prototyping

Data analysis, exploration, and visualization

Scientific and engineering graphics

Application development, including graphical user interface building.

MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar non interactive language such as C or FORTRAN.

The name MATLAB stands for matrix laboratory. MATLAB was originally written to provide easy access to matrix software developed by the LINPACK and EISPACK projects. Today, MATLAB engines incorporate the LAPACK and BLAS libraries, embedding the state of the art in software for matrix computation.

MATLAB has evolved over a period of years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis.

MATLAB features a family of add-on application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.

The MATLAB system consists of five main parts:

This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command history, an editor and debugger, and browsers for viewing help, the workspace, files, and the search path.

This is a vast collection of computational algorithms ranging from elementary functions, like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

This is a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It allows both "programming in the small" to rapidly create quick and dirty throw-away programs, and "programming in the large" to create complete large and complex application programs.

MATLAB provides facilities for displaying vectors and matrices as graphs, as well as annotating and printing these graphs. It includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics, as well as low-level functions that allow you to fully customize the appearance of graphics and to build complete graphical user interfaces for your MATLAB applications.

The MATLAB Application Program Interface (API):

This is a library that allows you to write C and Fortran programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and reading and writing MAT-files.

The MATLAB desktop is the main MATLAB application window. The desktop contains five sub-windows:

- command window
- workspace browser
- current directory window
- command history window
- figure windows (one or more), which are shown only when the user displays a graphic

Command window:

The user types MATLAB commands and expressions at the prompt (>>) in the command window, and the output of those commands is displayed in the same window.

workspace browser:

MATLAB defines the workspace as the set of variables that the user creates in a work session.

The workspace browser shows these variables and some information about them. Double-clicking on a variable in the workspace browser launches the Array Editor, which can be used to obtain information and, in some instances, edit certain properties of the variable.

current directory window:

The Current Directory tab, which is above the Workspace tab, shows the contents of the current directory, whose path is shown in the current directory window.

For example, in the Windows operating system the path might be C:\MATLAB\Work, indicating that the directory "Work" is a subdirectory of the main directory "MATLAB", which is installed in drive C. Clicking on the arrow in the current directory window shows a list of recently used paths, and clicking the button to the right of the window allows the user to change the current directory.

MATLAB uses a search path to find M-files and other MATLAB-related files, which are organized in the computer file system. Any file run in MATLAB must reside in the current directory or in a directory that is on the search path. By default, the files supplied with MATLAB and the MathWorks toolboxes are included in the search path. The easiest way to see which directories are on the search path, or to add or modify the search path, is to select Set Path from the File menu of the desktop and then use the Set Path dialog box.

It is good practice to add any commonly used directories to the search path to avoid repeatedly having to change the current directory.
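The path can also be inspected and extended from the command line with the standard path, addpath, and which commands (the directory name below is hypothetical):

```matlab
path                        % display the current search path
addpath('C:\myproject')     % hypothetical directory added to the path
which fft                   % show which file MATLAB will run for fft
```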

command history window :

The commands the user enters in the command window are stored in the command history window, which also records commands from previous MATLAB sessions. Previously entered commands can be selected and re-executed from the command history window by right-clicking on a command or sequence of commands. This action launches a menu from which to select various options in addition to executing the commands, which is very useful when experimenting with commands in a work session.

USING THE MATLAB EDITOR TO CREATE M-FILES :

The MATLAB editor is both a text editor specialized for creating M-files and a graphical MATLAB debugger.

The editor can appear directly in a window by itself, or it can be a sub-window in the desktop.

M-files are denoted by the extension .m. The MATLAB editor window has many pull-down menus for tasks such as saving, viewing, and debugging files. Because it performs some simple checks and also uses color to differentiate between various elements of code, this text editor is recommended as the tool of choice for writing and editing M-functions.

Typing edit filename at the prompt opens the M-file filename.m in an editor window, ready for editing. It must be noted that the file must be in the current directory, or in a directory on the search path.
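A minimal sketch of an M-function that could be written in the editor; the file name average.m and its contents are an illustrative example:

```matlab
function m = average(x)
%AVERAGE Compute the mean of the elements of a vector.
%   m = average(x) returns sum(x)/length(x).
m = sum(x) / length(x);
```

Saving this as average.m in the current directory makes average([1 2 3]) callable from the command window, and typing edit average reopens it in the editor.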

The MATLAB help browser provides online help. It opens as a separate window either by clicking on the question mark symbol (?) on the desktop toolbar, or by typing helpbrowser at the prompt in the command window.

The help browser is a web browser integrated into the MATLAB desktop that displays Hypertext Markup Language (HTML) documents. The help browser consists of two panes:

- help pane
- display pane

The help pane is used to find information, and the display pane is used to view it. Self-explanatory tabs other than the navigator pane are used to perform a search operation.
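Help is also available directly from the command line; a short sketch using the standard help, doc, and lookfor commands:

```matlab
help sum          % print short help text in the command window
doc sum           % open the corresponding help browser page
lookfor fourier   % search all help entries for the word 'fourier'
```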

This project describes a technique to embed data in a color image.

Additional features that could be added to the project include support for file types other than bitmap and implementation of other steganography methods. However, this research work and software package provide a good starting point for anyone who is interested in learning about steganography.

The data extracted from the cover image depends on the pixel values of the image.

This work will be further developed to hide a secret image in the cover image.
