Two fragile image watermarking methods are proposed for image authentication. The first method is based on time-frequency analysis and the second one is based on time-scale analysis. For the first method, the watermark is chosen as an arbitrary nonstationary signal with a particular signature in the time-frequency plane. Experimental results show that this technique is very sensitive to many attacks, such as cropping, scaling, translation, JPEG compression, and rotation, making it very effective in image authentication. For the second method, based on a wavelet-domain multiresolution analysis, a quantization index modulation (QIM) embedding scheme and arbitrary frequency-modulated (FM) chirp watermarks are used in the implementation. In this blind technique, the original watermark is needed neither for the content integrity verification of the original image nor for the content quality assessment of the distorted image.
Watermarking techniques are developed for the protection of intellectual property rights. They can be used in various areas, including broadcast monitoring, proof of ownership, transaction tracking, content authentication, and copy control. In the last two decades, a number of watermarking techniques have been developed [2–10]. The requirements that a particular watermarking scheme needs to fulfill depend on the application purpose. In this paper, we focus on the authentication of images. In image authentication, there are two main objectives: (i) the verification of the image ownership and (ii) the detection of any forgery of the original data. Specifically, in the authentication, we check at the receiver side whether the embedded information (i.e., the invisible watermark) has been altered or not.
Fragile watermarking is a powerful image content authentication tool [1, 7, 8, 11]. It is used to detect any possible change that may have occurred in the original image. A fragile watermark is readily destroyed if the watermarked image has been even slightly modified. As an early work on image authentication, Friedman proposed a trusted digital camera, which embeds a digital signature in each captured image. Yeung and Mintzer proposed an authentication watermark that uses a pseudorandom sequence and a modified error diffusion method to protect the integrity of images. Wong and Memon proposed secret-key and public-key image watermarking schemes for the authentication of grayscale images. A secure watermark based on a chaotic sequence has been used for JPEG image authentication, and a statistical multiscale fragile watermarking approach based on a Gaussian mixture model has also been proposed. Many more fragile watermarking techniques can be found in the literature.
Most of the existing image watermarking methods are based on either spatial-domain or frequency-domain techniques. Only a few methods are based on joint spatial-frequency domain techniques [15, 16] or joint time-frequency domain techniques [9, 17]. One approach uses the projections of the 2D Radon-Wigner distribution to achieve the watermark detection; this technique requires the knowledge of the Radon-Wigner distribution of the original image in the detection process. In another approach, the watermark detection is based on the correlation between the 2D STFT of the watermarked image and that of the watermark image for each image pixel. In [9, 17], the Wigner distribution of the image is added to the time-frequency watermark; in this technique the detector requires access to the Wigner distribution of the original image.
In this paper, we propose two different private fragile watermarking methods: the first one is based on a time-frequency analysis, the other one is based on a time-scale analysis. Firstly, in the time-frequency-based method the fragile watermark consists of an arbitrary nonstationary signal with a particular signature in the time-frequency domain. The length (in samples) of the nonstationary signal used as a watermark can be chosen up to the total number of pixels in the image under consideration. That is, for a given image of size N × N, we are able to embed a watermark signal of N² samples or fewer. For simplicity, and without loss of generality, we consider in the sequel a square image of size N × N and a nonstationary signal of length N samples only. The locations of the image pixels used to embed the watermark samples can be chosen arbitrarily. In what follows, we choose to embed the watermark in the diagonal pixels of the image; alternative pixel locations can also be considered. Moreover, a pseudonoise (PN) sequence can be used as a secret key to modulate the watermark signal, making the time-frequency signature harder to perceive or to modify. In the extraction process, not all pixels of the original image are needed to recover the watermark but only those pixels where the watermark has been embedded. Here, these original pixels are inserted in the watermarked image itself. At the receiver, it is assumed that the legal user knows the locations of the watermark samples as well as the locations of the corresponding original pixels and the secret key (if used). If, for any reason, the original pixels are not inserted in the watermarked image, they still need to be known by the legal user for detection purposes. Once the watermark is extracted, its time-frequency representation is used to certify the original ownership of the image and to verify whether it has been modified or not.
If the watermarked image has been attacked or modified, the time-frequency signature of the extracted watermark is also modified significantly, as will be shown in the coming sections.
The second proposed fragile watermarking method, based on wavelet analysis, uses complex chirp signals as watermarks. The advantages of using complex chirp signals as watermarks are manifold; among them one can cite (i) the wide frequency range of such signals, which makes the watermarking capacity very high, and (ii) the ease of adjusting the FM/AM parameters to generate different watermarks. In this technique, the wavelet transformation decomposes the host image hierarchically into a series of successively lower resolution reference images and their associated detail images. The low-resolution image and the detail images, including the horizontal, vertical, and diagonal details, contain the information needed to reconstruct the reference image at the next higher resolution level. The detection does not require the original image; instead, it uses the special feature of the extracted complex chirp watermark signal for content authentication.
Before concluding this section, we should observe that due to its inherent hierarchical structure, the wavelet-based watermarking method provides a higher level of security, and a more precise localization of any tampering (that may occur) in the watermarked image. On the other hand, the advantage of the time-frequency-based watermarking method, compared to the proposed time-scale one, lies in its simplicity and its possibility to use a larger class of nonstationary signals as watermarks.
The paper is organized as follows. In Sections 2 and 3, we give a brief review of time-frequency analysis, introduce the time-frequency-based watermarking method, and discuss its performance through some selected examples. In Section 4, we present a brief review of the discrete wavelet transform and introduce the wavelet-based watermarking method. In Section 5, we discuss the performance of the second method through two applications: content integrity verification with tamper localization capability and quality assessment of the watermarked image. Section 6 concludes the paper.
2. Method I: Proposed Fragile Watermarking Based on Time-Frequency Analysis
2.1. Brief Review of Time-Frequency Analysis
A given signal can be represented in many ways; however, the most important ones are the time-domain and frequency-domain representations. These two representations and their related classical methods, such as the autocorrelation and/or the power spectrum, have proved to be powerful in the analysis of stationary signals. However, when the signal is nonstationary these methods fail to fully characterize it. The use of a joint time-frequency representation gives us a better understanding in the analysis of nonstationary signals. The ability of the time-frequency distribution to display the spectral content of a given nonstationary signal makes it a very powerful tool in the analysis of such signals. As an illustration, let us consider the analysis of a nonstationary signal consisting of a quadratic frequency-modulated (FM) signal given by

x(t) = A(t) exp[j2π(a t³/3 + b t²/2 + c t)],

where A(t) is 1 for 0 ≤ t ≤ T and zero elsewhere, so that the instantaneous frequency of the signal follows the quadratic law f(t) = a t² + b t + c; a, b, and c are real coefficients. The signal spectrum, displayed in the bottom plot of Figure 1, gives no indication of how the frequency of the signal changes with time. The time-domain representation, displayed in the left plot of Figure 1, is also limited and does not provide full information about the signal. However, a time-frequency representation, displayed in the center plot of the same figure, clearly reveals the quadratic relation between frequency and time.
Figure 1. Time-frequency representation of a quadratic FM signal: the signal's time domain representation appears on the left, and its spectrum on the bottom.
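As a concrete sketch, such a quadratic FM chirp can be synthesized and its quadratic frequency law verified numerically. The coefficient values a, b, c and the length N below are hypothetical choices for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical coefficient values and length, chosen for illustration.
a, b, c = 2e-6, 0.0, 0.05     # instantaneous frequency f(t) = a*t**2 + b*t + c
N = 256                       # signal length in samples (unit sampling frequency)

t = np.arange(N)
phase = 2 * np.pi * (a * t ** 3 / 3 + b * t ** 2 / 2 + c * t)
x = np.exp(1j * phase)        # unit-magnitude quadratic FM chirp

# Instantaneous frequency estimated from the phase increments; it should
# follow the quadratic law (evaluated approximately at midpoints t + 0.5).
inst_freq = np.diff(np.unwrap(np.angle(x))) / (2 * np.pi)
```

The coefficients are kept small enough that the instantaneous frequency stays below the Nyquist limit of 0.5 cycles/sample.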
Note that, theoretically, we have an infinite number of possibilities to generate a quadratic FM signal. This could be accomplished by simply choosing different combinations of values for a, b, and c. In the sequel, we will select a particular quadratic FM signal, with arbitrary start and stop times, as a watermark for our application. We emphasize here that other nonstationary signals are also feasible choices.
2.2. Watermark Embedding and Extraction
As stated earlier, we can select one nonstationary signal, out of an infinite number, as our watermark. It is the particular features of this signal in the time-frequency domain that are used to identify the watermark and, consequently, its ownership. In the discrete-time domain, the selected watermark signal can be written as w(n) = x(t)|_{t=n}, for n = 0, 1, …, N − 1.
Here, we assume a unit sampling frequency. In what follows, we set the signal length equal to N, where we assume, for simplicity, that N × N is the size of the image to be watermarked. In Figure 2, we display the original unwatermarked baboon image used in our analysis. Any arbitrary N pixels (out of the total N² pixels) of the image are potential candidates to hide the watermark. In this presentation, we have chosen the main diagonal pixels, from top left to bottom right, as the points of interest. That is, each sample of the quadratic FM watermark signal is added to a diagonal image pixel. Note that if we choose to use the secret key, the watermark signal is first multiplied by the PN sequence and then added to the original diagonal pixels. Also note that in some cases, the watermark signal may have to be scaled by a real number before it is added to the original pixels. However, in our examples, we have found that a unitary scale coefficient is adequate to perform the task. The watermarked image is displayed in Figure 3. We observe that there is no apparent difference between the marked and unmarked images. In addition, the watermark is well hidden and unnoticeable.
We stress again that (i) the number of image pixels used to embed the watermark signal samples and (ii) their locations in the original image can be chosen arbitrarily. Indeed, we can choose to embed the watermark in all image pixels by simply selecting an equal number of samples for the watermark signal. However, this number and the corresponding pixel locations must be known to the legal user of the data.
To extract the watermark, we need to remove the quadratic FM samples from the diagonal pixels of the watermarked image. For that, we need the values of the original image pixels at those particular positions. These original pixels should be known to a legal user. They can be transmitted independently, or they can be transmitted in the watermarked image itself. For instance, in the watermarked image in Figure 3, we have inserted these original pixels in the watermarked image. We have done this by augmenting the watermarked image with an extra diagonal and allocating this upper diagonal (the diagonal just above the main one) to contain the required original pixels. Obviously, any other locations (in the watermarked image) can alternatively be used to insert the original pixels. Similarly, if the PN sequence is used, it should also be known to the legal user at the receiving end in order to extract the watermark. This sequence can also be transmitted independently or hidden in the watermark itself (using a procedure similar to the one used for the needed original pixels). Once we have extracted the watermark samples, we use a time-frequency distribution (TFD) to analyse their content.
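A minimal sketch of this diagonal embedding and extraction, assuming a grayscale image stored as a NumPy array; the helper names and the unit scale factor are illustrative, not from the paper:

```python
import numpy as np

def embed_diagonal(image, watermark, scale=1.0):
    # Add one watermark sample to each main-diagonal pixel.
    marked = image.astype(float)
    idx = np.arange(len(watermark))
    marked[idx, idx] += scale * watermark
    return marked

def extract_diagonal(marked, original_diag, scale=1.0):
    # Recover the watermark from the diagonal, given the original
    # diagonal pixel values known to the legal user.
    idx = np.arange(len(original_diag))
    return (marked[idx, idx] - original_diag) / scale

N = 8
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(N, N)).astype(float)
wm = np.cos(0.3 * np.arange(N))            # stand-in for the quadratic FM samples

marked = embed_diagonal(img, wm)
recovered = extract_diagonal(marked, np.diag(img))
```

In practice the recovered samples would then be passed to a TFD for signature verification.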
In the literature, we can find many TFDs. The choice of a particular one depends on the specific application at hand and the representation properties that are suitable for this application. Since we select a monocomponent quadratic FM signal as the watermark (refer to Figure 1), we can clearly and unambiguously recognise our time-frequency signature by simply using a windowed Wigner-Ville distribution (WVD) of the signal. The windowed WVD is defined as

W(t, f) = ∫ h(τ) z(t + τ/2) z*(t − τ/2) e^{−j2πfτ} dτ,

where z(t) is the analytic signal associated with the watermark signal and h(τ) is the considered window. If we decide to use a more complex watermark signal, such as the multicomponent signal displayed in Figure 4, the WVD would not be appropriate as it would have cross-terms which might hide the actual feature of our signature. In this case, a reduced interference TFD is more appropriate [19, 20]. The watermarking procedure used for multicomponent signals is similar to that used for monocomponent signals. Consequently, one can select any arbitrary pattern in the time-frequency domain as a signature without any additional computational load compared to the illustrative quadratic FM signal used in our examples.
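For illustration, a simplified discrete windowed (pseudo-) WVD can be computed directly from the definition above. This is a sketch, not the paper's implementation: it returns the magnitude of the lag-to-frequency Fourier transform, and the window length is an arbitrary choice:

```python
import numpy as np

def pseudo_wvd(z, win_len=31):
    # Windowed (pseudo-) Wigner-Ville distribution of an analytic signal z:
    # the instantaneous autocorrelation z[n+m] * conj(z[n-m]) is windowed in
    # the lag variable m, then Fourier-transformed lag -> frequency.
    n_samples = len(z)
    half = win_len // 2
    window = np.hanning(win_len)
    kernel = np.zeros((n_samples, win_len), dtype=complex)
    for n in range(n_samples):
        for i, m in enumerate(range(-half, half + 1)):
            if 0 <= n + m < n_samples and 0 <= n - m < n_samples:
                kernel[n, i] = window[i] * z[n + m] * np.conj(z[n - m])
    # Note: the WVD kernel doubles the signal frequency, so a tone at
    # normalized frequency f peaks near frequency bin 2 * f * win_len.
    return np.abs(np.fft.fft(kernel, axis=1))
```

For a pure tone at normalized frequency 0.1, the distribution concentrates its energy along a horizontal ridge, as expected for a stationary sinusoid.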
Figure 4. Reduced-interference distribution of a multicomponent signal consisting of 2 quadratic FM components (with opposite instantaneous frequencies).
3. Results and Performance for Method I
In this section, we evaluate the performance of the proposed fragile watermarking method. For that, we consider the time-frequency analysis of the extracted watermark when the watermarked image has been subjected to some common attacks such as cropping, scaling, translation, rotation, and JPEG compression.
For the cropping, we choose to crop only the first row of pixels of the watermarked image (leaving all the other rows untouched); for the scaling, we choose the factor value 1.1; for the translation, we choose to translate the whole watermarked image by only 1 column to the right; for the rotation, we rotate the whole watermarked image by 1° anticlockwise; for the compression, we choose a JPEG compression at a quality level equal to 99%. Visually, the effect of these attacks on the watermarked image is unnoticeable. This is because the chosen values are very close to 1 (i.e., no scaling), 0 columns (i.e., no translation), 0° (i.e., no rotation), and 100% (i.e., no compression). Owing to space limitations, the various attacked watermarked images are not shown here (they look very similar to the unattacked watermarked image displayed in Figure 3).
Before presenting the results that correspond to the images subjected to attacks, let us first present the TFD of the extracted watermark when there has been no attack. In Figure 5(a), we display the TFD of the extracted watermark before the PN effect is removed, and in Figure 5(b) we display the TFD of the extracted watermark after decoding with the correct PN code. It is clear from these two figures that any attempt by an illegal user to identify the owner of the image from the TFD without knowing the correct PN code (i.e., the secret key) would not succeed.
Figure 5. TFDs of the extracted watermark with no attack: (a) before removing the PN effect and (b) after removing the PN effect.
In the following examples, we have not used the PN sequence in the watermarking process in order to focus on the effects of the attacks only (we obtained similar results when the PN is used). From each attacked image, we extract the watermark signal, as discussed in the previous section, and analyze it using a windowed WVD. The results of this operation are shown in Figure 6. These TFDs are drastically distorted in comparison with the TFD of the watermark signal extracted from the unattacked watermarked image (see Figure 5(b)).
Figure 6. TFDs of the extracted watermark for (a) a JPEG compression attack, (b) a scaling attack (factor 1.1), (c) a translation attack, (d) a rotation attack (1° rotation), and (e) a cropping attack.
Although the plots in Figure 6 show the visual impact of the considered attacks on the watermark time-frequency representations, they do not quantify the amount of distortion caused to the watermark or the image. To quantify the distortion, we evaluate the similarity, expressed in terms of the normalized correlation coefficient ρ, between the TFD of the extracted watermark and that of the original watermark. We define this normalized correlation coefficient as

ρ = Σ_{i=1}^{M} w_o(i) w_e(i) / √( Σ_{i=1}^{M} w_o(i)² · Σ_{i=1}^{M} w_e(i)² ),

where w_o is obtained by reshaping the 2D TFD of the original watermark into a 1D sequence from which we remove its mean value, w_e is obtained in a similar way from the TFD of the extracted watermark, and M is the total number of time-frequency points in the respective TFDs under consideration. The value of ρ belongs to the interval [−1, 1] and is equal to unity if the TFD of the extracted watermark and that of the original watermark are exactly the same. Table 1 displays the values of ρ that correspond to the attacks considered earlier. These values are quite low, indicating that the proposed watermarking scheme is very sensitive to the small changes that may result from various types of attacks.
Table 1. Similarity measure between the TFD of the original watermark and that of the extracted watermark, when the watermarked image is subjected to various attacks.
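The similarity measure defined above can be sketched as follows, assuming the TFDs are real-valued 2D arrays:

```python
import numpy as np

def tfd_similarity(tfd_orig, tfd_extr):
    # Normalized correlation between two TFDs: each is reshaped into a 1D
    # sequence and made zero-mean, then correlated and normalized.
    a = tfd_orig.ravel() - tfd_orig.mean()
    b = tfd_extr.ravel() - tfd_extr.mean()
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))
```

The result is 1 for identical TFDs and drops towards 0 (or below) as the extracted watermark's signature is destroyed by an attack.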
It is worth observing that any attack on the watermarked image that (i) does not affect any of the pixels where the watermark signal is embedded and, in addition, (ii) does not relocate any of these embedded pixels from its original position, will not be detected at the receiver end. However, this situation can easily be avoided by increasing the length of the nonstationary watermark signal so as to watermark a larger number of the original image pixels. As stated above, the length of the watermark signal can be chosen up to the total number of pixels of the unwatermarked original image.
4. Method II: Proposed Fragile Watermarking Based on Time-Scale Analysis
In this fragile multiresolution watermarking scheme, a complex FM chirp signal is embedded in the original image using wavelet analysis.
A discrete wavelet transform is used to decompose the original image into a series of successively lower resolution reference images and their associated detail images. The low-resolution image and the detail images, including the horizontal, vertical, and diagonal details, contain the information needed to reconstruct the reference image at the next higher resolution level.
4.1. Brief Review of the Discrete Wavelet Transform (DWT)
The two-dimensional DWT, of a dyadic decomposition type, considered here is computed by filtering the rows and columns of the image with a low-pass/high-pass filter pair and downsampling by two in each direction:

LL_l = ↓2[h ∗ (↓2[h ∗ LL_{l−1}])], with LH_l, HL_l, and HH_l obtained analogously by applying the high-pass filter g along one or both directions,

where h represents the low-pass filter, g the high-pass filter, l the DWT decomposition level, and LL_0 the input image.
Figure 7 illustrates a two-level wavelet decomposition of the Lena image. Here, (LL) represents the low-frequency band, (HH) the high-frequency band, (LH) the low-high frequency band, and (HL) the high-low frequency band. For image quality purposes, the frequency bands (LL) and (HH) are not suitable for use in the watermarking process.
Figure 7. A two-level wavelet decomposition of the Lena image.
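A one-level 2D Haar decomposition and its exact inverse can be sketched with simple 2 × 2 block sums and differences, a minimal stand-in for the filter-bank formulation above (image dimensions are assumed even):

```python
import numpy as np

def haar_dwt2(img):
    # One level of the orthonormal 2D Haar DWT via 2x2 block combinations.
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # low-low (reference) band
    lh = (a + b - c - d) / 2.0   # low-high band
    hl = (a - b + c - d) / 2.0   # high-low band
    hh = (a - b - c + d) / 2.0   # high-high (diagonal) band
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # Inverse transform, reconstructing the image exactly.
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 0::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out
```

The transform is orthonormal, so the total energy of the four subbands equals that of the input; multi-level decomposition simply reapplies haar_dwt2 to the LL band.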
4.2. Proposed Multiresolution Watermark Embedding Scheme
Figure 8 displays a block diagram of the proposed multiresolution watermarking technique. The various steps of this technique are described below.
Figure 8. The block diagram of the proposed wavelet-based watermarking technique.
Step 1 (discrete wavelet transform of the original image).
A level-l (in the following analysis we use l = 3) DWT of the original image is performed using Haar bases. The obtained wavelet coefficients are denoted as C^l(i, j).
Step 2 (generation of the watermark bits).
Every value of the real part, Re{w(k)}, and every value of the imaginary part, Im{w(k)}, of the unit-amplitude complex watermark sample w(k) is quantized into an integer value from 0 to 127. Each of the quantization values is digitally coded using a 7-bit digital code.
Specifically, a given real part value Re{w(k)} is digitally coded into a 7-bit code labeled b_r(i), where i represents one of the 7 digit positions in this 7-bit code (i.e., i takes values from 1 to 7). In a similar way, a given imaginary part value Im{w(k)} is digitally coded into a 7-bit code labeled b_im(i).
Step 3 (generation of the key).
A random sequence is generated and used to randomly select the various image pixels to be used in the watermarking process.
Step 4 (procedure to embed the watermark).
The embedding of a particular watermark bit, 0 or 1, is based on the QIM quantization technique. To elaborate, let us denote the l-th level wavelet coefficient of the original image as C^l_θ(i, j), where the subscript θ indicates a horizontal (H) or vertical (V) detail coefficient, and i and j are the indices of the spatial location under consideration. Note that for an image of size N × N, the detail subbands are of size N/2 × N/2 for l = 1, N/4 × N/4 for l = 2, and N/8 × N/8 for l = 3, respectively. In order to embed a watermark sample, consisting of a real part and an imaginary part each coded with 7 bits, we consider an image block of size 16 × 8 pixels (8 × 8 pixels for each part). An illustrative example is shown in Figure 9, where the 7 bits of both the real and imaginary parts of a watermark sample are embedded at different levels. As we see in Figure 9, the first bit, or the most significant bit (MSB), is embedded in the third level (l = 3), the second and third bits are embedded in the second level (l = 2), and the last four bits are embedded at level one (l = 1). The HL and LH bands are selected for watermark embedding as illustrated in Figure 9, and the corresponding wavelet coefficient is mapped into a value 0 or 1 according to the quantization function (refer to Figure 10 for a graphical illustration)

Q(C^l_θ(i, j)) = 0 if ⌊C^l_θ(i, j)/Δ⌋ is even, and Q(C^l_θ(i, j)) = 1 if ⌊C^l_θ(i, j)/Δ⌋ is odd,
where Δ is a preselected quantization step. In practice, the quantization step needs to be adjusted according to the requirements on the image quality: smaller values of Δ result in a higher peak signal-to-noise ratio (PSNR) of the watermarked image and, consequently, a higher image quality. Lastly, the watermarked wavelet coefficients are obtained in the following way. If Q(C^l_θ(i, j)) already equals the watermark bit to be embedded, then no change in this wavelet coefficient is necessary; that is, the watermarked wavelet coefficient is C̃^l_θ(i, j) = C^l_θ(i, j).
If Q(C^l_θ(i, j)) differs from the watermark bit, the wavelet coefficient is then shifted to its neighboring quantization step as given by

C̃^l_θ(i, j) = Δ · round(C^l_θ(i, j)/Δ),

where the operation "round(·)" rounds the element to the nearest integer towards positive infinity; the shifted coefficient then lies on the adjacent quantization step, whose cell index has the required parity. The watermarked wavelet coefficients are then dispersed using the generated key.
Figure 9. A pair of clusters of wavelet coefficients for embedding the real and imaginary parts of the k-th watermark sample.
Figure 10. The quantization procedure of a given wavelet coefficient.
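The parity-based quantization and coefficient shifting described above can be sketched as follows; the step value and the handling of coefficients that sit exactly on a cell boundary are assumptions:

```python
import math

DELTA = 8.0  # quantization step (assumed value, for illustration)

def qim_bit(coeff, delta=DELTA):
    # Quantization function Q: 0 if the coefficient falls in an even
    # quantization cell, 1 if it falls in an odd one.
    return int(math.floor(coeff / delta)) % 2

def qim_embed(coeff, bit, delta=DELTA):
    # Leave the coefficient alone when its cell parity already encodes the
    # bit; otherwise shift it up to the adjacent cell boundary.
    if qim_bit(coeff, delta) == bit:
        return coeff
    shifted = delta * math.ceil(coeff / delta)
    if qim_bit(shifted, delta) != bit:   # coefficient sat exactly on a boundary
        shifted += delta
    return shifted
```

Extraction then simply reapplies qim_bit to the received coefficient, which is what makes the scheme blind.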
Step 5 (inverse wavelet transform).
The final watermarked image is obtained by an inverse DWT of the watermarked wavelet coefficients C̃^l_θ(i, j), using Haar bases.
4.3. An Illustrative Example
To illustrate the validity of the proposed method, we consider watermarking a Lena image. In this example, we use a level-3 DWT. The quantization steps selected here are the same as those used in the literature; specifically, we set Δ = 16, 8, 4 for l = 3, 2, 1, respectively. The result of the operation is displayed in Figure 11.
Figure 11. Watermark embedding example: (a) unwatermarked Lena image and (b) watermarked Lena image (PSNR = 45.97 dB).
We recall here that the quality of the watermarked image depends on the choice of the quantization step Δ: the smaller the value of Δ, the higher the PSNR of the watermarked image. For an original image I and its watermarked version Ĩ, each of size N × N with 255 gray levels, the PSNR is defined as

PSNR = 10 log₁₀ [ 255² / ( (1/N²) Σ_{i,j} (I(i, j) − Ĩ(i, j))² ) ] dB.
In our Lena example, the PSNR of the watermarked image displayed in Figure 11(b) is found to be equal to 45.97 dB.
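The PSNR computation for 255-gray-level images can be sketched as:

```python
import numpy as np

def psnr(original, watermarked, peak=255.0):
    # PSNR between an original image and its watermarked version,
    # following the 255-gray-level definition above.
    mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

Values above roughly 40 dB, such as the 45.97 dB reported here, are generally considered visually transparent.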
4.4. Watermark Extraction and Performance Against Attacks
4.4.1. Watermark Extraction Procedure
This section presents the procedure used to extract the watermark at the receiver end. We note that the extraction procedure is blind; that is, neither the original unwatermarked image nor the original watermark is required in the extraction and verification stages. However, the legal user needs to know the key used in the random permutation of the embedding locations, the wavelet type, the value of the quantization parameter Δ at each level, and the quantization function Q.
Figure 12 displays a block diagram of the watermark extraction and verification procedure. The various steps of this procedure are outlined below.
Figure 12. A block diagram illustrating the watermark extraction and verification procedure.
Step 1 (DWT of the received image).
The received image could be the watermarked image itself or the watermarked image altered by attacks. A level-l DWT (the same level as that used in the embedding process) of the received image is performed using Haar bases. The resulting wavelet coefficients are denoted as Ĉ^l_θ(i, j).
Step 2 (Extraction of the watermark bits).
Based on the watermark embedding locations provided by the key, each of the wavelet coefficients obtained in Step 1 is quantized into the symbol "0" or "1" using the same quantization function employed during the embedding process, namely, (6), that is, according to the odd or even quantization cell in which the coefficient falls. The 7-bit codes recovered in this way yield the extracted real part ŵ_r(k) and imaginary part ŵ_im(k) of the complex watermark signal sample at time instant k. The extracted watermark bits are thus used to reconstruct the watermark sample as ŵ(k) = ŵ_r(k) + j ŵ_im(k).
Without resorting to the original watermark, the image content authentication can be performed by simply evaluating the magnitude of the extracted chirp watermark signal. This magnitude should be constant and equal to unity, since our original watermark is an FM complex chirp signal whose magnitude is equal to one.
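A sketch of this magnitude check, flagging the samples whose magnitude deviates from unity; the tolerance value is an assumption:

```python
import numpy as np

def authenticate(extracted, tol=0.05):
    # Return the indices of extracted watermark samples whose magnitude
    # deviates from the expected unit value by more than `tol`.
    mags = np.abs(extracted)
    return np.flatnonzero(np.abs(mags - 1.0) > tol)
```

An empty result means the content is authentic; any returned indices point directly at the tampered authentication blocks, as illustrated in Section 5.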
4.4.2. Performance Against Attacks
Here, we investigate the sensitivity of the proposed watermarking scheme for the following attack scenarios:
(i)JPEG compression of quality factors 90%, 80%, 70%, 60%, 50%, and 40%;
(ii)histogram equalization (uniform distortion);
(iii)sharpening (high-pass filtering)—processed by Adobe Photoshop 7.0;
(iv)blurring (low-pass filtering)—processed by Adobe Photoshop 7.0;
(v)additive Gaussian noise (variance = 0.01);
(vi)salt-and-pepper noise (impulse noise appearing as randomly occurring white and black pixels, with values set to 255 and 0, respectively).
Specifically, we evaluate the performance of the proposed watermarking technique by considering the extraction of the watermark from the watermarked Lena image in Figure 11(b) when subjected to each of the above attacks. The performance is measured in terms of the bit-error-rate (BER) of the extracted watermark bits, defined as

BER = N_e / N_t,

where N_e is the number of bits in error and N_t is the total number of watermark bits used in the watermarking process.
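The BER computation can be sketched as:

```python
def bit_error_rate(sent_bits, received_bits):
    # BER = (number of bit positions that differ) / (total number of bits).
    assert len(sent_bits) == len(received_bits)
    errors = sum(s != r for s, r in zip(sent_bits, received_bits))
    return errors / len(sent_bits)
```

A BER of zero corresponds to perfect watermark recovery, the no-attack case reported below.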
In our Lena example, we used a level-3 DWT; consequently, the BER of the extracted watermark is evaluated at each of the three wavelet decomposition levels. In Table 2, we provide the BER values obtained for the different JPEG compression attacks, and in Table 3 we provide the BER values that correspond to the other types of attacks.
Table 2. Bit error rate (BER) values of the extracted watermarks obtained for the JPEG compression attacks for various values of the quality factor (QF), at each DWT level l.
Table 3. Bit error rate (BER) values of the extracted watermarks obtained for other types of attacks, at each DWT level l.
In addition, we have evaluated the PSNR of the distorted watermarked image for each of the attacks stated above. The results are summarized in Table 4.
Table 4. Peak signal-to-noise ratio (PSNR) values (in dB) of the distorted watermarked Lena image when subjected to various attacks.
We note that the watermark embedded in a higher decomposition level (low frequency band) has better resistance against distortions. Also, note that the embedded watermark can be fully recovered without any bit error when there is no attack.
5. Performance Study for Method II
In this section we demonstrate the performance of the wavelet-based watermarking method through two applications. In the first application we study the content integrity verification with localization capability. In the second application, we study the quality assessment of the watermarked content by investigating the extracted complex chirp watermark in the absence of the original watermark.
5.1. Content Integrity Verification without Resorting to the Original Watermark
Here we present how to check the integrity of the watermarked image content and how to localize any tampering in the image, without knowing the original watermark. Specifically, our aim is to detect and locate any malicious change, such as feature adding, cropping, or replacement, that may have occurred in the watermarked image. The detection is performed by simply extracting the watermark complex chirp signal and then evaluating its magnitude. Recall that this magnitude should be constant and equal to unity if the watermarked image has not been subjected to any attack.
As an illustration, consider a Lena image of 256 × 256 pixels. The Lena image is virtually partitioned into blocks of size 16 × 8 pixels each. The resulting 512 blocks are labeled from 1 to 512 in a columnwise order, as shown in Figure 13. The watermark complex signal length is chosen equal to 512 samples. Each of these samples is embedded (using our proposed scheme) in one of the 512 image blocks, whereby the upper 8 × 8 pixels of the block are used to embed the sample's real part and the lower 8 × 8 pixels of the block are used to embed the sample's imaginary part. Note that, for simplicity and illustrative purposes, we assume here that no random permutation key is used.
Figure 13. Virtual partitioning of a Lena image of size 256 × 256 pixels into blocks of size 16 × 8 pixels each, indexed from 1 to 512 in a columnwise order.
If no alteration occurs in the watermarked image, the detector, after processing the image by blocks of size 16 × 8 pixels each, would yield for each block a watermark sample of magnitude almost equal to one. Figure 14 displays the result of the detection operation for our example. As expected, the magnitude of each sample is approximately equal to unity.
Figure 14. Magnitudes of watermark samples obtained for each of the 512 blocks, when no alteration occurs in the received watermarked image.
Now we assume that the Lena image has been subjected to an attack. First, we consider that the attack has occurred in one single block. Then, we generalize the assumption to multiple blocks.
5.1.1. Tamper in a Single Authentication Block
Here, we assume that the watermarked Lena image is altered in only one pixel. Specifically, we assume that the value of the pixel located at (135, 138) has been changed from 196 to 0, as shown in Figure 15.
Figure 15. A tampered watermarked Lena image. The pixel value located at (135, 138) is set to 0 (indicated by a black dot in the left eye region).
The pixel under consideration belongs to the 16 × 8 authentication block with index 281, as illustrated in Figure 16.
Figure 16. Position of the altered pixel in the watermarked image.
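Under the columnwise 16 × 8 partitioning described above, the authentication-block index of a given pixel can be sketched as follows; pixel coordinates here are 0-based, so the 1-based pixel (135, 138) maps to (134, 137):

```python
def block_index(row, col, block_h=16, block_w=8, img_h=256):
    # 1-based, columnwise index of the 16 x 8 block containing pixel
    # (row, col): blocks are counted down each column of blocks first.
    blocks_per_col = img_h // block_h        # 16 blocks down each block column
    return (col // block_w) * blocks_per_col + (row // block_h) + 1
```

This mapping reproduces the block index 281 reported for the altered pixel.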
The detector response in this case presents a magnitude value different from unity at the block index 281, as shown in Figure 17. This is an indication that an alteration has occurred at this specific block location of the watermarked image.
Figure 17. Magnitudes of watermark samples obtained for each of the 512 blocks, when an alteration occurs in one block of the received watermarked image.
5.1.2. Tampering in Multiple Authentication Blocks
Here, we assume that the watermarked Lena image is altered in more than one authentication block.
Assume that the mouth region of the watermarked Lena image has been deliberately replaced by a different mouth image. The result of this operation is shown in Figure 18(a). Figure 18(b) displays the region (i.e., the mouth region) where the alteration occurred.
Figure 18. (a) A tampered watermarked Lena image, and (b) the mouth region of (a) where the alteration occurred.
As we can see, it is difficult to pinpoint, with the naked eye, the exact block locations where the alteration occurred in Figure 18(a). However, our detector, as can be seen in Figure 19(a), is able to indicate the indexes of all ten authentication blocks that are in error. These block indexes, namely 267, 268, 283, 284, 299, 300, 315, 316, 331, and 332, exactly match the indexes of the blocks that we deliberately modified. The positions and indexes of the altered blocks are shown in Figure 19(b).
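Tamper localization of this kind reduces to flagging every block whose extracted magnitude deviates from unity. A minimal sketch (the tolerance `tol` is an assumed parameter, not specified in the paper):

```python
def tampered_blocks(magnitudes, tol=0.05):
    """Return the 1-based indexes of authentication blocks whose
    extracted watermark magnitude deviates from unity by more than tol."""
    return [i + 1 for i, m in enumerate(magnitudes) if abs(m - 1.0) > tol]

# Example: 512 intact blocks, then the ten mouth-region blocks disturbed.
mags = [1.0] * 512
for i in (267, 268, 283, 284, 299, 300, 315, 316, 331, 332):
    mags[i - 1] = 0.4          # simulated post-attack magnitudes
print(tampered_blocks(mags))   # -> the ten altered block indexes
```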
5.2. Quality Assessment of the Proposed Method II
In this section, we discuss the quality assessment of the received watermarked image, when subjected to various attacks. Ideally, the magnitude of each extracted watermark sample is equal to unity; however, in practice, the actual value is different from one due to the possible manipulations of the watermarked image content. This point is well illustrated in Figure 20.
Figure 20. Ideal and actual magnitudes of the extracted watermark signal.
We evaluate the level of distortion of the attacked watermarked image by computing the mean square error (MSE) between the actual magnitude of the extracted watermark signal and its ideal value (i.e., unity). Mathematically, the MSE is computed as
\[ \mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\bigl(|\hat{w}_i| - 1\bigr)^2, \]
where $|\hat{w}_i|$ denotes the magnitude of the $i$th extracted watermark sample, and $N$ is the number of watermark samples embedded in the image.
Equivalently, since the ideal watermark magnitude (and hence the signal power) is unity, we can evaluate, in dB, the quality measure of the distortion in terms of the signal-to-noise ratio (SNR) as follows:
\[ \mathrm{SNR} = 10\log_{10}\!\left(\frac{1}{\mathrm{MSE}}\right) = -10\log_{10}(\mathrm{MSE}). \]
Note that the farther the extracted watermark signal is from the original watermark, the larger the value of the MSE and, consequently, the smaller the value of the SNR.
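Both quality measures can be computed directly from the extracted magnitudes. The sketch below assumes the SNR form SNR = 10·log10(1/MSE), which follows from the unit ideal signal power; `quality_metrics` is a hypothetical helper name:

```python
import math

def quality_metrics(magnitudes):
    """MSE between the extracted watermark magnitudes and their ideal
    value (unity), and the corresponding SNR in dB."""
    n = len(magnitudes)
    mse = sum((m - 1.0) ** 2 for m in magnitudes) / n
    snr_db = 10.0 * math.log10(1.0 / mse) if mse > 0 else float("inf")
    return mse, snr_db

mse, snr = quality_metrics([1.0, 1.0, 0.9, 1.1])
print(mse)   # MSE ~ 0.005: mild distortion
print(snr)   # ~ 23.01 dB
```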
Table 5 summarizes the results, when the watermarked Lena image (refer to Figure 11) is subjected to JPEG compression for various quality factor values. As expected, we observe that the MSE increases (i.e., SNR decreases) with decreasing quality factor values.
Table 5. Quality assessment using the mean square error (MSE) and the signal-to-noise ratio (SNR) of the extracted watermark signal, when the watermarked Lena image is JPEG compressed, using various compression quality factor values.
In the same table, we also show the PSNR values obtained in this case. These values confirm the degradation of the attacked image with decreasing JPEG quality factor.
We note that the MSE obtained for the JPEG compression quality factor 100% (i.e., no attack) is nonzero. This is due to the quantization noise (refer to earlier sections), and can be reduced by reducing the quantization step used in the watermarking procedure.
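This trade-off is inherent to QIM embedding: the quantization step bounds the embedding noise. The sketch below is a generic scalar QIM embedder/decoder for a single bit, offered as an illustration only, not the paper's exact wavelet-domain scheme; `delta` plays the role of the quantization step:

```python
def qim_embed(x, bit, delta):
    """Quantize sample x onto one of two interleaved lattices,
    offset by bit * delta / 2; embedding error is at most delta / 2."""
    offset = bit * delta / 2.0
    return delta * round((x - offset) / delta) + offset

def qim_extract(y, delta):
    """Decode the bit as the lattice nearest to y."""
    d0 = abs(y - delta * round(y / delta))
    d1 = abs(y - (delta * round((y - delta / 2.0) / delta) + delta / 2.0))
    return 0 if d0 <= d1 else 1
```

Shrinking `delta` lowers the quantization noise in the watermarked image (hence a smaller residual MSE even without attack), at the cost of making the extracted samples flip under smaller perturbations.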
Table 6 summarizes the results when the image is subjected to other attacks. The corresponding PSNR values (in dB) for these attacks were already given in Table 4. We note that the amount of image content degradation increases with increasing MSE values (i.e., decreasing SNR values).
Table 6. Quality assessment using the mean square error (MSE) and the signal-to-noise ratio (SNR) of the extracted watermark signal, when the watermarked Lena image is altered by various attacks.
6. Conclusion
In this paper, we proposed two fragile watermarking methods for still images. The first method uses time-frequency analysis, and the second one uses time-scale analysis. In the first method, the watermark consists of an arbitrary nonstationary signal with a particular signature in the time-frequency plane. This method allows the use of a secret key to enhance security and privacy. To verify the image ownership and to check whether the image has been subjected to any attack, we exploit the particular signature of the watermark in the time-frequency domain. The advantages of this method are twofold: (i) we can detect any change that results from an attack such as rotation, scaling, translation, or compression, and (ii) the watermarked image quality remains quite high because only a few pixels of the original image are used in the watermarking process. In the second proposed method, an arbitrary complex FM signal is embedded in the wavelet domain. This method was shown to be very effective, in terms of the sensitivity of the hidden fragile watermark, when the watermarked image is subjected to various attacks. A nice feature of this second method is that the watermark extraction is performed without the need for the original watermark. Two potential applications were presented to demonstrate the high performance of this method: the first deals with content integrity verification without resorting to the original watermark, and the second with blind quality assessment of the received watermarked image.
H Wang, C Liao, JPEG images authentication with discrimination of tampers on the image content or watermark. IETE Technical Review 27(3), 244–251 (2010)
S Suthaharan, Fragile image watermarking using a gradient image for improved localization and security. Pattern Recognition Letters 25(16), 1893–1903 (2004)
H Yuan, X-P Zhang, Multiscale fragile watermarking based on the Gaussian mixture model. IEEE Transactions on Image Processing 15(10), 3189–3200 (2006)
P MeenakshiDevi, M Venkatesan, K Duraiswamy, A fragile watermarking scheme for image authentication with tamper localization using integer wavelet transform. Journal of Computer Science 5(11), 831–837 (2009)
Y Zhang, BG Mobasseri, BM Dogahe, MG Amin, Image-adaptive watermarking using 2D chirps. Signal, Image and Video Processing 4(1), 105–121 (2010)
X Zhang, S Wang, Fragile watermarking scheme using a hierarchical mechanism. Signal Processing 89(4), 675–679 (2009)
GL Friedman, The trustworthy digital camera: restoring credibility to the photographic image. IEEE Transactions on Consumer Electronics 39(4), 905–910 (1993)
PW Wong, N Memon, Secret and public key image watermarking schemes for image authentication and ownership verification. IEEE Transactions on Image Processing 10(10), 1593–1601 (2001)
S Stanković, I Djurović, I Pitas, Watermarking in the space/spatial-frequency domain using two-dimensional Radon-Wigner distribution. IEEE Transactions on Image Processing 10(4), 650–658 (2001)
S Stanković, I Orović, N Žarić, An application of multidimensional time-frequency analysis as a base for the unified watermarking approach. IEEE Transactions on Image Processing 19(3), 736–745 (2010)
F Hlawatsch, GF Boudreaux-Bartels, Linear and quadratic time-frequency signal representations. IEEE Signal Processing Magazine 9(2), 21–67 (1992)
H Choi, WJ Williams, Improved time-frequency representation of multicomponent signals using exponential kernels. IEEE Transactions on Acoustics, Speech, and Signal Processing 37(6), 862–871 (1989)
J Jeong, WJ Williams, Kernel design for reduced interference distributions. IEEE Transactions on Signal Processing 40(2), 402–412 (1992)
D Kundur, D Hatzinakos, Digital watermarking for telltale tamper proofing and authentication. Proceedings of the IEEE 87(7), 1167–1180 (1999)