ISSN: Online (2320-9801), Print (2320-9798)
Ashish Kumar Kushwaha1 and Avinash P. Wadhe2
International Journal of Innovative Research in Computer and Communication Engineering
With rapidly growing technology, video has become an important weapon in the fight against people who break the law, by capturing them red-handed. Evidence captured on video is considered more reliable, more accurate, and more convincing than eyewitness testimony alone. But with the growth of multimedia editing software, digital photographs and videos can no longer be considered "proof of evidence": evidence can easily be tampered with using software. We therefore propose a new forensic framework, the Forensic Framework for Video Forgeries, capable of video steganalysis and of detecting tampering in digital video without any specialized hardware. The framework can be used for forensic examination of videos using enhanced forensic techniques, and also to detect any data hidden in a video. The objective of the framework is to provide an easy and handy video forensic framework to the forensic community for validating evidence before presenting it to a court for law enforcement.
Keywords
Video Forensics, Video Steganalysis, Forensics Framework
INTRODUCTION
Owing to the availability of low-cost, sophisticated digital video cameras and of video-sharing websites such as YouTube, digital video has become an important part of day-to-day life. Since videos can be easily manipulated using available tools, their authenticity cannot be taken for granted. Tampering with a digital video is not easy; it is a challenging and time-consuming task compared with tampering with a still image, but video editing software offers an easy way to manipulate video. Of course, not every video forgery is equally consequential; tampering with footage of a pop star may matter less than altering footage of a crime in progress. But the alterability of video undermines our common-sense assumptions about its accuracy and reliability as a representation of reality. As digital video editing techniques become more and more sophisticated, it is ever more necessary to develop tools for detecting video forgery.
1.1 Video Forgeries |
The movie industry is probably the strongest driving force behind improvements in video manipulation technology. With the video editing technology currently available, professionals can easily remove an object from a video sequence, insert an object from a different video source, or even insert an object created by computer graphics software. Certainly, advanced video manipulation technology greatly enriches our visual experience. However, as these techniques become increasingly available to the general public, malicious tampering becomes a growing concern. Although tampering with video is relatively hard, in recent years we have begun to encounter video forgeries, and growth in video tampering is having a significant impact on our society. Although only a few digital video forgeries have so far been exposed, such instances are eroding public trust in video. It is therefore urgent for the scientific community to come up with methods for authenticating video recordings.
1.2 Watermarking |
One solution to video authentication is digital watermarking. There are several types of watermark; among them, fragile and semi-fragile watermarks can be used to authenticate videos. Fragile watermarking works by inserting imperceptible information that will be altered by any attempt to modify the video. Later, the embedded information can be extracted to verify the authenticity of the video. A semi-fragile watermark works in a similar fashion; the difference is that it is less sensitive to common user modifications such as compression, on the assumption that these modifications do not affect the integrity of the video. The major drawback of the watermarking approach is that the watermark must be inserted at precisely the time of recording.
1.3 Forensic framework for Video Forgeries |
The forensic framework is designed to detect digital forgeries without the help of watermarking (digital authentication). The fundamental assumption behind our techniques is that tampering with a digital video may disturb certain underlying properties of the video, and that these perturbations can be modelled and estimated in order to detect tampering. We can divide the framework into three modules:
• Video Analysis
• Video Forensics
• Video Steganalysis
Video Analysis |
In this module we propose techniques for enhancing video analysis, including the following new, advanced approaches.
Video Stabilization: |
This technique removes shaky motion from a video and displays a stabilized version. It makes it easy to focus on the target in the video without disturbance, and is an important video enhancement technique for video analysis.
Video Display with Live Histogram: |
This technique displays a live histogram of the video sequence. A histogram is a visual way to display frequency data using bars; a feature of histograms is that they show the frequency of continuous data. The Histogram block computes the frequency distribution of the elements in each frame by sorting the elements into a specified number of discrete bins.
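As a minimal illustration of the binning step described above (the bin count and pixel values below are illustrative, and the framework itself computes this per frame), a Python sketch:

```python
# Sketch: per-frame intensity histogram, assuming 8-bit grayscale
# pixel values flattened into a list. Bin count is illustrative.
def frame_histogram(pixels, bins=16):
    """Sort pixel intensities (0-255) into `bins` equal-width bins."""
    counts = [0] * bins
    width = 256 / bins
    for p in pixels:
        idx = min(int(p / width), bins - 1)  # clamp 255 into the last bin
        counts[idx] += 1
    return counts

frame = [0, 10, 17, 128, 200, 255, 255]
print(frame_histogram(frame, bins=16))
```

In the framework this computation runs once per frame, which is what makes the histogram "live" as the sequence plays.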
Scene Change Detection: This technique detects major changes in a video sequence; if any frame is added to or deleted from the video, the technique will identify it.
Face Detection: This technique automatically detects the face of a suspect in the video sequence and marks it with a square box. Face detection and tracking are important in many applications, including activity recognition, automotive safety, and surveillance.
Video Forensics
De-interlacing. Sometimes, interlaced videos are de-interlaced to minimize "combing" artifacts. The de-interlacing procedure introduces correlations among the pixels within a frame and between frames. Tampering, however, is likely to destroy these correlations.
Frame Duplication. Techniques for detecting image duplication have previously been proposed. These techniques, however, are computationally too inefficient to be applicable to a video sequence of even modest length. We therefore propose a new method for detecting video duplication.
Double MPEG: When an MPEG video is modified and re-saved in MPEG format, it is subject to double compression. In this process, two types of artifacts, spatial and/or temporal, will likely be introduced into the resulting video. These artifacts can be quantified and used as evidence of tampering.
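The double-compression fingerprint can be illustrated with a toy Python sketch: quantizing a signal twice with different step sizes (the steps 5 and 3 below are illustrative, standing in for two MPEG quantization settings) leaves periodic empty bins in the coefficient histogram, whereas single quantization populates bins contiguously:

```python
# Sketch: double quantization leaves periodic gaps in the coefficient
# histogram; the step sizes q1 and q2 are illustrative.
def double_quantize(values, q1, q2):
    """Quantize with step q1, dequantize, then requantize with step q2."""
    return [round(round(v / q1) * q1 / q2) for v in values]

def coeff_histogram(xs):
    return {b: xs.count(b) for b in sorted(set(xs))}

vals = list(range(-64, 65))
once = [round(v / 3) for v in vals]   # single quantization, step 3
twice = double_quantize(vals, 5, 3)   # step 5 followed by step 3

# Singly quantized bins are contiguous; doubly quantized bins show
# periodic gaps (e.g. bins 1, 4, 6, 9 are empty) -- a detectable trace.
print(coeff_histogram(once))
print(coeff_histogram(twice))
```

The detectors cited in the literature quantify exactly this kind of periodic irregularity in the histograms of DCT coefficients.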
Re-projection. A simple and popular way to create a bootleg video is to record a movie from the theater screen. Such a re-projected video usually introduces distortion into the intrinsic camera parameters; the distortion of the camera skew in particular is evidence of tampering.
Frame Forensics. This technique performs various forensic operations on a selected frame of the video.
Video Steganalysis |
Steganalysis is the study of detecting messages hidden using steganography; it is analogous to cryptanalysis applied to cryptography. In video steganalysis, determining whether a video is stegged (contains hidden data) involves several steps. First, the raw video is converted into MPEG format (for best results the video should not be re-compressed). The next step extracts data from the stegged file and creates features for a statistical classifier. The classifier then decides whether individual frames in the sequence are stegged; if most of the frames are stegged, the entire video is said to be stegged.
Video steganalysis thus comprises three steps:
• Feature Set Description
• Frame Classification
• Video Classification
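The three steps above can be sketched in Python; the per-frame feature, the threshold classifier, and the frame data below are all hypothetical stand-ins for the statistical classifier used in the framework:

```python
# Sketch of the steganalysis pipeline: feature extraction per frame,
# frame classification, then majority voting over the whole video.
def frame_features(frame):
    """Illustrative feature: mean absolute pixel value of the frame."""
    return sum(abs(p) for p in frame) / len(frame)

def classify_frame(feat, threshold=10.0):
    """Toy stand-in for the statistical classifier: True = stegged."""
    return feat > threshold

def classify_video(frames, threshold=10.0):
    """The video is declared stegged if most frames are flagged."""
    flags = [classify_frame(frame_features(f), threshold) for f in frames]
    return sum(flags) > len(flags) / 2

clean = [[1, 2, 3]] * 5
noisy = [[20, 30, 25]] * 5
print(classify_video(clean), classify_video(noisy))
```

The majority vote in the final step implements the rule stated above: if most frames are stegged, the entire video is said to be stegged.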
Each technique focuses on one specific form of tampering and cannot detect all video forgeries single-handedly. Used in combination, however, these modules provide a promising beginning for detecting forgery in digital videos without watermarks.
II. RELATED WORK |
Bestagini et al. in [1] observe that video codec identification is an important task in proving the authenticity of video content. Video sequences are usually available in compressed format from the moment of acquisition, so detecting the adopted coding architecture reveals information about both the possible presence of alterations and the video's origin. The authors present two detectors that identify the coding architecture adopted for a given video sequence: the first extends the robustness of the idempotent detector, permitting effective detection, while the second extends its possibilities by permitting the identification of coding schemes that are unknown to the analyst. Michael Tok, Marko Esche et al. in [2] describe a parametric merge candidate for High Efficiency Video Coding, presenting a novel merge candidate that improves existing vector prediction techniques based on higher-order motion. Simone Milani, Marco Fontani et al. in [3] give an overview of video forensic techniques, dividing them into three areas concerning the acquisition, the compression, and the editing of video signals. In [4], Ghulam Qadir et al. describe SULFA (Surrey University Library for Forensic Analysis) for the benchmarking of video forensic techniques; this new video library was designed and built specifically for video forensics related to camera identification and integrity verification.
Weihong Wang and Hany Farid in [5] gave a method for detecting re-projected video as a way of finding tampering in digital video, explaining the projection of video from planar and non-planar surfaces. In [6], they proposed detecting double quantization as a means of detecting digital forgery. In [7], a duplication-detection technique is described; the authors explain two kinds of duplication, frame duplication and region duplication. And in [8], they proposed techniques for detecting double MPEG compression in digital video, where static and temporal parameters are used to detect tampering.
For video steganalysis, an early but comprehensive treatment is from Budhia [9]. This work looked at detecting data embedded as additive white Gaussian noise in the spatial domain. By using data from surrounding frames, a process they call collusion, an estimate of the current frame is obtained. Several collusion approaches are tried, including simple linear averaging, weighted averaging, and block-based reconstruction of reference frames. Block-based reconstruction searches for similar blocks in nearby frames and copies them into a new reference frame; the difference between this reference frame and the original is then used to estimate the embedded data. Their features use statistics such as kurtosis, entropy, and the 25th percentile over this estimate. They mention that their technique can apply to the DCT domain and test it using two different methods of embedding, though without considering the encoding process (for example, P/B frames).
A performance enhancement on [9] is proposed by Jainsky in MoViSteg [10], which also uses motion estimation to reconstruct a frame. They employ a detector based on asymptotic relative efficiency, which "is efficient for large samples and weak signals" [10]. The detector uses an adaptive threshold based on statistics from sample frames of the video. While they do not give overall accuracy, they report a 60% true-positive rate at a 10% false-positive rate at 75 dB peak signal-to-noise ratio (PSNR). Most recently, in [11], B. and F. Liu use collusion with a window of frames limited by a predetermined correlation threshold, employing a simple linear collusion that averages the surrounding frames. While they obtain good results (88–100% at 40% embedding, depending on the embedding scheme), the watermarking techniques they test against make very distinctive changes in the DCT values used: two of them increase the range of values, which shows up in the global histogram, and another simply removes several DCT values in select blocks, which would cause noise in the dual histogram.
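The linear-collusion idea used in [9] and [11] can be sketched in Python; the flat toy frames and the +4 additive signal below are illustrative:

```python
# Sketch of linear collusion: estimate a frame by averaging its
# neighbours; the residual approximates any additive embedded signal.
def collusion_residual(frames, i, window=1):
    """Residual of frame i against the average of its neighbours."""
    neigh = frames[max(0, i - window):i] + frames[i + 1:i + 1 + window]
    est = [sum(col) / len(neigh) for col in zip(*neigh)]  # collusion estimate
    return [a - b for a, b in zip(frames[i], est)]

# Static scene; frame 1 carries a weak additive signal (+4).
frames = [[100, 100, 100], [104, 104, 104], [100, 100, 100]]
print(collusion_residual(frames, 1))
```

Statistics computed over this residual (kurtosis, entropy, percentiles) are what feed the classifiers described above.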
III. PROPOSED WORK |
The video stabilization technique works without any prior knowledge: it automatically detects the background plane in the video sequence and uses its observed distortion to correct for camera motion. The algorithm used for stabilization is divided into two steps:
1. Determine the affine image transformation between all neighbouring frames of the video using the estimateGeometricTransform function, applied to point correspondences between the two frames.
2. Warp the video frames to achieve a stabilized video.
For this we use the Computer Vision System Toolbox.
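A minimal Python sketch of the two steps, restricted to pure translation for brevity (the toolbox function estimates a full affine transform); the point tracks below are hypothetical correspondences between neighbouring frames:

```python
# Sketch of stabilization: estimate inter-frame motion from point
# correspondences (step 1), then warp by the accumulated correction
# (step 2). Translation-only simplification of the affine case.
def estimate_shift(pts_prev, pts_curr):
    """Step 1: least-squares translation = mean point displacement."""
    n = len(pts_prev)
    dx = sum(c[0] - p[0] for p, c in zip(pts_prev, pts_curr)) / n
    dy = sum(c[1] - p[1] for p, c in zip(pts_prev, pts_curr)) / n
    return dx, dy

def stabilize(tracks):
    """Step 2: the warp for each frame cancels accumulated motion."""
    cx = cy = 0.0
    corrections = [(0.0, 0.0)]
    for prev, curr in zip(tracks, tracks[1:]):
        dx, dy = estimate_shift(prev, curr)
        cx, cy = cx + dx, cy + dy
        corrections.append((-cx, -cy))  # shift to apply to this frame
    return corrections

# Camera drifts +2 px/frame in x; stabilization cancels the drift.
tracks = [[(0, 0), (10, 0)], [(2, 0), (12, 0)], [(4, 0), (14, 0)]]
print(stabilize(tracks))
```

Applying each correction to its frame removes the camera drift while leaving genuine object motion, which does not shift all correspondences uniformly, largely intact.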
The scene change detection technique is divided into two parts:
1. To make the algorithm sensitive to small changes, it first finds the edges in two consecutive video frames.
2. Using the identified edges, each section of one video frame is compared with the corresponding section of the other using the Block Processing block. If the number of differing sections exceeeds a specified threshold, the algorithm determines that the scene has changed.
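A toy Python sketch of the two parts; the gradient-based edge map, block size, and thresholds below are illustrative simplifications of the Block Processing approach:

```python
# Sketch of scene change detection: crude edge maps (step 1), then
# block-wise comparison against a difference threshold (step 2).
def edge_map(frame, thresh=30):
    """Step 1: mark pixels whose horizontal gradient is large."""
    return [[int(abs(row[x + 1] - row[x]) > thresh)
             for x in range(len(row) - 1)] for row in frame]

def scene_changed(f1, f2, block=2, max_diff_blocks=1):
    """Step 2: compare edge maps block-by-block; flag a scene change
    when more than max_diff_blocks sections differ."""
    e1, e2 = edge_map(f1), edge_map(f2)
    diffs = 0
    for y in range(0, len(e1), block):
        for x in range(0, len(e1[0]), block):
            b1 = [e1[j][x:x + block] for j in range(y, min(y + block, len(e1)))]
            b2 = [e2[j][x:x + block] for j in range(y, min(y + block, len(e2)))]
            if b1 != b2:
                diffs += 1
    return diffs > max_diff_blocks

same = [[0, 0, 100, 100]] * 4
cut = [[0, 100, 0, 100]] * 4
print(scene_changed(same, same), scene_changed(same, cut))
```

Comparing edge maps rather than raw pixels is what makes the method sensitive to structural change while tolerating uniform brightness shifts.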
The face detection process starts with object detection; in our case we detect faces (the technique can also be configured to detect other objects such as noses or eyes). Face detection is done using vision.CascadeObjectDetector in MATLAB. The cascade object detector uses the Viola–Jones detection algorithm and a trained classification model for detection.
Frame duplication detection starts by computing the histogram of the first frame in the video sequence; the correlation coefficient is then calculated between this histogram and the histogram of every other frame, and the process is repeated for every frame in the sequence. Each correlation coefficient is then compared against a threshold; if the value exceeds the threshold, the frame is considered a duplicate.
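A Python sketch of the duplication check for a single reference frame; the toy histograms and the 0.99 threshold below are illustrative:

```python
# Sketch: Pearson correlation between frame histograms; frames whose
# histograms correlate above a threshold are flagged as duplicates.
from math import sqrt

def correlation(h1, h2):
    """Pearson correlation coefficient of two equal-length histograms."""
    n = len(h1)
    m1, m2 = sum(h1) / n, sum(h2) / n
    num = sum((a - m1) * (b - m2) for a, b in zip(h1, h2))
    den = sqrt(sum((a - m1) ** 2 for a in h1) *
               sum((b - m2) ** 2 for b in h2))
    return num / den if den else 0.0

def find_duplicates(histograms, threshold=0.99):
    """Indices of frames whose histogram matches the first frame's."""
    ref = histograms[0]
    return [i for i, h in enumerate(histograms[1:], start=1)
            if correlation(ref, h) > threshold]

hists = [[5, 1, 0, 2], [0, 3, 3, 0], [5, 1, 0, 2]]  # frame 2 repeats frame 0
print(find_duplicates(hists))
```

In the full method this comparison is run with every frame in turn as the reference, so duplicated runs anywhere in the sequence are found.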
De-interlacing. An interlaced video is a combination of the top and bottom fields of the video sequence. De-interlacing of such a video is done using line repetition, linear interpolation, or vertical-temporal median filtering.
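The first two options, line repetition and linear interpolation, can be sketched in Python, rebuilding a full frame from the top field alone (the two-line toy field below is illustrative):

```python
# Sketch: reconstruct a full frame from the top field only, filling
# the missing bottom-field lines by repetition or interpolation.
def deinterlace(top_field, method="interpolate"):
    frame = []
    for i, line in enumerate(top_field):
        frame.append(line)
        if method == "repeat" or i == len(top_field) - 1:
            frame.append(list(line))  # no next line, or repetition: duplicate
        else:
            nxt = top_field[i + 1]    # average the neighbouring field lines
            frame.append([(a + b) / 2 for a, b in zip(line, nxt)])
    return frame

field = [[10, 10], [30, 30]]
print(deinterlace(field, "repeat"))
print(deinterlace(field, "interpolate"))
```

The interpolated lines are exactly the correlations the forensic test exploits: pixels on reconstructed lines are deterministic combinations of their neighbours, and tampering disturbs that relationship.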
Within the Forensic Framework for Video Forgeries we have also proposed an anti-forensic operation capable of removing the temporal fingerprint that arises in MPEG video sequences when frames are added or deleted and the video is then recompressed. We identified properties of the temporal fingerprint and used them to model the effect of frame deletion or addition on the P-frame prediction error sequence. The proposed anti-forensic technique operates by selectively increasing the prediction error in certain P-frames of the video so that the P-frame prediction error sequence approximates a target prediction error sequence obtained using our model. The prediction error in each P-frame is increased by setting the motion vectors of certain macroblocks within that frame to zero, then recalculating the prediction error for the frame. Experimental results demonstrate that the proposed anti-forensic technique is capable of removing the temporal fingerprint from MPEG videos that have undergone frame deletion or addition.
IV. EXPERIMENTAL RESULTS |
We performed two experiments comparing two classifiers, L-SVM and the proposed ensemble classifier. In the first experiment we chose the steganographic algorithm nsF5 with the 548-dimensional CC-PEV feature set; this relatively low-dimensional feature set keeps complexity manageable, so both the ensemble and the L-SVM classifier can be trained easily.
• The LIBLINEAR [17] package is used to implement L-SVM.
• MATLAB is used to implement the proposed ensemble classifier used for steganalysis in our framework.
We use the CAMERA database for training and testing. The database is divided into two parts, one for training and one for testing, and we also create sets of stego images for payloads α ∈ {0.05, 0.1, 0.15, 0.2} bpac (bits per non-zero AC DCT coefficient). Table 3.1 shows the training time and detection accuracy of both classifiers. Times were measured on a computer with an Intel Core i5 processor running at 2.5 GHz.
Table 3.1 Steganalysis of nsF5 using CC-PEV features with L-SVM and Ensemble
The detection performance of the two classifiers is very similar. In this experiment we found that the time taken by L-SVM for training is 14 to 23 minutes, whereas the ensemble classifier takes only about 2 minutes for all payloads. It is clear from this experiment that the ensemble is better than L-SVM in training time while offering comparable detection accuracy.
Figure 3.1 Steganalysis of nsF5 using CC-PEV features with L-SVM and Ensemble
The second experiment examines the computational complexity of the L-SVM and ensemble classifiers with respect to the training-set size Ntrn. For this experiment we fixed the payload at 0.10 bpac and collected JPEG images compressed at quality factor (qf) 75 from BOWS2 [18] and BOSSbase [19] for training.
We started the experiment with Ntrn = 1000, performed training on both classifiers and noted the training times, then increased Ntrn to 2000, 3000, and so on up to Ntrn = 25,000. Table 3.2 shows the training times for different Ntrn.
Table 3.2 Dependence of the training time on Ntrn. Target algorithm: nsF5 with 0.10 bpac
Figure 3.2 Dependence of the training time on Ntrn. Target algorithm: nsF5 with 0.10 bpac
From this experiment it is clear that the ensemble classifier is much faster than L-SVM and remains fast as the training set grows. From both experiments we conclude that the proposed steganalysis using the ensemble classifier will be the better option in the proposed video forensic framework.
V. CONCLUSION AND FUTURE WORK |
The proposed Forensic Framework for Video Forgeries detects tampering in digital video without watermarks and without specialized hardware, combining video analysis, video forensics, and video steganalysis modules. Each technique targets one specific form of tampering, but used in combination the modules provide a promising beginning for detecting forgery in digital videos. The experimental results showed that the proposed ensemble classifier trains much faster than L-SVM while offering comparable detection accuracy, making it the better option for steganalysis within the framework. In future work, the individual forensic techniques can be extended to further forms of tampering and evaluated on larger video datasets.
References |
|