ISSN: Online 2320-9801, Print 2320-9798


A Review of Image Forgery Techniques

Hardish Kaur, Geetanjali Babbar
Assistant professor, CGC Landran, India.

Visit for more related articles at International Journal of Innovative Research in Computer and Communication Engineering


Image forgery refers to copying content from one image and pasting it into another. The practice is quite common nowadays; it is often done for illicit financial gain or to conceal the original content of an image. This paper focuses on methods for detecting image forgery, on the different aspects of image forgery, and on the classification methods used in forgery detection.




The trustworthiness of photographs plays an essential role in many areas, including forensic investigation, criminal investigation, surveillance systems, intelligence services, medical imaging, and journalism. The art of image fakery has a long history, but in today's digital age it is possible to change the information represented by an image very easily, without leaving any obvious traces of tampering. Despite this, no system yet exists that accomplishes the image-tampering detection task effectively and accurately.

The digital information revolution, and the issues it raises for multimedia security, have generated several approaches to digital forensics and tampering detection. Generally, these approaches can be divided into active and passive (blind) approaches. Active methods can in turn be divided into data-hiding approaches (e.g., watermarks) and digital-signature approaches. We focus on blind methods, as they are regarded as a new direction: in contrast to active methods, they work in the absence of any protecting technique and without using any prior information about the image. To detect traces of tampering, blind methods exploit the image function itself and the fact that forgeries introduce specific detectable changes into the image (e.g., statistical changes). When digital watermarks or signatures are not available, the blind approach is the only way to decide on the trustworthiness of the investigated image. Image forensics is a burgeoning research field and promises a significant improvement in forgery detection in the never-ending competition between image forgery creators and image forgery detectors.

The example in the figure shows two digital images. The left image was printed by several news sources in an article about a mysterious giant-sized "hogzilla" [19]. While the authenticity of that image is unknown, a "forged" version was digitally created with very little skill using the computer software Adobe Photoshop.
It is very hard, if not impossible, for the human eye to detect digital manipulation at face value. This is just one example of the need for tools that aid in the detection of digital image tampering. The research reviewed in this paper attempts to address this need and provide some insight into this challenging problem.
Local Binary Pattern (LBP). LBP is a gray-scale texture operator used to describe the spatial structure of image texture. The texture T in a local neighborhood of a gray-scale image can be defined as the joint distribution of the gray levels of P (P > 1) image pixels, T = t(gc, g0, ..., gP−1), where P is the total number of pixels in the neighborhood, gc is the gray value of the center pixel, and t is the local binary pattern computed over those pixels in the same image.
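As an illustrative sketch (not the paper's own implementation), the basic 8-neighbour LBP can be computed by thresholding each neighbour against the center pixel and packing the results into a byte; the function name and the clockwise neighbour ordering are assumptions made for this example:

```python
import numpy as np

def lbp_3x3(img):
    """Basic 8-neighbour Local Binary Pattern of a grayscale image.

    Each interior pixel gets a byte whose bits record whether each of
    its 8 neighbours is >= the center gray level (border pixels are
    skipped for simplicity).
    """
    img = np.asarray(img, dtype=np.int32)
    # Offsets of the 8 neighbours, ordered clockwise from top-left;
    # the ordering is a convention and only needs to be consistent.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # Set this bit wherever the neighbour is >= the center value.
        out |= (neighbour >= center).astype(np.uint8) << bit
    return out
```

A histogram of these per-pixel codes is what is typically used as the texture descriptor for forgery detection.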
With techniques available to protect an original image from tampering, the reverse scenario raises the concern of verifying the authenticity of an image of unknown origin. This is an increasingly important issue as digital cameras come down in price and powerful, easy-to-use image processing software, such as Adobe Photoshop and GIMP (GNU Image Manipulation Program), becomes more widely available [15]. In fact, GIMP is freely available on the web and is a viable alternative to Adobe Photoshop; most of the image manipulations discussed in this paper can be performed using GIMP. With increasing opportunity and ease to digitally manipulate images, the research community has its work cut out for it. The state of the art in digital image forensics currently focuses on digital watermarking and its variations, as previously discussed. Research on image authentication in the absence of any digital watermarking scheme is still in its infancy [9] [12].


A. Support Vector Machine (SVM)

A support vector machine classifier is used to separate the selected data into classes. The input data are presented as two sets of vectors in an n-dimensional space, and a separating hyperplane is constructed in that space so that the margin between the two data sets is maximized.
Kernel Function: During training, the user chooses among four standard kernels. A kernel function uses parameters such as γ, c, and the degree, which are defined by the user during training.
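The four standard kernels referred to above (linear, polynomial, RBF, and sigmoid, as used in common SVM libraries) can be written out explicitly; the parameter names gamma, coef0 (the "c" above), and degree follow the usual convention, and the function names are this sketch's own:

```python
import numpy as np

# The four standard SVM kernels; gamma, coef0 and degree are the
# user-chosen training parameters mentioned in the text.

def linear_kernel(x, y):
    return np.dot(x, y)

def polynomial_kernel(x, y, gamma=1.0, coef0=0.0, degree=3):
    return (gamma * np.dot(x, y) + coef0) ** degree

def rbf_kernel(x, y, gamma=1.0):
    # Gaussian radial basis function: exp(-gamma * ||x - y||^2)
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-gamma * np.dot(diff, diff))

def sigmoid_kernel(x, y, gamma=1.0, coef0=0.0):
    return np.tanh(gamma * np.dot(x, y) + coef0)
```

Each kernel plays the role of an inner product in a higher-dimensional feature space, which is what lets the SVM find a separating hyperplane even when the classes are not linearly separable in the original space.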


Naïve Bayes is used as an image classifier because of its simplicity and effectiveness. It is a simple ("naive") classification method based on Bayes' rule [7]. Bayes' rule is applied to a document to classify the image. The rule is the following:
This rule is applied for a document d and a class c: the probability of A given B can be found from the probability of B given A. The algorithm works on the basis of likelihood, in which the probability of a document is derived from the frequencies of the words it contains. A category is represented by a collection of words and their frequencies, where the frequency of a word is the number of times it is repeated in the document. Assume there are n categories, C0 to Cn-1. Determining which category a document D is most associated with means calculating the probability that document D is in category Ci, written P(Ci|D), for each category Ci.
Using the Bayes Rule, you can calculate P(Ci|D) by computing:
P(Ci|D) = ( P(D|Ci ) * P(Ci) ) / P(D)
P(Ci|D) is the probability that document D is in category Ci, i.e., the probability that the bag of words in D was generated by category Ci. P(D|Ci) is the probability that, for a given category Ci, the words in D appear in that category.
P(Ci) is the probability of a given category; that is, the probability of a document being in category Ci without considering its contents. P(D) is the probability of that specific document occurring. Using the parameters discussed above, an image can be classified by selecting the category Ci that maximizes P(Ci|D).
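The bag-of-words procedure above can be sketched as follows. This is an illustrative implementation, not the paper's: the function names are invented here, and Laplace smoothing is added (a standard assumption) so that an unseen word does not zero out the whole product; P(D) is dropped because it is the same for every category:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (word_list, category) pairs.

    Returns priors P(Ci), per-category word counts, and the vocabulary.
    """
    cat_counts = Counter(c for _, c in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, c in docs:
        word_counts[c].update(words)
        vocab.update(words)
    priors = {c: n / len(docs) for c, n in cat_counts.items()}
    return priors, word_counts, vocab

def classify_nb(doc, priors, word_counts, vocab):
    """Return argmax_Ci P(Ci) * prod_w P(w|Ci), in log space."""
    best, best_score = None, float("-inf")
    for c, prior in priors.items():
        total = sum(word_counts[c].values())
        score = math.log(prior)
        for w in doc:
            # Laplace (add-one) smoothing for unseen words.
            score += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = c, score
    return best
```

For image classification, the "words" would be discrete features extracted from the image (e.g., quantized texture codes) rather than literal text.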


A back-propagation neural network (BPANN) is a feed-forward artificial neural network that has more than one layer of hidden units [7] between its inputs and its outputs. Each hidden unit j typically uses the logistic function to map its total input from the layer below, xj, to the scalar state yj that it sends to the layer above.
yj = logistic(xj) = 1 / (1 + e^(−xj)),    xj = bj + Σi yi wij
where bj is the bias of unit j, i is an index over units in the layer below, and wij is the weight on the connection to unit j from unit i in the layer below. For multiclass classification, output unit j converts its total input, xj, into a class probability, pj.
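The two unit equations above, plus a softmax output stage (the usual way output totals xj are turned into class probabilities pj, assumed here since the text does not name it), can be sketched directly; the function names are this example's own:

```python
import math

def logistic(x):
    """The logistic squashing function 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def unit_output(bias, inputs, weights):
    """One hidden unit: yj = logistic(bj + sum_i yi * wij)."""
    xj = bias + sum(yi * wij for yi, wij in zip(inputs, weights))
    return logistic(xj)

def softmax(totals):
    """Convert output-unit totals xj into class probabilities pj."""
    m = max(totals)                      # subtract max for stability
    exps = [math.exp(x - m) for x in totals]
    s = sum(exps)
    return [e / s for e in exps]
```

Training such a network consists of adjusting the bj and wij by back-propagating the classification error, which is where the name comes from.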


This paper classifies the ways in which images are tampered with and reviews the classification methods used in forgery detection, including the use of different classifiers. The paper also provides a comparative study of the Support Vector Machine, Naïve Bayes, and neural-network classification methods. Future researchers may use one of the above classification methods or a combination of them.

Figures at a glance

Figure 1    Figure 2    Figure 3