ISSN: Online 2319-8753, Print 2347-6710


Automatic Face Identification of Characters in Movies by ECGM

G. Vivitha, E. Revathi, C. Balasubramanian
  1. PG Scholar, Dept. of Computer Science and Engineering, P.S.R. Rengasamy College of Engineering for Women, Sivakasi, India.
  2. Assistant Professor, Dept. of Computer Science and Engineering, P.S.R. Rengasamy College of Engineering for Women, Sivakasi, India.
  3. Head, Dept. of Computer Science and Engineering, P.S.R. Rengasamy College of Engineering for Women, Sivakasi, India.

International Journal of Innovative Research in Science, Engineering and Technology

Abstract

Automatic face-name identification in videos has drawn significant research interest and led to many interesting applications in video content understanding and organization. Face-name identification in videos is more difficult than in images, largely because of the huge variation in the appearance of a particular person's face across a video. In this paper we investigate this problem with two approaches: an error-correcting graph matching (ECGM) algorithm for face-name identification, which matches the face graph derived from the video against the name graph derived from its script, and a sensitivity analysis of the face-name co-occurrence relationship, performed under coverage noise and intensity noise. As an application, the proposed work is able to create a new experience in face-name identification in videos.

Keywords

Character identification, graph edit operations, graph matching, sensitivity analysis

I. INTRODUCTION

The proliferation of movies and TV provides a large amount of digital video data. This has led to the requirement for efficient and effective techniques for video content understanding and organization, and automatic video annotation is one such key technique. In a movie, the characters are the focus of interest for the audience; their occurrences provide many clues about the movie's structure and content.
Annotating characters in movies and videos is called movie character identification. Automatic character identification is essential for semantic movie indexing and retrieval, scene segmentation, summarization and other applications. Though very intuitive to humans, it is a tremendously challenging task in computer vision.
Face identification in videos is more difficult than in images. Low resolution, occlusion, non-rigid deformations, large motion, complex backgrounds and other uncontrolled conditions make the results of face detection and tracking unreliable; in movies, the situation is even worse. This brings inevitable noise into character identification. The same character can appear quite differently during the movie: there may be large pose, expression and illumination variations, as well as changes of clothing, makeup and hairstyle. In this paper our focus is on annotating characters in movies and videos, for which we propose a global face-name graph matching based framework.
A. Objective
The main objective is to develop an automatic face-character identification system that identifies the faces of the characters in a video and labels them with the corresponding names in the cast list. Character identification is performed with a sensitivity analysis of the affinity graphs.
Without local time information, the task of character identification is formulated as a global matching problem between the faces detected from the video and the names extracted from the movie script. Compared with local matching, global statistics are used for name-face association, which enhances the robustness of the algorithms.
Sensitivity analysis is common in financial applications, risk analysis, signal processing and any area where models are developed [5]. Good modelling practice requires that the modeller provide an evaluation of the confidence in the model, for example by assessing the uncertainties associated with the modelling process and with the outcome of the model itself.
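To make the global matching idea concrete, here is a minimal, brute-force sketch (not the paper's actual optimization): it searches over all face-to-name assignments for the one that makes the face affinity matrix agree best with the name affinity matrix. The matrices and the squared-error criterion are illustrative assumptions.

```python
from itertools import permutations

def global_match(face_aff, name_aff):
    """Return the face->name assignment whose permuted face affinity
    matrix is closest (in squared error) to the name affinity matrix."""
    n = len(name_aff)
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):  # perm[i] = name index for face i
        cost = sum((face_aff[i][j] - name_aff[perm[i]][perm[j]]) ** 2
                   for i in range(n) for j in range(n))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best

# Toy example: face 0 co-occurs strongly with face 1, just as
# name 1 co-occurs strongly with name 2.
face_aff = [[0, 5, 1],
            [5, 0, 2],
            [1, 2, 0]]
name_aff = [[0, 2, 1],
            [2, 0, 5],
            [1, 5, 0]]
print(global_match(face_aff, name_aff))  # -> (2, 1, 0)
```

Brute force is exponential in the number of characters; it only illustrates that the assignment is driven by matrix-level (global) statistics rather than per-frame evidence.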
B. Existing Method
Due to the remarkable intraclass variance, the same character name will correspond to faces of hugely variant appearance. It would be unreasonable to set the number of identical faces simply according to the number of characters in the cast. Our study is motivated by these challenges and aims to find solutions for a robust framework for movie character identification.
Character identification has similarities to identifying faces in news videos. In TV and movies, however, the names of characters are seldom directly shown in the subtitle or closed caption, and the script/screenplay containing character names has no time stamps for alignment with the video. According to the textual cues utilized, the existing movie character identification methods roughly divide into the following categories.
- Cast list based
- Subtitle or closed caption, local matching
1. Cast List Based
These methods utilize only the cast list textual resource; hence the names of the characters are represented only in text format, without any face identification. In the "cast list discovery" problem [5], faces are clustered by appearance, and the faces of a particular character are expected to be collected in a few pure clusters. These methods either need manual labelling or cannot guarantee robust clustering and classification performance, due to the large intraclass variances.
2. Subtitle or Closed Caption, Local Matching
Subtitle and closed caption provide time-stamped dialogues, which can be exploited for alignment with the video frames. This is based on OCR (Optical Character Recognition) techniques.
The reference cues in the closed captions are employed as multiple-instance constraints, and face track grouping as well as face-name association are solved in a convex formulation. The local matching based methods require time-stamped information, which must either be extracted by OCR (subtitles) or is unavailable for the majority of movies and TV series (closed captions). Besides, the ambiguous and partial annotation makes local matching based methods more sensitive to face detection and tracking noise.

II. PROPOSED METHOD

A. System Architecture
As shown in Figure 4, faces are detected automatically from the video using an EmguCV function. Based on the ECGM algorithm, the face affinity graph is constructed from the video and the name affinity graph is constructed from the script of the video; finally, the original affinity graph is constructed using the ECGM algorithm.
Graph edit operations can also be performed to add, delete and substitute vertices and edges in the graph. This improves performance, and its robustness is demonstrated on movies with large character appearance changes.
For face and name graph construction, we propose to represent the character co-occurrence at the rank ordinal level [6], which scores the strength of the relationships in rank order from weakest to strongest. Rank order data carry no numerical meaning and are thus less sensitive to noise.
The affinity graph used in traditional global matching uses interval measures of the co-occurrence relationship between characters. While a continuous measure of the strength of a relationship holds complete information, it is highly sensitive to noise.
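The rank-ordinal representation can be sketched as follows; the helper name `to_rank_order` and the sample row are our own illustration, not code from the paper.

```python
def to_rank_order(row):
    """Replace each nonzero affinity weight in a row by its rank
    (1 = weakest); zero-cells (no co-occurrence) stay 0 so the
    graph topology is preserved."""
    nonzero = sorted(w for w in row if w > 0)
    return [nonzero.index(w) + 1 if w > 0 else 0 for w in row]

# Raw co-occurrence counts 12, 3, 7 become ranks 3, 1, 2;
# the zero-cell is untouched.
print(to_rank_order([0, 12, 3, 7]))  # -> [0, 3, 1, 2]
```

Any weight perturbation that does not reorder the edges leaves the rank-ordinal representation unchanged, which is exactly why it is less sensitive to intensity noise than the raw interval measure.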
In the face graph, vertices represent faces and edges represent the co-occurrence relationships of the faces; similarly, in the name graph, vertices represent names and edges represent the co-occurrence relationships of the names.

B. ECGM Algorithm

ECGM is a powerful tool for graph matching with distorted inputs. It has various applications in pattern recognition and computer vision [8]. In order to measure the similarity of two graphs, graph edit operations are defined, such as the deletion, insertion and substitution of vertexes and edges. Each of these operations is further assigned a certain cost.
The costs are application dependent and usually reflect the likelihood of graph distortions. The more likely a certain distortion is to occur, the smaller is its cost. Through error correcting graph matching, we can define appropriate graph edit operations according to the noise investigation and design the edit cost function to improve the performance.
Let L be a finite alphabet of labels for vertices and edges.
A graph is a triple g = (V, a, b), where V is the finite set of vertices, a: V → L is the vertex labelling function, and b: E → L is the edge labelling function. The set of edges E is given implicitly by assuming that graphs are fully connected, i.e., E = V × V. For notational convenience, vertex and edge labels come from the same alphabet. Let g1 = (V1, a1, b1) and g2 = (V2, a2, b2) be two graphs. An ECGM from g1 to g2 is a bijective function f: V1' → V2', where V1' ⊆ V1 and V2' ⊆ V2.
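Under the definitions above, the cost of a candidate mapping can be sketched as follows. The specific cost constants and the list-based graph encoding are assumptions for illustration; as noted earlier, the actual edit costs are application dependent.

```python
def ecgm_cost(g1, g2, f, c_vsub=1.0, c_esub=0.5):
    """g1, g2: (labels, adjacency) pairs; f: dict mapping vertices of
    g1 to vertices of g2. Returns the total edit cost of the mapping:
    vertex substitutions where labels differ, plus edge substitutions
    where the mapped edge values disagree."""
    labels1, adj1 = g1
    labels2, adj2 = g2
    cost = 0.0
    for u, fu in f.items():
        if labels1[u] != labels2[fu]:                 # vertex substitution
            cost += c_vsub
        for v, fv in f.items():
            if u < v and adj1[u][v] != adj2[fu][fv]:  # edge substitution
                cost += c_esub
    return cost

# One differing vertex label (cost 1.0) and one differing edge (0.5).
print(ecgm_cost((["a", "b"], [[0, 1], [1, 0]]),
                (["a", "c"], [[0, 0], [0, 0]]),
                {0: 0, 1: 1}))  # -> 1.5
```

A full ECGM solver would search over all such mappings (including leaving vertices unmapped, i.e., deletions/insertions) for the one with minimum total cost; the sketch only evaluates a given mapping.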

III. EXPERIMENTAL RESULTS AND SENSITIVITY ANALYSIS

A. Experimental Results
In movie character identification, sensitivity analysis offers valid tools for characterizing the robustness of the algorithms. With the introduction of the following two types of noise, experimental results are analysed on the face affinity and name affinity graphs of the original graph.
Coverage noise – based on the graph edit operations of edge creation and destruction, simulating changes to the topology of the graph.
Intensity noise – corresponds to changes in the weights of the edges; it involves quantitative variation of the edge weights, with no effect on the graph structure.
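The two noise models can be simulated as in the following sketch; the probability p, the Gaussian perturbation, and the weight floor of 1 are illustrative assumptions.

```python
import random

def add_coverage_noise(adj, p, rng):
    """With probability p per vertex pair, create a missing edge or
    destroy an existing one (symmetric matrix): a topology change."""
    n = len(adj)
    out = [row[:] for row in adj]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                out[i][j] = out[j][i] = 0 if out[i][j] else 1
    return out

def add_intensity_noise(adj, sigma, rng):
    """Perturb the weight of each existing edge; zero-cells stay zero,
    so the graph structure is unaffected."""
    n = len(adj)
    out = [row[:] for row in adj]
    for i in range(n):
        for j in range(i + 1, n):
            if out[i][j]:
                out[i][j] = out[j][i] = max(1, out[i][j] + rng.gauss(0, sigma))
    return out

rng = random.Random(0)
adj = [[0, 2, 0], [2, 0, 3], [0, 3, 0]]
print(add_coverage_noise(adj, 1.0, rng))  # with p=1 every edge is toggled
```

Note the asymmetry the text describes: intensity noise never creates or removes an edge, while coverage noise does nothing but that.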
The original affinity values are denoted as
R = { r_ij } N1×N2 (1)
where N1 is the number of faces in the face graph, N2 is the number of names (characters) in the name graph, and r_ij is the rank index of the original affinity value. A zero-cell represents that no co-occurrence relationship exists, which is a qualitative measure: from the perspective of graph analysis, there is no edge between the vertices of the corresponding row and column, so changing a zero-cell changes the graph structure or topology. To distinguish zero-cell changes, for each row in the original affinity matrix we keep the zero-cells unchanged; the number of zero-cells in the ith row is recorded as null.
B. Sensitivity Analysis
The sensitivity analysis scores for the face affinity values and the name affinity values should be the same; this improves the overall performance of automatic face-character identification in video.
Face Affinity Graph Construction: The face affinity graph is constructed from the detected faces based on the co-occurrence relationship, without clustering the faces, so that it represents every expression of a particular face. It also supports edit operations such as insertion and deletion of a particular face among the detected faces in the face graph.
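As an illustration of constructing such a co-occurrence graph (the scene lists here are made-up data, and scene-level co-occurrence counting is an assumed simplification):

```python
from collections import defaultdict
from itertools import combinations

def build_affinity(scenes):
    """Count, for every pair of faces, how many scenes they share;
    the counts become the edge weights of the face affinity graph."""
    aff = defaultdict(int)
    for faces in scenes:
        for a, b in combinations(sorted(set(faces)), 2):
            aff[(a, b)] += 1
    return dict(aff)

scenes = [["Face1", "Face2"],
          ["Face1", "Face2", "Face4"],
          ["Face2", "Face4"]]
print(build_affinity(scenes))
# ('Face1', 'Face2') share two scenes; Face1 and Face3 never co-occur,
# so that pair has no edge (a zero-cell).
```

The same construction applies to the name affinity graph, with character names from the script in place of detected faces.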
The following Table 1 shows the experimental results of the sensitivity analysis for the face affinity graph. The values of the face affinity graph are based on the co-occurrence relationship; for example, character Face 1 has more affinity with Face 2 than with Face 3 or Face 4, Face 1 has never co-occurred with character Face 3, etc.
Motivated by this, we assume that while the absolute quantitative affinity values are changeable, the relative affinity relationships between characters (e.g., Face 1 is closer to Face 2 than to Face 3) and the qualitative affinity values (e.g., whether Face 2 has co-occurred with Face 3 and Face 4) usually remain unchanged. In this paper, we utilize these preserved statistical properties and propose to represent character co-occurrence in rank order.
The following Table 2 shows the experimental results of the sensitivity analysis for the name affinity graph. The values of the name affinity graph are based on the co-occurrence relationship; for example, character Name 1 has more affinity with Name 2 than with Name 3 or Name 4, Name 1 has never co-occurred with character Name 3, etc.
Name Affinity Graph Construction: The name affinity graph is constructed from the script of the video. It also supports edit operations on the edges of the graph, so that addition and deletion of names are possible.
Figure 6 shows the topology of the name affinity graph, which changes in accordance with graph edit operations on vertices and edges. The qualitative affinity values (e.g., whether Name 2 has co-occurred with Name 3 and Name 4) usually remain unchanged. In this paper, we utilize these preserved statistical properties and propose to represent character co-occurrence in rank order.

C. Data Set

The following Figure 7 shows the training part of the data set for automatic face-name character identification.
The training data consists of face tracks, each of which may contain about 20–50 faces. Due to variations in pose and expression, a face track may present multiple face exemplars. Matching two face tracks of the same person requires only that certain faces of the two sets can be matched.
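A minimal sketch of this set-to-set matching criterion, assuming plain feature vectors and an illustrative distance threshold:

```python
def tracks_match(track_a, track_b, thresh=1.0):
    """Two face tracks match if the closest pair of exemplars
    (one from each track) is within the distance threshold."""
    def dist(x, y):
        return sum((xi - yi) ** 2 for xi, yi in zip(x, y)) ** 0.5
    return min(dist(a, b) for a in track_a for b in track_b) <= thresh

track_a = [(0.0, 0.0), (5.0, 5.0)]   # two exemplars of one track
track_b = [(0.5, 0.0), (9.0, 9.0)]
print(tracks_match(track_a, track_b))  # -> True: (0.0, 0.0) ~ (0.5, 0.0)
```

Taking the minimum over all exemplar pairs is what makes the criterion tolerant of pose and expression variation: only one exemplar per track needs to agree.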

IV. CONCLUSION

In this paper, the proposed ECGM-based method has been used to achieve automatic character identification in movies. A graph matching method has been utilized to build the name-face association between the name affinity network and the face affinity network, which are derived, respectively, from their own domains (script and video).

V. FUTURE WORK

In the future, this work will be improved by clustering the various expressions of a particular character's face into a single group, and by identifying faces extracted from uncontrolled movie video.

References

  1. J. Sang and C. Xu, "Robust face-name graph matching for movie character identification," IEEE Trans. Multimedia, vol. 14, no. 3, June 2012.
  2. J. Sang, C. Liang, C. Xu, and J. Cheng, "Robust movie character identification and the sensitivity analysis," in Proc. ICME, 2011, pp. 1–6.
  3. R. G. Cinbis, J. Verbeek, and C. Schmid, "Unsupervised metric learning for face identification in TV video," in Proc. Int. Conf. Comput. Vis., 2011, pp. 1559–1566.
  4. L. Lin, X. Liu, and S. C. Zhu, "Layered graph matching with composite cluster sampling," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 8, pp. 1426–1442, Aug. 2010.
  5. W. Fitzgibbon and A. Zisserman, "On affine invariant clustering and automatic cast listing in movies," in Proc. ECCV, 2002, pp. 304–320.
  6. J. Stallkamp, H. K. Ekenel, and R. Stiefelhagen, "Video-based face recognition on real-world data," in Proc. Int. Conf. Comput. Vis., 2007, pp. 1–8.
  7. Y. Zhang, C. Xu, J. Cheng, and H. Lu, "Naming faces in films using hypergraph matching," in Proc. ICME, 2009, pp. 278–281.
  8. Bengoetxea, "Inexact graph matching using estimation of distribution algorithms," Ph.D. dissertation, École Nationale Supérieure des Télécommunications, Paris, France, 2003.
  9. Y. Zhang, C. Xu, H. Lu, and Y. Huang, "Character identification in feature-length films using global face-name matching," IEEE Trans. Multimedia, vol. 11, no. 7, pp. 1276–1288, Nov. 2009.