
Dr Saddam Bekhet successfully passed his viva

Congratulations to Saddam, who successfully passed his PhD viva on 23rd May 2016.

The examiners commended Saddam's work and contributions, and emphasised how well written the thesis is.

A well-deserved achievement Saddam, well done.

And all the best for your future career.

Interested in joining us as a “Research Fellow”?
Get in touch by emailing aahmed@lincoln.ac.uk

Welcome to the DCAPI research group.

Our multi-disciplinary research focuses mainly on the analysis and mining of digital content, both visual (images and videos) and textual. This includes computer vision, image/video processing and analysis, semantic analysis and annotation, action recognition, image/video matching and similarity (copy and near-duplicate detection), and many other topics.

We welcome any discussion and potential collaboration. Please get in touch with us (contacts on the right side-bar).


Featured Research Topics


Conference paper presented at the "World Congress on Engineering 2013"

Saddam Bekhet presented his accepted paper at the "World Congress on Engineering 2013".
The paper is titled "Video Matching Using DC-image and Local Features".

Abstract:

This paper presents a framework for video matching based on local features extracted from the DC-image of MPEG compressed videos, without decompression. The relevant arguments and supporting evidence are discussed for developing video similarity techniques that work directly on compressed videos, without decompression, and especially utilising small-size images. Two experiments are carried out to support the above. The first compares the DC-image with the I-frame in terms of matching performance and the corresponding computational complexity. The second compares local features with global features for video matching, especially in the compressed domain and with small-size images. The results confirm that the use of the DC-image, despite its highly reduced size, is promising, as it produces at least similar (if not better) matching precision compared to the full I-frame. Also, using SIFT as a local feature outperforms most of the standard global features in precision. On the other hand, its computational complexity is relatively higher, but still within the real-time margin, and various optimisations could further reduce it.
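As background to the DC-image idea: the DC coefficient of each 8x8 DCT block in an MPEG I-frame is proportional to that block's mean, so the DC coefficients alone form a 1/8-scale thumbnail of the frame that can be read from the compressed stream without full decoding. A minimal sketch (illustrative only, not the paper's implementation) that emulates this from raw pixels by block averaging:

```python
import numpy as np

def dc_image(frame, block=8):
    """Emulate a DC-image: one value (the block mean) per 8x8 block.

    In a real MPEG stream the same thumbnail comes directly from the
    entropy-decoded DC coefficients, skipping inverse DCT entirely.
    """
    h, w = frame.shape
    h, w = h - h % block, w - w % block  # crop to a multiple of the block size
    blocks = frame[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))  # average over each block's rows and cols
```

For a 352x288 frame this yields a 44x36 thumbnail, which is the scale at which the paper's matching experiments operate.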

Well done and congratulations to Saddam Bekhet.


New conference paper accepted to the "World Congress on Engineering": Video Matching Using DC-image and Local Features

A new conference paper has been accepted for publication in the "World Congress on Engineering 2013".

The paper is titled "Video Matching Using DC-image and Local Features".

Abstract:

This paper presents a framework for video matching based on local features extracted from the DC-image of MPEG compressed videos, without decompression. The relevant arguments and supporting evidence are discussed for developing video similarity techniques that work directly on compressed videos, without decompression, and especially utilising small-size images. Two experiments are carried out to support the above. The first compares the DC-image with the I-frame in terms of matching performance and the corresponding computational complexity. The second compares local features with global features for video matching, especially in the compressed domain and with small-size images. The results confirm that the use of the DC-image, despite its highly reduced size, is promising, as it produces at least similar (if not better) matching precision compared to the full I-frame. Also, using SIFT as a local feature outperforms most of the standard global features in precision. On the other hand, its computational complexity is relatively higher, but still within the real-time margin, and various optimisations could further reduce it.

Well done and congratulations to Saddam Bekhet.


New journal paper accepted to "Multimedia Tools and Applications"

New Journal paper accepted for publishing in the Journal of “Multimedia Tools and Applications“.

The paper is titled "A Framework for Automatic Semantic Video Annotation utilising Similarity and Commonsense Knowledgebases".

Abstract:

The rapidly increasing quantity of publicly available videos has driven research into developing automatic tools for indexing, rating, searching and retrieval. Textual semantic representations, such as tagging, labelling and annotation, are often important factors in the process of indexing any video, because of their user-friendly way of representing the semantics appropriate for search and retrieval. Ideally, this annotation should be inspired by the human cognitive way of perceiving and of describing videos. The difference between the low-level visual contents and the corresponding human perception is referred to as the ‘semantic gap’. Tackling this gap is even harder in the case of unconstrained videos, mainly due to the lack of any previous information about the analyzed video on the one hand, and the huge amount of generic knowledge required on the other.

This paper introduces a framework for the automatic semantic annotation of unconstrained videos. The proposed framework utilises two non-domain-specific layers: low-level visual similarity matching, and an annotation analysis that employs commonsense knowledgebases. A commonsense ontology is created by incorporating multiple structured semantic relationships. Experiments and black-box tests are carried out on standard video databases for action recognition and video information retrieval. White-box tests examine the performance of the individual intermediate layers of the framework, and the evaluation of the results and the statistical analysis show that integrating visual similarity matching with commonsense semantic relationships provides an effective approach to automated video annotation.
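To make the two-layer idea concrete, here is a deliberately simplified sketch (all names and the toy relation table are illustrative assumptions, not the paper's implementation): the first layer ranks annotated reference videos by low-level visual similarity to the query, and the second layer keeps only candidate tags that a commonsense relation supports as semantically coherent with the others.

```python
import numpy as np

# Toy stand-in for a commonsense knowledgebase: pairs of related concepts.
# A real system would query a large resource such as ConceptNet instead.
RELATED = {("horse", "riding"), ("ball", "kicking"), ("horse", "animal")}

def related(a, b):
    return (a, b) in RELATED or (b, a) in RELATED or a == b

def annotate(query_sig, references, top_k=2):
    """Layer 1: rank references by visual-signature distance to the query.
    Layer 2: keep tags supported by at least one other related candidate.

    references: list of (signature_vector, tag_list) pairs.
    """
    ranked = sorted(references,
                    key=lambda r: np.linalg.norm(query_sig - r[0]))[:top_k]
    candidates = [t for _, tags in ranked for t in tags]
    return sorted({t for t in candidates
                   if any(related(t, u) for u in candidates if u != t)})
```

The point of the second layer is that tags propagated from visually similar videos can still be mutually inconsistent; filtering through semantic relationships discards isolated, unsupported tags.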


Well done and congratulations to Amjad Altadmri.