PhD Studentship – “Object and Action Recognition”

Summary:  “Object and Action Recognition Assisted by Computational Linguistics”.

The aim of this project is to investigate how computer vision methods, such as object and
action recognition, may be assisted by computational linguistic models such as WordNet.
The main challenge of object and action recognition is scaling methods from handling a
dozen categories (e.g. PASCAL VOC) to thousands of concepts (e.g. ImageNet ILSVRC).
This project is expected to contribute to automated visual content annotation and, more
widely, to bridging the semantic gap between computational approaches to vision and
language.

Deadline: 20th March 2015.

A PhD studentship is advertised at

Object and Action Recognition Assisted by Computational Linguistics

The research is a collaboration between Kingston University (Digital Imaging Research Centre) and the University of Lincoln (DCAPI group)

For details of the application process:


Posted in Uncategorized

Text Analysis for Health Related Applications, from UGC

Text Analysis of User-Generated Contents for Health Related Applications

Deema Abdal Hafeth, Amr Ahmed, David Cobham

Downloads:   Paper (PDF) ,   Dataset 

Poster – Download PDF from the link below

Text Analysis for Health Applications_Poster




Clinical reports include valuable medical information in free-form text, which can be extremely useful in providing better patient care. Text analysis techniques have demonstrated the potential to unlock such information from text. I2b2 designed a smoking challenge requiring the automatic classification of patients by smoking status, based on clinical reports (Uzuner Ö et al., 2008). This was motivated by the fact that such classification, and similar extractions, can be useful in further studies and research, e.g. asthma studies.


Aim & Motivation


Our aim is to investigate the potential of achieving similar results by analysing the increasingly and widely available online user-generated content (UGC), e.g. forums. This is motivated by the fact that clinical reports are not widely available, and gaining access to them involves a long and rigorous approval process.


We also aim to investigate appropriate compact feature sets that facilitate further levels of study, e.g. psycholinguistics, as explained later.




Method

•Data was collected systematically, with set criteria, from web forums.
•Properties of the text were extracted, for both forum data and clinical reports, to compare the writing styles of the two sources.
•A machine learning (Support Vector Machine) classifier model was built from the collected data, using a baseline feature set (as per the I2B2 challenge), for each data set (clinical and forum).
•Another model was built using a new feature set, LIWC (Linguistic Inquiry and Word Count) + POS (Part of Speech), for each data set (clinical and forum).
•Smoking status classification accuracy was calculated for each of the above models on each dataset.
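The steps above can be sketched in miniature. The poster's actual pipeline uses the LIWC dictionary, a proper POS tagger, and an SVM classifier; in this illustrative sketch a tiny hand-made category lexicon, a crude suffix-based tagger, and a nearest-centroid rule stand in for all three, purely to show how a forum post is mapped to a small fixed-length feature vector and then classified.

```python
import math

# Hypothetical mini-lexicon standing in for the LIWC categories.
LEXICON = {
    "smoke":    {"smoke", "smoking", "cigarette", "nicotine", "quit"},
    "negation": {"not", "never", "no", "n't"},
    "emotion":  {"happy", "sad", "anxious", "craving", "relieved"},
}

def pos_tag(word):
    """Very crude stand-in for a POS tagger (suffix heuristics only)."""
    if word.endswith("ing") or word.endswith("ed"):
        return "VERB"
    if word.endswith("ly"):
        return "ADV"
    return "OTHER"

def features(post):
    """Map a post to a compact fixed-length vector:
    lexicon-category proportions + POS-tag proportions."""
    words = post.lower().split()
    n = max(len(words), 1)
    vec = [sum(w in LEXICON[c] for w in words) / n for c in sorted(LEXICON)]
    for tag in ("VERB", "ADV", "OTHER"):
        vec.append(sum(pos_tag(w) == tag for w in words) / n)
    return vec

def nearest_centroid(train, post):
    """train: {label: [posts]}.  Classify by distance to the mean
    feature vector of each class (a stand-in for the SVM)."""
    x = features(post)
    best, best_d = None, math.inf
    for label, posts in train.items():
        vecs = [features(p) for p in posts]
        centroid = [sum(col) / len(vecs) for col in zip(*vecs)]
        d = math.dist(x, centroid)
        if d < best_d:
            best, best_d = label, d
    return best

train = {
    "smoker":     ["i keep smoking a cigarette every morning",
                   "craving nicotine badly not happy"],
    "non-smoker": ["never smoked and feeling happy",
                   "no cigarette for me ever"],
}
print(nearest_centroid(train, "still smoking and craving a cigarette"))  # smoker
```

The point of the compact representation is visible even at this scale: every post, whatever its length, becomes the same six numbers, whereas a bag-of-words baseline grows with the vocabulary (>20K features in the poster's experiments).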



Results

•In general, the classification accuracy from forum posts is found to be in line with the baseline results obtained on clinical records (figure 1).
•Using LIWC+POS features (125 features) for classification obtained slightly lower accuracy than the baseline features (>20K features), but the feature set is compact and facilitates further levels of study (psycholinguistics).
•Different factors that affect the classification accuracy of forum posts, with LIWC+POS, have been explored, such as (figure 2):


o  Post’s length (number of words).
o  Dataset size (number of posts).
o  Removing parts of the features.

Conclusion & Future work


The results suggest that analysing user-generated content, such as forums, can be as useful as analysing clinical reports. The proposed LIWC+POS feature set, while achieving comparable results, is highly compact and facilitates further levels of study (e.g. psycholinguistics).


We expect our work to benefit health researchers and the medical industry, by providing them with tools to quantify and better understand people’s relationship with smoking and how they behave online, and also forum members, by enriching their use of this rapidly developing and increasingly popular medium through finding people in the same situation.


For future work:

• Improve the classification accuracy with LIWC+POS, and use this feature set as a tool for further and deeper analysis of a person’s emotional and psychological state at various stages of the journey to stop smoking.


•Integrate other lexical dictionaries, such as WordNet, to capture colloquial words and expressions not included in the LIWC dictionary.




Uzuner Ö, Goldstein I, Kohane I. Identifying patient smoking status from medical discharge records. J Am Med Inform Assoc. 2008;15(1):14–24. PMID: 17947624.



Interested in joining us as a “Research Fellow”?
Get in touch by emailing

Welcome to the DCAPI research group.

Our multi-disciplinary research is mainly focused on the analysis and mining of digital content, both visual (images and videos) and textual. This includes computer vision, image/video processing and analysis, semantic analysis and annotation, action recognition, image/video matching and similarity (copy and near-duplicate detection), and many others.

We welcome any discussion and potential collaboration. Please get in touch with us (contacts on the right side-bar).

Posted in Computer Vision, Computer Visions, Conference, dcapi, DCAPI blog, PhD, Research, Uncategorized, Video Analysis, video information retrieval, Video search engine

Conference paper presented at the “World Congress on Engineering 2013”

Saddam Bekhet presented his accepted paper at the “World Congress on Engineering 2013“.
The paper title is “Video Matching Using DC-image and Local Features”.


This paper presents a framework for video matching based on local features extracted from the DC-image of MPEG compressed videos, without decompression. The relevant arguments and supporting evidence are discussed for developing video similarity techniques that work directly on compressed videos, without decompression, especially utilising small-size images. Two experiments were carried out to support this. The first compares the DC-image with the full I-frame, in terms of matching performance and the corresponding computational complexity. The second compares local features with global features for video matching, especially in the compressed domain and with small-size images. The results confirm that the use of the DC-image, despite its highly reduced size, is promising, as it produces at least similar (if not better) matching precision compared to the full I-frame. Also, SIFT, as a local feature, outperforms most of the standard global features in precision. On the other hand, its computational complexity is relatively higher, but still within the real-time margin, and various optimisations can further reduce it.
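As background to the DC-image idea: the DC coefficient of an 8×8 DCT block is proportional to the block’s mean intensity, so the tiny DC-image can be approximated from a decoded grayscale frame by averaging 8×8 blocks. The sketch below illustrates only that relationship; it is not the paper’s compressed-domain extraction, which reads the DC coefficients directly from the MPEG stream without decompression.

```python
def dc_image(frame, block=8):
    """Approximate the DC-image of a grayscale frame (2D list of
    intensities) by averaging each block x block tile: the DCT DC
    coefficient of a tile is proportional to its mean, so this
    mirrors keeping only the DC terms of an MPEG I-frame."""
    h, w = len(frame), len(frame[0])
    out = []
    for by in range(0, h - block + 1, block):
        row = []
        for bx in range(0, w - block + 1, block):
            total = sum(frame[y][x]
                        for y in range(by, by + block)
                        for x in range(bx, bx + block))
            row.append(total / (block * block))
        out.append(row)
    return out

# A 16x16 frame whose left half is dark (0) and right half bright (255)
# reduces to a 2x2 DC-image: 64x fewer pixels, coarse structure kept.
frame = [[0] * 8 + [255] * 8 for _ in range(16)]
print(dc_image(frame))  # [[0.0, 255.0], [0.0, 255.0]]
```

The 64-fold size reduction is what makes matching on the DC-image so much cheaper than on the full I-frame, while, as the paper’s results show, enough structure survives for local features to remain discriminative.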

Well done and congratulations to Saddam Bekhet.