Human motion analysis has been an active topic since the early days of computer vision owing to its relevance to a wide variety of domains. It has become central to many types of applications, including gaming, monitoring, sign language recognition, and medical applications. These applications range from simple gesture detection to complex behavior understanding, and vary in the body parts involved and the duration of the movement. The topic has evolved substantially alongside major technological advances, especially in capture technologies and machine learning. The main concern of this dissertation is human behavior understanding through vision-based analysis of body motion, which can be conceptually categorized into different motion modalities: gestures, actions, activities, and fine-grained hand gestures. Our aim was to develop new theoretical and applied approaches advancing motion representation and the recognition of human behavior involving different body parts, based on various sources of information such as 3D mesh, depth, and skeleton data. Since movements unfold in both space and time, it is essential to provide solutions that describe their spatial and temporal properties and to examine how variations in both dimensions influence the recognition of the meaning of the motion. For this purpose, we proposed a number of motion representation and recognition frameworks and demonstrated their effectiveness on several motion recognition tasks, including gestures, actions, and activities.
Defended on 05/12/2018.