000 06511nam a22006735i 4500
001 978-3-642-00525-1
003 DE-He213
005 20240423125810.0
007 cr nn 008mamaa
008 100301s2009 gw | s |||| 0|eng d
020 _a9783642005251
_9978-3-642-00525-1
024 7 _a10.1007/978-3-642-00525-1
_2doi
050 4 _aQ337.5
050 4 _aTK7882.P3
072 7 _aUYQP
_2bicssc
072 7 _aCOM016000
_2bisacsh
072 7 _aUYQP
_2thema
082 0 4 _a006.4
_223
245 1 0 _aMultimodal Signals: Cognitive and Algorithmic Issues
_h[electronic resource] :
_bCOST Action 2102 and euCognition International School Vietri sul Mare, Italy, April 21-26, 2008, Revised Selected and Invited Papers /
_cedited by Anna Esposito, Amir Hussain, Maria Marinaro, Raffaele Martone.
250 _a1st ed. 2009.
264 1 _aBerlin, Heidelberg :
_bSpringer Berlin Heidelberg :
_bImprint: Springer,
_c2009.
300 _aXIII, 348 p.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aLecture Notes in Artificial Intelligence,
_x2945-9141 ;
_v5398
505 0 _aInteractive and Unsupervised Multimodal Systems -- Multimodal Human Machine Interactions in Virtual and Augmented Reality -- Speech through the Ear, the Eye, the Mouth and the Hand -- Multimodality Issues in Conversation Analysis of Greek TV Interviews -- Representing Communicative Function and Behavior in Multimodal Communication -- Using the iCat as Avatar in Remote Meetings -- Using Context to Disambiguate Communicative Signals -- Modeling Aspects of Multimodal Lithuanian Human - Machine Interface -- Using a Signing Avatar as a Sign Language Research Tool -- Data Fusion at Different Levels -- Voice Technology Applied for Building a Prototype Smart Room -- Towards Facial Gestures Generation by Speech Signal Analysis Using HUGE Architecture -- Multi-modal Speech Processing Methods: An Overview and Future Research Directions Using a MATLAB Based Audio-Visual Toolbox -- From Extensity to Protensity in CAS: Adding Sounds to Icons -- Statistical Modeling of Interpersonal Distance with Range Imaging Data -- Verbal and Nonverbal Communication Signals -- How the Brain Processes Language in Different Modalities -- From Speech and Gestures to Dialogue Acts -- The Language of Interjections -- Gesture and Gaze in Persuasive Political Discourse -- Content in Embedded Sentences -- A Distributional Concept for Modeling Dialectal Variation in TTS -- Regionalized Text-to-Speech Systems: Persona Design and Application Scenarios -- Vocal Gestures in Slovak: Emotions and Prosody -- Spectrum Modification for Emotional Speech Synthesis -- Comparison of Grapheme and Phoneme Based Acoustic Modeling in LVCSR Task in Slovak -- Automatic Motherese Detection for Face-to-Face Interaction Analysis -- Recognition of Emotions in German Speech Using Gaussian Mixture Models -- Electroglottogram Analysis of Emotionally Styled Phonation -- Emoticonsciousness -- Urban Environmental Information Perception and Multimodal Communication: The Air Quality Example -- Underdetermined Blind Source Separation Using Linear Separation System -- Articulatory Synthesis of Speech and Singing: State of the Art and Suggestions for Future Research -- Qualitative and Quantitative Crying Analysis of New Born Babies Delivered Under High Risk Gestation -- Recognizing Facial Expressions Using Model-Based Image Interpretation -- Face Localization in 2D Frontal Face Images Using Luminosity Profiles Analysis.
520 _aThis book constitutes the thoroughly refereed post-conference proceedings of the COST Action 2102 and euCognition-supported international school on Multimodal Signals: "Cognitive and Algorithmic Issues", held in Vietri sul Mare, Italy, in April 2008. The 34 revised full papers presented were carefully reviewed and selected from participants’ contributions and invited lectures given at the workshop. The volume is organized in two parts. The first part, on Interactive and Unsupervised Multimodal Systems, contains 14 papers dealing with the theoretical and computational issues of defining algorithms, programming languages, and deterministic models to recognize and synthesize multimodal signals: facial and vocal expressions of emotions, tones of voice, gestures, eye contact, spatial arrangements, patterns of touch, expressive movements, writing patterns, and cultural differences, in anticipation of the implementation of intelligent avatars and interactive dialogue systems that could be exploited to improve user access to future telecommunication services. The second part, on Verbal and Nonverbal Communication Signals, presents 20 original studies devoted to modeling the timing synchronization between speech production, gestures, and facial and head movements in human communicative expressions, and to their mutual contribution to effective communication.
650 0 _aPattern recognition systems.
650 0 _aArtificial intelligence.
650 0 _aUser interfaces (Computer systems).
650 0 _aHuman-computer interaction.
650 0 _aApplication software.
650 0 _aMultimedia systems.
650 0 _aComputers and civilization.
650 1 4 _aAutomated Pattern Recognition.
650 2 4 _aArtificial Intelligence.
650 2 4 _aUser Interfaces and Human Computer Interaction.
650 2 4 _aComputer and Information Systems Applications.
650 2 4 _aMultimedia Information Systems.
650 2 4 _aComputers and Society.
700 1 _aEsposito, Anna.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
700 1 _aHussain, Amir.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
700 1 _aMarinaro, Maria.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
700 1 _aMartone, Raffaele.
_eeditor.
_4edt
_4http://id.loc.gov/vocabulary/relators/edt
710 2 _aSpringerLink (Online service)
773 0 _tSpringer Nature eBook
776 0 8 _iPrinted edition:
_z9783642005244
776 0 8 _iPrinted edition:
_z9783642005268
830 0 _aLecture Notes in Artificial Intelligence,
_x2945-9141 ;
_v5398
856 4 0 _uhttps://doi.org/10.1007/978-3-642-00525-1
912 _aZDB-2-SCS
912 _aZDB-2-SXCS
912 _aZDB-2-LNC
942 _cSPRINGER
999 _c181826
_d181826