“We’ve lost you Ian”: Multi-modal corpus innovations in capturing, processing and analysing professional online spoken interactions

Keywords: online workplace communication, corpus pragmatics, multi-modal corpus linguistics, corpus construction, transcription

Abstract

Online communication via video platforms has become a standard component of workplace interaction for many businesses and employees. The rapid uptake of virtual meeting platforms under COVID-19 restrictions meant that many people had to adjust quickly to communicating via this medium without much (if any) training in how workplace communication is successfully facilitated on these platforms. The Interactional Variation Online project aims to analyse a corpus of virtual meetings to gain a multi-modal understanding of this context of language use. This paper describes one component of the project, namely guidelines that can be replicated when constructing a corpus of multi-modal data derived from recordings of online meetings. A further aim is to determine typical features of virtual meetings in comparison to face-to-face meetings so as to inform good practice in virtual workplace interactions. By examining how non-verbal behaviours such as head movements, gaze and posture interact with spoken discourse in this medium, we both undertake a holistic analysis of interaction in virtual meetings and produce a template for the development of multi-modal corpora for future analysis.
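The corpus construction described above rests on aligning spoken discourse with non-verbal behaviours on a shared timeline. As a purely illustrative aid, and not the project's actual transcription or coding scheme, the sketch below shows how time-aligned annotation tiers (in the style of tools such as ELAN) for speech, gaze and head movement might be represented and queried; all tier names, labels and timings here are invented.

```python
# A minimal, hypothetical sketch of time-aligned multi-modal annotation tiers.
# Tier names, labels and timings are invented for illustration only; they do
# not reproduce the IVO project's actual transcription or coding scheme.
from dataclasses import dataclass

@dataclass
class Annotation:
    start: float   # seconds from the start of the recording
    end: float     # seconds from the start of the recording
    value: str     # a transcribed utterance or a non-verbal label

def overlapping(speech: Annotation, tier: list[Annotation]) -> list[Annotation]:
    """Return annotations from a non-verbal tier that overlap a speech segment in time."""
    return [a for a in tier if a.start < speech.end and a.end > speech.start]

# Illustrative data: one utterance aligned with gaze and head-movement tiers.
speech = Annotation(12.4, 14.1, "we've lost you Ian")
gaze = [Annotation(11.0, 13.0, "gaze-to-screen"), Annotation(13.0, 15.0, "gaze-away")]
head = [Annotation(12.8, 13.6, "head-tilt")]

print(overlapping(speech, gaze))  # both gaze annotations overlap the utterance
print(overlapping(speech, head))  # the head tilt falls inside the utterance
```

Keeping each modality in its own tier and intersecting tiers by time, rather than embedding non-verbal labels inside the transcript, is what allows the same recording to be re-queried for other behaviours in later analyses.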

Published
2024-02-20
How to Cite
O’Keeffe, A., Knight, D., Mark, G., Fitzgerald, C., McNamara, J., Adolphs, S., Cowan, B., Fahey Palma, T., Farr, F., & Peraldi, S. (2024). “We’ve lost you Ian”: Multi-modal corpus innovations in capturing, processing and analysing professional online spoken interactions. Research in Corpus Linguistics, 12(2), 1–23. https://doi.org/10.32714/ricl.12.02.02