Shruti Palaskar

Graduate Student
Carnegie Mellon University
Pittsburgh, PA

Hi!

I am a PhD student at the Language Technologies Institute of the School of Computer Science at Carnegie Mellon University. My research interests lie in the areas of multimodal machine learning, speech recognition, and natural language processing. I am fortunate to be advised by Prof. Florian Metze and Prof. Alan W Black. I also work closely with Prof. Yonatan Bisk.

My research is supported by fellowships from Facebook (2019-2021), the Center for Machine Learning and Health (2018-2019), and CMU (2016-2018).

I have interned at the Allen Institute for AI (AI2) (Summer '21), Facebook AI (Summer '20), Abridge AI (Summer '19), and the Johns Hopkins Summer Workshops (Summers '17 and '18). Prior to starting my PhD, I received my Master's degree in Language Technologies from LTI, CMU in 2018 and my Bachelor's degree in Computer Engineering from Pune Institute of Computer Technology in 2016.


Updates

[Jun 2021] New paper on Multimodal Speech Summarization through Semantic Concept Learning is accepted at INTERSPEECH 2021.
[May 2021] I will be interning at the Allen Institute for AI (AI2) this summer, working on Multimodal Rationalization with Ana Marasović.
[Apr 2021] I successfully proposed my thesis titled Multimodal Learning from Videos: Exploring Models and Task Complexities. [PDF] [Slides]
[Mar 2021] New paper on How2Sign: A Large-Scale Multimodal Dataset for Continuous American Sign Language is accepted at CVPR 2021.
[Nov 2020] Leading the SCS, CMU effort for PhD applicant mentoring. PhD applicants for 2021, check out https://www.cs.cmu.edu/gasp.
[Oct 2020] Invited to participate in the Rising Star in EECS Workshop at UC Berkeley.
[May 2020] Excited to join Facebook AI as an intern.
[Apr 2020] Gave a lecture on Multimodality in 11-4/611 NLP at LTI, CMU. [slides]
[Jan 2020] Co-chair of the Socio-cultural Diversity and Inclusion committee for ACL 2020.
[Oct 2019] Talk on Learning from Large-Scale Instructional Videos at IBM Research, Yorktown Heights.
[Sep 2019] Talk on Learning from Large-Scale Instructional Videos at Facebook AI, Menlo Park.
[Jul 2019] Talk on Multimodal Acoustic Word Embeddings at the University of Copenhagen, Denmark.
[Mar 2019] Talk on Multimodal Acoustic Word Embeddings at the 6th Amazon Graduate Student Symposium, Seattle. Slides here!
[Feb 2019] Co-organizer of the ICML 2019 How2 Challenge and Workshop. If you work on anything multimodal, hope to see you there!
[Jan 2019] Check out the special session on Multimodal Representation Learning for Language Generation and Understanding at ICASSP 2019.
[Dec 2018] Received the Facebook Fellowship. Thank you Facebook!
[Nov 2018] The How2 dataset of open-domain instructional videos has been released! Check it out for different multimodal modeling tasks!
[Oct 2018] Ramon and I won first place in the audio-visual track of DSTC7. We will present this work at AAAI 2019 in Hawaii.
[Sep 2018] PhD student panelist at the Young Female Researchers in Speech Workshop at Interspeech 2018.
[Jul 2018] Received the 2018-2019 Center for Machine Learning and Health PhD Fellowship. Thank you CMLH!
[Sep 2016] Received the CMU LTI Graduate Research Fellowship for 2016-2018.