LIN/CSE 667: Advanced Topics in Computational Linguistics
Instructor Name: Dr. Cassandra Jacobs
Class Day and Time: MW 9:00-10:20AM
Number of Credits: 1-3 units
Email Address: cxjacobs@buffalo.edu
Office Location: 614 Baldy Hall
Office Hours: On Zoom by appointment
Course description
This course aims to give students an overview of the key areas that make up the field of computational linguistics, an understanding of the field's major challenges and the major application areas for language processing techniques, and the skills to implement fundamental language processing algorithms. This course is dual-listed as CSE 667LEC and LIN 667LEC.
Required Text and Materials
All reading materials will be made available on UBLearns as well as the course webpage and will consist primarily of journal articles or conference proceedings.
Theme: Linguistic probes and linguistic representations in large language models
The landscape of natural language processing (NLP) has changed dramatically in the past decade with the explosion of neural language models for downstream NLP tasks. A persistent challenge, however, is that earlier methods, e.g., structured perceptron approaches, random forests, etc., were significantly more transparent from the modeler's perspective. Given the nature of their representations, it was trivial to inspect why a classifier made the decision it did, and feature ablation made causal analysis cheap and easy. While deep learning models largely resist this kind of inspection, it is of great importance to scientists and practitioners of NLP to understand concretely what neural language models encode, e.g., linguistic or statistical regularities. This course is a deep dive into the literature on probing for linguistic factors (e.g., syntactic structure) in the decisions of neural language models, in addition to the statistical regularities that they encode (e.g., linguistic bias).
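To make these two themes concrete, below is a minimal sketch (mine, not from the course readings) contrasting feature ablation in a transparent bag-of-words classifier with a linear probe trained on frozen language model representations. The toy data and task, the choice of bert-base-uncased, and the crude mean-pooling are illustrative assumptions only.

# A minimal, illustrative sketch of feature ablation vs. probing (Python).
import torch
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

texts = ["the cat sleeps", "the cats sleep", "the dog barks", "the dogs bark"]
labels = [0, 1, 0, 1]  # toy property: does the sentence have a plural subject?

# (1) A transparent classical pipeline: every feature is inspectable, and
# ablating one (zeroing its column) is a cheap causal test of its contribution.
vec = CountVectorizer()
X = vec.fit_transform(texts).toarray()
clf = LogisticRegression().fit(X, labels)
X_ablated = X.copy()
X_ablated[:, vec.vocabulary_["cats"]] = 0  # ablate the feature for "cats"
print("accuracy drop after ablation:", clf.score(X, labels) - clf.score(X_ablated, labels))

# (2) A linear probe over a frozen neural language model: fit the same simple
# classifier on its hidden states; above-chance accuracy (on held-out data, in
# a real experiment) suggests the property is linearly decodable.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
lm = AutoModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    enc = tok(texts, return_tensors="pt", padding=True)
    H = lm(**enc).last_hidden_state.mean(dim=1).numpy()  # mean-pooled sentence vectors
probe = LogisticRegression(max_iter=1000).fit(H, labels)
print("probe accuracy:", probe.score(H, labels))

The papers on the schedule below develop far more careful versions of exactly these recipes, with controlled datasets, held-out evaluation, and baselines for probe expressivity.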
Goals
Each course learning outcome is listed with its instructional methods and assessment methods:

1. Obtain familiarity with probing methods to test for the presence of linguistic structure in neural language model representations.
   Instructional methods: classroom presentation; leading class discussion; course reading posts; classroom participation.
   Assessment methods: completion rubrics.

2. Understand the effects of including linguistic features or metadata as part of neural language model feature engineering; better understand types of probes and fine-tuning methods.
   Instructional methods: course readings and classroom participation.
   Assessment methods: forum participation on Blackboard/UBLearns.

3. Obtain fluency in communication of scientific results and discussion of scientific topics.
   Instructional methods: classroom presentation; leading class discussion; final project.
   Assessment methods: presentation rubric; forum participation on Blackboard/UBLearns.

4. Obtain competency in the design and completion of a research project.
   Instructional methods: final project.
   Assessment methods: completion rubrics.
Weekly class/lecture structure
The course is conducted primarily through a mix of instructor-guided discussion and student-led discussion. Each Wednesday, we will discuss a broad linguistic domain relevant to doing NLP with deep learning, why interpretability is critical for that domain, and the technological tools that have been developed to probe different structures in that linguistic domain. Particular attention will be paid to the nature of the datasets leveraged or generated during lecture. On Mondays, two students will each present one paper from the current theme. Then, by the following Wednesday, every non-presenter will submit a review of one of the papers from the preceding week.
Week 1, Wednesday: Topic 1 instructor-guided discussion
Week 2, Monday: Topic 1 student-led discussion
Week 2, Wednesday: Topic 2 instructor-guided discussion
Week 3, Monday: Topic 2 student-led discussion
Week 3, Wednesday: Topic 3 instructor-guided discussion; Topic 1 paper reviews due
Week 4, Monday: Topic 3 student-led discussion
Week 4, Wednesday: Topic 4 instructor-guided discussion; Topic 2 paper reviews due
Grade composition
40%: Weekly reviews posted to UBLearns
20%: Student-led presentation & discussion
20%: Final project paper
10%: Final project presentation
10%: Participation in class and monthly self-assessments
Weekly paper reviews
As part of this course, you will be asked to provide an ACL Rolling Review-style paper review for one of two selected papers from each week. The goal of these assignments is to encourage you to analyze research work from a scientific perspective. Do not be afraid if the task is challenging – it is expected that the quality of your reviews will improve over the course of the semester as you gain familiarity with the literature and obtain feedback on your reviews. More details are available here: https://aclrollingreview.org/reviewform
Andreas Madsen, Siva Reddy, and Sarath Chandar. 2022. Post-hoc Interpretability for Neural NLP: A Survey. ACM Computing Surveys. Just Accepted (June 2022). https://doi.org/10.1145/3546577
September 7: Instructor-guided discussion on Interpreting Neural NLP - Discussion due September 6 at 10pm
Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3543–3556, Minneapolis, Minnesota. Association for Computational Linguistics. https://aclanthology.org/N19-1357/
Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not Explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 11–20, Hong Kong, China. Association for Computational Linguistics. https://aclanthology.org/D19-1002/
Tenney, I., Das, D., & Pavlick, E. (2019, July). BERT Rediscovers the Classical NLP Pipeline. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 4593-4601). https://aclanthology.org/P19-1452/
Durrani, N., Sajjad, H., Dalvi, F., & Belinkov, Y. (2020, November). Analyzing Individual Neurons in Pre-trained Language Models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp. 4865-4880). https://aclanthology.org/2020.emnlp-main.395/
Week 3 (September 12-): Syntactic Structure 1
September 12: Instructor-guided discussion on grammatical categories & in-class discussion of week 2 readings
September 14: Instructor-guided discussion on larger syntactic structures - Discussion due September 13 at 10pm - In-class "peer review" walkthrough for Week 5 assignment
Kim, N., & Smolensky, P. (2021, February). Testing for Grammatical Category Abstraction in Neural Language Models. In Proceedings of the Society for Computation in Linguistics 2021 (pp. 467-470). https://aclanthology.org/2021.scil-1.59/ - Instructor review / presentation demo
Kim, N., Rawlins, K., Van Durme, B., & Smolensky, P. (2019, July). Predicting the argumenthood of English prepositional phrases. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, No. 01, pp. 6578-6585). https://dl.acm.org/doi/abs/10.1609/aaai.v33i01.33016578
Marius Mosbach, Stefania Degaetano-Ortlieb, Marie-Pauline Krielke, Badr M. Abdullah, and Dietrich Klakow. 2020. A Closer Look at Linguistic Knowledge in Masked Language Models: The Case of Relative Clauses in American English. In Proceedings of the 28th International Conference on Computational Linguistics, pages 771–787, Barcelona, Spain (Online). International Committee on Computational Linguistics. https://aclanthology.org/2020.coling-main.67
Gulordava, K., Bojanowski, P., Grave, É., Linzen, T., & Baroni, M. (2018, June). Colorless Green Recurrent Networks Dream Hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers) (pp. 1195-1205). https://aclanthology.org/N18-1108/
Start thinking about your final projects!
Week 4 (September 19-): Syntactic Structure 2
September 19: Instructor-guided discussion on syntactic processes
[No paper review due this week]
September 21: Instructor-guided discussion on syntactic processes - Discussion due September 20 at 10pm
Aina, L., & Linzen, T. (2021, November). The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (pp. 42-57). https://aclanthology.org/2021.blackboxnlp-1.4/ - Arthur Domino
Finlayson, M., Mueller, A., Gehrmann, S., Shieber, S. M., Linzen, T., & Belinkov, Y. (2021, August). Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) (pp. 1828-1843). https://aclanthology.org/2021.acl-long.144/
Zhang, Y. (2020, November). Latent Tree Learning with Ordered Neurons: What Parses Does It Produce?. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (pp. 119-125). https://aclanthology.org/2020.blackboxnlp-1.11/
Adina Williams, Andrew Drozdov, and Samuel R. Bowman. 2018. Do latent tree learning models identify meaningful structure in sentences? Transactions of the Association for Computational Linguistics, 6:253–267. https://doi.org/10.1162/tacl_a_00019
Week 5 (September 26-): Multilingual representations
September 26: Student-led discussion on Syntactic Structure 2 - Recorded, watch on your own
Arthur Domino
September 26: Syntactic Structure 1 paper review due
September 28: Instructor-guided discussion on multilingual models - Recorded, watch on your own
Ethan A. Chi, John Hewitt, and Christopher D. Manning. 2020. Finding Universal Grammatical Relations in Multilingual BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5564–5577, Online. Association for Computational Linguistics. https://aclanthology.org/2020.acl-main.493/ - Sabiha Shaik
Taraka Rama, Lisa Beinborn, and Steffen Eger. 2020. Probing Multilingual BERT for Genetic and Typological Signals. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1214–1228, Barcelona, Spain (Online). International Committee on Computational Linguistics. https://aclanthology.org/2020.coling-main.105/
Tanti, M., van der Plas, L., Borg, C., & Gatt, A. (2021, November). On the Language-specificity of Multilingual BERT and the Impact of Fine-tuning. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (pp. 214-227). https://aclanthology.org/2021.blackboxnlp-1.15/
Blevins, T., & Zettlemoyer, L. (2022). Language Contamination Explains the Cross-lingual Capabilities of English Pretrained Models. arXiv preprint arXiv:2204.08110. https://arxiv.org/abs/2204.08110 - Sarah Sues
Email CJ about your final project plans!
Week 6 (October 3-): Semantic Knowledge 1
October 3: Student-led discussion on Multilingual representations
Sarah Sues
Sabiha Shaik
October 3: Syntactic Structure 2 paper review due
October 5: Instructor-guided discussion on lexical semantics - Discussion due October 4 at 10pm
Ivan Vulić, Edoardo Maria Ponti, Robert Litschko, Goran Glavaš, and Anna Korhonen. 2020. Probing Pretrained Language Models for Lexical Semantics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7222–7240, Online. Association for Computational Linguistics. https://aclanthology.org/2020.emnlp-main.586
Garí Soler, A., & Apidianaki, M. (2021). Let's play mono-poly: BERT can reveal words' polysemy level and partitionability into senses. Transactions of the Association for Computational Linguistics, 9, 825-844. https://direct.mit.edu/tacl/article-abstract/doi/10.1162/tacl_a_00400/106797 - Jacob Springborn
Aina Garí Soler and Marianna Apidianaki. 2020. BERT Knows Punta Cana is not just beautiful, it's gorgeous: Ranking Scalar Adjectives with Contextualised Representations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7371–7385, Online. Association for Computational Linguistics. https://aclanthology.org/2020.emnlp-main.598/ - Benjamin Conrow-Graham
Week 7 (October 10-): Semantic Knowledge 2
October 10: Student-led discussion on Semantic Knowledge 1
Jacob Springborn
Benjamin Conrow-Graham
October 10: Multilingual representations paper review due
October 12: Instructor-guided discussion on reasoning - Discussion due October 11 at 10pm
Nora Kassner, Benno Krojer, and Hinrich Schütze. 2020. Are Pretrained Language Models Symbolic Reasoners over Knowledge? In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 552–564, Online. Association for Computational Linguistics. https://aclanthology.org/2020.conll-1.45/
Bhagavatula, C., Le Bras, R., Malaviya, C., Sakaguchi, K., Holtzman, A., Rashkin, H., ... & Choi, Y. (2019, September). Abductive Commonsense Reasoning. In International Conference on Learning Representations. https://openreview.net/forum?id=Byg1v1HKDB
Nora Kassner and Hinrich Schütze. 2020. Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7811–7818, Online. Association for Computational Linguistics. https://aclanthology.org/2020.acl-main.698 - Venkata Sai Rohit Ayyagari
Week 8 (October 17-): Fine-tuning
October 17: Student-led discussion on Semantic Knowledge 2
Venkata Sai Rohit Ayyagari
October 17: Semantic Knowledge 1 paper review due
October 19: Instructor-guided discussion on fine-tuning language models - Discussion due October 18 at 10pm
Merchant, A., Rahimtoroghi, E., Pavlick, E., & Tenney, I. (2020, November). What Happens To BERT Embeddings During Fine-tuning?. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (pp. 33-44). https://aclanthology.org/2020.blackboxnlp-1.4/ - Sean Afridi
Marius Mosbach, Anna Khokhlova, Michael A. Hedderich, and Dietrich Klakow. 2020. On the Interplay Between Fine-tuning and Sentence-level Probing for Linguistic Knowledge in Pre-trained Transformers. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2502–2516, Online. Association for Computational Linguistics. https://aclanthology.org/2020.findings-emnlp.227/
Durrani, N., Sajjad, H., & Dalvi, F. (2021, August). How transfer learning impacts linguistic knowledge in deep NLP models?. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 4947-4957). https://aclanthology.org/2021.findings-acl.438/ - Shubham Pandey
Yu, L., & Ettinger, A. (2021, August). On the Interplay Between Fine-tuning and Composition in Transformers. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021 (pp. 2279-2293). https://aclanthology.org/2021.findings-acl.201/
Week 9 (October 24-): Bias 1
October 24: Student-led discussion on Fine-tuning
Sean Afridi
Shubham Pandey
October 24: Semantic Knowledge 2 paper review due
October 26: Instructor-guided discussion on identifying bias - Discussion due October 25 at 10pm
Ethayarajh, K., Duvenaud, D., & Hirst, G. (2019, July). Understanding Undesirable Word Embedding Associations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 1696-1705). https://aclanthology.org/P19-1166/
Victor Steinborn, Philipp Dufter, Haris Jabbar, and Hinrich Schuetze. 2022. An Information-Theoretic Approach and Dataset for Probing Gender Stereotypes in Multilingual Masked Language Models. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 921–932, Seattle, United States. Association for Computational Linguistics. https://aclanthology.org/2022.findings-naacl.69/
Zhao, J., Wang, T., Yatskar, M., Ordonez, V., & Chang, K. W. (2018, June). Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers) (pp. 15-20). https://aclanthology.org/N18-2003/
Week 10 (October 31-): Bias 2
October 31: Student-led discussion on Bias 1 - Happy Halloween! 🎃
October 31: Fine-tuning paper review due
November 2: Instructor-guided discussion on debiasing - Discussion due November 1 at 10pm
Cheng, P., Hao, W., Yuan, S., Si, S., & Carin, L. (2020, September). FairFil: Contrastive Neural Debiasing Method for Pretrained Text Encoders. In International Conference on Learning Representations. https://openreview.net/forum?id=N6JECD-PI5w
Liang, S., Dufter, P., & Schütze, H. (2020, December). Monolingual and multilingual reduction of gender bias in contextualized representations. In Proceedings of the 28th International Conference on Computational Linguistics (pp. 5082-5093). https://aclanthology.org/2020.coling-main.446/
Prost, F., Thain, N., & Bolukbasi, T. (2019, August). Debiasing Embeddings for Reduced Gender Bias in Text Classification. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing. https://aclanthology.org/W19-3810/
Week 11 (November 7-): Longer spans
November 7: Student-led discussion on Bias 2
November 7: Bias 1 paper review due
November 9: Instructor-guided discussion on longer spans - Discussion due November 8 at 10pm
Zhu, Z., Pan, C., Abdalla, M., & Rudzicz, F. (2020, November). Examining the rhetorical capacities of neural language models. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP (pp. 16-32). https://aclanthology.org/2020.blackboxnlp-1.3/
Kim, T., Choi, J., Edmiston, D., & Lee, S. G. (2019, September). Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction. In International Conference on Learning Representations. https://openreview.net/forum?id=H1xPR3NtPB
Holtzman, A., Buys, J., Du, L., Forbes, M., & Choi, Y. (2019, September). The Curious Case of Neural Text Degeneration. In International Conference on Learning Representations. https://openreview.net/forum?id=rygGQyrFvH
Week 12 (November 14-): Contextual representations
November 16: Instructor-guided discussion on contextual word representations - Discussion due November 15 at 10pm
Mengjie Zhao, Philipp Dufter, Yadollah Yaghoobzadeh, and Hinrich Schütze. 2020. Quantifying the Contextualization of Word Representations with Semantic Class Probing. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1219–1234, Online. Association for Computational Linguistics. https://aclanthology.org/2020.findings-emnlp.109
Ethayarajh, K. (2019, November). How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) (pp. 55-65). https://aclanthology.org/D19-1006/ - Vishal Rajasekar
November 21: Student-led discussion of Contextual representations
Vishal Rajasekar
November 21: Contextual Representations 1 paper review due
November 23: Fall break! No class!
Week 14 (November 28-): Geometry
November 28: Instructor-guided discussion on metrics for interpretability
Marvin Kaster, Wei Zhao, and Steffen Eger. 2021. Global Explainability of BERT-Based Evaluation Metrics by Disentangling along Linguistic Factors. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8912–8925, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. https://aclanthology.org/2021.emnlp-main.701/
Ferrando, J., Gállego, G. I., & Costa-jussà, M. R. (2022). Measuring the Mixing of Contextual Information in the Transformer. arXiv preprint arXiv:2203.04212. https://arxiv.org/abs/2203.04212
November 28: Contextual Representations 2 paper review due
November 30: Instructor-guided discussion on geometric properties of word vectors - Discussion due November 29 at 10pm
Chen, B., Fu, Y., Xu, G., Xie, P., Tan, C., Chen, M., & Jing, L. (2020, September). Probing BERT in Hyperbolic Spaces. In International Conference on Learning Representations. https://openreview.net/forum?id=17VnwXYZyhH
William Timkey and Marten van Schijndel. 2021. All Bark and No Bite: Rogue Dimensions in Transformer Language Models Obscure Representational Quality. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4527–4546, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. https://aclanthology.org/2021.emnlp-main.372/
Zhou, K., Ethayarajh, K., Card, D., & Jurafsky, D. (2022, May). Problems with Cosine as a Measure of Embedding Similarity for High Frequency Words. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) (pp. 401-423). https://aclanthology.org/2022.acl-short.45/
Chang, T. A., Tu, Z., & Bergen, B. K. (2022). The Geometry of Multilingual Language Model Representations. arXiv preprint arXiv:2205.10964. https://arxiv.org/abs/2205.10964
Week 15 (December 5-): Wrap-up
December 5: Final project presentations
December 7: Final project presentations
Paper presentation and discussion guidelines
60%: Presentation quality
Presentation is of sufficient length (~30 minutes of content or approximately 15-20 slides)
Clearly describes the goals of the paper
Describes the linguistic phenomena under question
Summarizes each of the experiments conducted in the paper and their results
Clearly states significance of the work with respect to the course questions
40%: Discussion quality
Highlights any uncertainty about the linguistic phenomena in the paper
Raises points for discussion from course discussion page
Raises additional points from own reading of the paper
Final project guidelines
A final project will help determine your mastery of the course material and refine your ability to initiate, design, analyze, and summarize unique research questions. The general topic of your project should be broadly related to the interpretability and probing literatures discussed in class. The work you do for your final project should be your own, not plagiarized or a pure replication of work completed elsewhere (e.g., data science blogs), and must come with an associated, thoughtful literature review detailing prior work and scientific motivation. You may attempt to replicate the results or reconstruct the models from a published ACL proceedings paper on the basis of that paper alone.
Students who wish to complete a capstone project via 667 should note that a thorough literature review and a fair, in-depth analysis of the results are critical for project completion, and that it is imperative to meet with the instructor or update them regularly, beginning early in the course. If you intend to complete a capstone project, all relevant paperwork must be completed on time following the CSE or LIN department guidelines, and you must tell the instructor during the first week of class that this is your intention.
If you have a disability and may require some type of instructional and/or examination accommodation, please inform me early in the semester so that we can coordinate the accommodations you may need. If you have not already done so, please contact the Office of Accessibility Services (formerly the Office of Disability Services), University at Buffalo, 60 Capen Hall, Buffalo, NY 14260-1632; email: stu-accessibility@buffalo.edu; phone: 716-645-2608 (voice) or 716-645-2616 (TTY); fax: 716-645-3116; web: http://www.buffalo.edu/studentlife/who-we-are/departments/accessibility.html. All information and documentation is confidential.
The University at Buffalo and the Graduate School of Education are committed to ensuring equal opportunity for persons with special needs to participate in and benefit from all of its programs, services and activities.
Academic Integrity:
Academic integrity is critical to the learning process. It is your responsibility as a student to complete your work in an honest fashion, upholding the expectations your individual instructors have for you in this regard. The ultimate goal is to ensure that you learn the content in your courses in accordance with UB's academic integrity principles, regardless of whether instruction is in-person or remote. Thank you for upholding your own personal integrity and ensuring UB's tradition of academic excellence.
It is expected that you will behave in an honorable and respectful way as you learn and share ideas. Therefore, recycled papers, work submitted to other courses, and major assistance in preparation of assignments without identifying and acknowledging such assistance are not acceptable. All work for this class must be original for this class. Please be familiar with the University and the School policies regarding plagiarism. Read the Academic Integrity Policy and Procedure for more information. Visit The Graduate School Policies & Procedures page (http://grad.buffalo.edu/succeed/current-students/policy-library.html) for the latest information.
Course Evaluations:
You will have two opportunities to provide anonymous feedback about the course. In the middle of the semester, I will send you a brief questionnaire asking about what activities are contributing to your learning and what might be done to improve your learning. At the conclusion of the semester you will receive an email reminder requesting your participation in the Course Evaluation process. Please provide your honest feedback; it is important to the improvement and development of this course. Feedback received is anonymous and I do not receive copies of the Evaluations until after grades have been submitted for the semester.
Counseling Services:
As a student you may experience a range of issues that can cause barriers to learning or reduce your ability to participate in daily activities. These might include strained relationships, anxiety, high levels of stress, alcohol/drug problems, feeling down, health concerns, or unwanted sexual experiences. Counseling, Health Services, and Health Promotion are here to help with these or other issues you may experience. You can learn more about these programs and services by contacting Counseling Services, Health Services, or Health Promotion.
UB is committed to providing a safe learning environment free of all forms of discrimination and sexual harassment, including sexual assault, domestic and dating violence, and stalking. If you have experienced gender-based violence (intimate partner violence, attempted or completed sexual assault, harassment, coercion, stalking, etc.), UB has resources to help. These include academic accommodations, health and counseling services, housing accommodations, help with legal protective orders, and assistance with reporting the incident to police or other UB officials if you so choose. Please contact UB's Title IX Coordinator at 716-645-2266 for more information. For confidential assistance, you may also contact a Crisis Services Campus Advocate at 716-796-4399.
Please be aware UB faculty are mandated to report violence or harassment on the basis of sex or gender. This means that if you tell me about a situation, I will need to report it to the Office of Equity, Diversity and Inclusion. You will still have options about how the situation will be handled, including whether or not you wish to pursue a formal complaint. Please know that if you do not wish to have UB proceed with an investigation, your request will be honored unless UB's failure to act does not adequately mitigate the risk of harm to you or other members of the university community. You also have the option of speaking with trained counselors who can maintain confidentiality. UB's Options for Confidentially Disclosing Sexual Violence provides a full explanation of the resources available, as well as contact information. You may call UB's Office of Equity, Diversity and Inclusion at 716-645-2266 for more information, and you have the option of calling that office anonymously if you would prefer not to disclose your identity.
Technology Recommendations
To participate effectively in this course, regardless of mode of instruction, the university recommends that you have access to a Windows or Mac computer with a webcam and a broadband connection. These minimum capabilities offer your best opportunity for success in the blended UB course delivery environment (in-person, hybrid, and remote).
Public health compliance in a classroom setting
UB student Behavioral Requirements in all Campus Public Spaces include:
Students are required to obtain and wear a high-quality, tight-fitting, high-filtration mask when aboard a UB bus or shuttle or in a clinical health care setting in accordance with current health and safety guidelines. Masks indoors and in other public campus settings are optional.
Students who are regularly on campus and not fully vaccinated are required to participate in surveillance testing.
Students are required to abide by New York State, federal, and Centers for Disease Control and Prevention (CDC) travel restrictions and precautionary quarantines.
Students are required to stay home if they are sick.
Students are required to follow campus and public health directives for isolation or quarantine.
Should a student need to miss class due to illness, isolation or quarantine, they are required to notify their faculty to make arrangements to make up missed work.
Living on campus is a privilege that comes with additional requirements. Residential students are required to follow specific Campus Living rules as outlined in the Campus Living Housing Agreement, the Guide to Campus Living and any posted signage.
Students dining at on-campus facilities are expected to follow posted information on any additional requirements specific to the dining environment.
Students are responsible for following any additional directives in settings such as labs, clinical environments etc.