<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>DM2 Lab: Data Mining towards Decision Making</title>
<!-- Favicon -->
<link rel="shortcut icon" href="images/favicon.gif" />
<!-- Standard reset, fonts and grids -->
<link rel="stylesheet" type="text/css" href="styles/reset-fonts-grids.css" />
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css" />
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/font-awesome/4.4.0/css/font-awesome.min.css" />
<!-- styles for the whole website -->
<link href="styles/styles.css" rel="stylesheet" type="text/css" />
<!-- scripts -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
<script src="http://www.francois-petitjean.com/main.js" type="text/javascript"></script>
</head>
<body class="yui-skin-sam" id="yahoo-com">
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>
<div id="doc" class="yui-t1">
<div id="hd">
<div id="header"><a href="https://www.nd.edu/"><img id="nd-logo" src="images/logo_nd.png" alt="CSE@NotreDame" /></a></div>
</div>
<h1>NSF CAREER: Synergistic Approaches for Specialized Intelligent Assistance </h1>
<h2>Project Description (NSF IIS-2142827)</h2>
<div>Intelligent assistance systems currently lack the in-depth knowledge needed to automatically provide effective responses in specialized domains, such as emotional support on social media. Manually creating specialized knowledge bases or a one-size-fits-all model is expensive and infeasible. Existing research on intelligent assistance systems tackles three important sub-problems: user modeling, information extraction, and text generation. However, it treats these problems as separate and addresses them with separate methods. The underlying assumption is that there is no need to share the information required by, or the knowledge learned from, one sub-problem with the others. For instance, knowledge bases and knowledge graphs are assumed to need little or no expansion by information extraction methods because they already contain all the facts; similarly, language models are assumed to be well trained for generating answers to factual questions and to need no information from the other methods. This assumption is too simplistic and does not hold for specialized intelligent assistance. This project addresses the limitation by discovering and utilizing synergies among user modeling, information extraction, and text generation. The PI designs, develops, and evaluates novel algorithms to assist individuals who suffer from anxiety, depression, and other mental health issues and who seek help on social media. Furthermore, this research supports the cross-disciplinary development of a diverse cohort of PhD and undergraduate students at Notre Dame.</div>
<br />
<div>The proposed algorithms enhance one another by sharing knowledge. The technical aims of the project are divided into three thrusts. The first thrust develops novel information extraction methods to enhance the construction of mental health ontologies from social media data. These methods convert unstructured social media data into structured data for efficient retrieval and learning. The second thrust develops novel natural language generation techniques that create textual responses informed by user models and ontologies, enabling personalization and knowledge awareness. The third thrust enhances user models with novel contextualized representation learning algorithms that learn from user behavior data and structured knowledge. The proposed algorithms preserve the spatio-temporal behavioral patterns of users and their generated content to more precisely reflect users' situations and needs.</div>
<br />
<div>We are grateful for NSF support to make this project possible!</div>
<h2>Faculty</h2>
<table>
<tr>
<td width="150" height="155">
<img src="lab/images/meng.jpg" height="150" alt="Meng Jiang" />
</td>
<td width="600">
<div><a href="http://www.meng-jiang.com">Meng Jiang</a></div>
</td>
</tr>
</table>
<h2>Research Assistants</h2>
<table>
<tr>
<td width="150" height="155">
<img src="lab/images/lingbo.jpg" height="150" alt="Lingbo Tong" />
</td>
<td width="850">
<div><a href="https://psychology.nd.edu/graduate-students/lingbo-tong/">Lingbo Tong</a></div>
</td>
</tr>
</table>
<table>
<tr>
<td width="150" height="155">
<img src="lab/images/hy.jpg" height="150" alt="Hy Dang" />
</td>
<td width="850">
<div><a href="https://www.hygiadang.com/">Hy Dang</a></div>
</td>
</tr>
</table>
<table>
<tr>
<td width="150" height="155">
<img src="lab/images/mengxia.jpg" height="150" alt="Mengxia Yu" />
</td>
<td width="850">
<div><a href="https://scholar.google.com.pr/citations?user=9d9qJt8AAAAJ">Mengxia Yu</a></div>
</td>
</tr>
</table>
<table>
<tr>
<td width="150" height="155">
<img src="lab/images/wenhao.jpg" height="150" alt="Wenhao Yu" />
</td>
<td width="850">
<div><a href="https://wyu97.github.io/">Wenhao Yu</a></div>
</td>
</tr>
</table>
<table>
<tr>
<td width="150" height="155">
<img src="lab/images/weike.jpg" height="150" alt="Weike Fang" />
</td>
<td width="850">
<div><a href="https://www.linkedin.com/in/weikefang/">Weike Fang</a>: REU</div>
</td>
</tr>
</table>
<h2>Broader Impact: High School Students</h2>
<ul>
<li class="O"><a href="#">Albert Lu</a>: Culver Academies</li>
<li class="O"><a href="#">Ishita Masetty</a>: Penn High</li>
<li class="O"><a href="#">Jake Ciliberti</a>: Penn High</li>
</ul>
<h2>Publications</h2>
<ul>
<li class="O"><a href="#">Modality-Aware Neuron Pruning for Unlearning in Multimodal Large Language Models</a>
<i>Annual Meeting of the Association for Computational Linguistics (<b>ACL</b>)</i>, 2025.
</li>
<li class="O"><a href="#">Disentangling Biased Knowledge from Reasoning in Large Language Models via Machine Unlearning</a>
<i>Annual Meeting of the Association for Computational Linguistics (<b>ACL</b>)</i>, 2025.
</li>
<li class="O"><a href="https://arxiv.org/abs/2410.22108">Protecting Privacy in Multimodal Large Language Models with MLLMU-Bench</a>
<i>Annual Conference of the North American Chapter of the Association for Computational Linguistics (<b>NAACL</b>)</i>, 2025.
</li>
<li class="O"><a href="https://arxiv.org/abs/2502.08745">IHEval: Evaluating Language Models on Following the Instruction Hierarchy</a>
<i>Annual Conference of the North American Chapter of the Association for Computational Linguistics (<b>NAACL</b>)</i>, 2025.
</li>
<li class="O"><a href="https://arxiv.org/abs/2406.10471">Personalized Pieces: Efficient Personalized Large Language Models through Collaborative Efforts</a>
<i>Conference on Empirical Methods in Natural Language Processing (<b>EMNLP</b>)</i>, 2024.
</li>
<li class="O"><a href="https://arxiv.org/abs/2402.04401">Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning</a>
<i>Conference on Empirical Methods in Natural Language Processing (<b>EMNLP</b>)</i>, 2024.
</li>
<li class="O"><a href="https://arxiv.org/abs/2402.10058">Towards Safer Large Language Models through Machine Unlearning</a>
Findings of the <i>Annual Meeting of the Association for Computational Linguistics (<b>ACL</b>)</i>, 2024.
</li>
<li class="O"><a href="https://knowledge-nlp.github.io/aaai2023/papers/006-MHKG-oral.pdf">Improving Mental Health Support Response Generation with Event-based Knowledge Graph</a>
<i>Workshop on Knowledge-Augmented Methods for NLP (KnowledgeNLP)</i> at
<i>AAAI Conference on Artificial Intelligence (AAAI)</i>, 2023.
</li>
<li class="O"><a href="https://arxiv.org/abs/2209.10063">Generate rather than Retrieve: Large Language Models are Strong Context Generators</a>
<i>International Conference on Learning Representations (<b>ICLR</b>)</i>, 2023.
</li>
<li class="O"><a href="https://arxiv.org/abs/2204.03508">A Survey of Multi-task Learning in Natural Language Processing: Regarding Task Relatedness and Training Methods</a>
<i>Conference of the European Chapter of the Association for Computational Linguistics (<b>EACL</b>)</i>, 2023.
</li>
<li class="O"><a href="https://aclanthology.org/2022.emnlp-main.43/">A Unified Encoder-Decoder Framework with Entity Memory</a>
<i>Conference on Empirical Methods in Natural Language Processing (<b>EMNLP</b>)</i>, 2022.
</li>
<li class="O"><a href="https://aclanthology.org/2022.emnlp-main.294/">Retrieval Augmentation for Commonsense Reasoning: A Unified Approach</a>
<i>Conference on Empirical Methods in Natural Language Processing (<b>EMNLP</b>)</i>, 2022.
</li>
<li class="O"><a href="https://aclanthology.org/2022.findings-acl.149/">Diversifying Content Generation for Commonsense Reasoning with Mixture of Knowledge Graph Experts</a>
Findings of the <i>Annual Meeting of the Association for Computational Linguistics (<b>ACL</b>)</i>, 2022.
</li>
<li class="O"><a href="https://aclanthology.org/2022.findings-acl.150/">Dict-BERT: Enhancing Language Model Pre-training with Dictionary</a>
Findings of the <i>Annual Meeting of the Association for Computational Linguistics (<b>ACL</b>)</i>, 2022.
</li>
</ul>
<br /><br /><br /><br /><br />
<br /><br /><br /><br /><br />
<table>
<tr>
<td width="100" height="100">
<img src="images/nsf.jpg" width="100" alt="NSF logo" />
</td>
<td width="100" height="100">
<img src="images/ndengineering.png" width="100" alt="Notre Dame College of Engineering logo" />
</td>
</tr>
</table>
</div>
<br /><br /><br /><br /><br />
</body>
</html>