NEWS
Chapter on innovative teaching in IST
18 August 2024
"The False Promises of Application-Driven Learning: Mathematical Thinking in Today’s Rapidly Evolving Technology Landscape" is out in Innovative Practices in Teaching Information Sciences and Technology: Further Experience Reports and Reflections, published by Springer Nature Switzerland. The chapter is available online here: https://link.springer.com/chapter/10.1007/978-3-031-61290-9_2
Paper on privacy in social media accepted to HCOMP
31 July 2024
"Toward context-aware privacy enhancing technologies for online self-disclosure" was accepted for publication and presentation at the 12th AAAI Conference on Human Computation and Crowdsourcing (HCOMP), to be held in Pittsburgh, PA, October 16-19, 2024. The paper was led by Masters student Tingting Du, advised by Dr. Rajtmajer and co-advised by Dr. Anna Squicciarini. The paper is part of a NSF-funded effort to enhance equity in privacy self-management.
Contributed talk accepted to ICSSI
30 April 2024
"Can LLMs discern evidence for scientific hypotheses? Case studies in the social sciences" has been selected for presentation at the International Conference on the Science of Science and Innovation, to be held at the National Academy of Sciences in Washington, D.C., July 1-3, 2024.
Short paper accepted to ACM REP
26 April 2024
"Can citations tell us about a paper's reproducibility? A case study of machine learning papers" has been accepted to the ACM REP conference, which will take place at Inria in France, in June 2024. The short paper is led by Rochana Obradage, a PhD student under the supervision of Jian Wu at Old Dominion University.
Paper on LLMs for science of science accepted to LREC-COLING
19 February 2024
"Can LLMs discern evidence for scientific hypotheses? Case studies in the social sciences" has been accepted to The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) in Torino, Italy, May 2024. The paper is led by lab member Sai Koneru and is a collaboration with Jian Wu at Old Dominion University. A preprint is up on arXiv: https://arxiv.org/abs/2309.06578
Papers accepted to WebSci
31 January 2024
"Inside the echo chamber: Linguistic underpinnings of misinformation on Twitter" and "Reacting to Generative AI: Insights from Student and Faculty Discussions on Reddit" have been accepted to ACM Web Science conference in Stuttgart, Germany, May 2024. The papers are led by lab member Ginny Wang and PhD student Chuhao Wu.
Paper on metrics of reproducibility for scholarly search accepted to CHI
18 January 2024
"Integrating measures of replicability into scholarly search: Challenges and opportunities" has been accepted to ACM (Association of Computing Machinery) CHI conference on Human Factors in Computing Systems in Honolulu, Hawaii, May 2024. The work is led by PhD student Chuhao Wu, advised by Jack Carroll, and lab member Tatiana Chakravorti. The preprint is available on arXiv: https://arxiv.org/abs/2311.00653
New collaboration with Center for Open Science and Robert Wood Johnson Foundation to advance AI for assessment of confidence in scholarly findings
14 November 2023
Dr. Rajtmajer and colleagues will work with the Center for Open Science to build AI-driven systems to assess reproducibility, replicability and generalizability of published scientific claims. The project is funded by the Robert Wood Johnson Foundation and extends the team's work with DARPA's SCORE program. More information is available here: https://www.cos.io/about/news/cos-expands-score-program-efforts
Paper on human-AI trust accepted to MULTITTRUST at HAI
13 November 2023
"Exploring trust and risk during online bartering interactions" has been accepted to the Workshop on Multidisciplinary Perspectives on Human-AI Team Trust at the International Conference on Human-Agent Interaction (HAI) in Gothenburg, Sweeden, December 2023. The work represents a foundational step to building effective human-AI collaboration in complex scenarios involving long-term planning.
Paper on science of science accepted to AI4SciSci at ICDM
01 October 2023
"Can machine learning algorithms predict publication outcomes? A case study of COVID-19 preprints" has been accepted to the AI for Science of Science Workshop at the International Conference on Data Mining in Shanghai, December 2023. The work is led by Sai Koneru, a PhD student in the lab and was co-advised by Jian Wu at Old Dominion University.
Paper on self-disclosure out in ACM Transactions on Social Computing
19 September 2023
"Online Self-Disclosure, Social Support, and User Engagement During the COVID-19 Pandemic" is out in ACM Transactions on Social Computing. The work is led by PhD student Jooyoung Lee and was co-advised by Shomir Wilson at Penn State.
New NSF funding to explore digital accountability of elected officials
20 August 2023
With Bruce Desmarais and Kevin Munger of Penn State and Yu-Ru Lin of Pitt, Dr. Rajtmajer and her team will study the accountability, transparency, and integrity of elected state legislators' public social media accounts. The project starts September 1, 2023 - more info here! https://www.nsf.gov/awardsearch/showAward?AWD_ID=2318460&HistoricalAwards=false
Paper on science of science accepted to PLoS ONE
15 June 2023
"The evolution of scientific literature as metastable knowledge states" has been accepted to PLoS ONE. The work is led by Sai Koneru, a PhD student in the lab and is the result of the multi-year VESPID project, funded by the NSF's NCSES.
Paper on few-shot learning accepted to CoLLAs 2023
16 May 2023
"Active Class Selection for Few-Shot Class-Incremental Learning" has been accepted to the Second Conference on Lifelong Learning Agents (CoLLAs). The work is part of our AFOSR project developing smart agents for human-agent interaction. The work is a collaboration with Alan Wagner and his lab.
Paper on disinformation out in Nature Scientific Reports!
12 May 2023
"Evidence of inter-state coordination amongst state-backed information operations" is out today in Nature Scientific Reports! https://www.nature.com/articles/s41598-023-34245-1
This one has been a passion project since the day I stepped on campus - finally brought to life thanks to three brilliant students (all female!).
Paper on self-disclosure accepted to UAI 2023
08 May 2023
"Content Sharing Design for Social Welfare in Networked Disclosure Game" has been accepted to the 39th Conference on Uncertainty in Artificial Intelligence (UAI). The work is a collaboration with Anna Squicciarini and her lab. The work leverages game theoretical modeling to explain privacy behaviors in social network environments.
Paper on reproducibility in AI accepted to ICDAR 2023
20 April 2023
"A Study on Reproducibility and Replicability of Table Structure Recognition Methods" has been accepted to the International Conference on Document Analysis and Recognition (ICDAR). The work is led by Jian Wu and his group at Old Dominion University and represents output from our DARPA SCORE project. A preprint is available here: https://arxiv.org/abs/2304.10439
Paper on hybrid prediction markets accepted to HHAI 2023
04 April 2023
"A prototype hybrid prediction market for estimating replicability of published work" is accepted to the 2nd International Conference on Hybrid Human-Artificial Intelligence (HHAI 2023). The work is led by PhD student Tatiana Chakravorti and presents pilot studies we have run embedding human trader within a system of bot traders for the task of replication prediction. Much more to come from these experiments; this paper offers a very special sneak peek.
Paper on hate and counter-hate speech on Twitter accepted to WebSci '23
31 January 2023 (Update: this paper was awarded Best Paper runner-up!)
"From Yellow Peril to Model Minority: Asian stereotypes in social media during the COVID-19 pandemic" has been accepted to the 15th ACM Web Science Conference. The work, led by PhD student Xinyu Wang, highlights ties between anti-Asian rhetoric and longstanding racist narratives. This paper has been a labor of love - emerging from the creativity and brilliance of of REU undergraduate student Maggie Wu, who worked with our group in Summer 2021.
New funding from NSF to study privacy and socioeconomic factors
26 January 2023
PI Rajtmajer and colleague Shomir Wilson have been awarded $600,000 over the next three years to study how socioeconomic status and group identity influence the challenges of privacy self-management. *Actively seeking graduate student(s) for this work.
https://www.nsf.gov/awardsearch/showAward?AWD_ID=2247723&HistoricalAwards=false
Paper on establishing truths from high-volume science in Brain Communications
08 December 2022
A collaboration with Frank Hillary and members of both of our labs, "Establishing ground truth in the traumatic brain injury literature: if replication is the answer, then what are the questions?" is out in Brain Communications. The paper is a metascientific and computational study of published findings in the brain injury literature. https://academic.oup.com/braincomms/article/5/1/fcac322/6882754
Position piece out in eLife
08 August 2022
An homage to Popper, "How failure to falsify in high-volume science contributes to the replication crisis" is published in eLife. The paper explores the role of falsification in advancing empirical science, particularly in the context of ever-increasing scientific output. The paper is a collaboration with Frank Hillary and Timothy Errington. https://elifesciences.org/articles/78830
Paper accepted to ICWSM 2023
15 July 2022
"Information Operations in Turkey: Manufacturing Resilience with Free Twitter Accounts" has been accepted to AAAI ICWSM 2023. The paper studies Turkish influence campaigns on Twitter and their strategies to remain robust in face of takedown. The work was led by lab alumna Maya Merhi. A preprint is available here: https://arxiv.org/abs/2110.08976
Paper accepted to ACM FAccT 2022
07 April 2022
"Locality of Technical Objects and the Role of Structural Interventions for Systemic Change" has been accepted to ACM FAccT 2022. The paper explores algorithmic fairness as it is situated within broader social structure and context. The paper is a collaboration with Efren Cruz Cortes and Debashis Ghosh.
Short paper accepted to ICWSM 2022
16 March 2022
"The contribution of verified accounts to self-disclosure in COVID-related Twitter conversations" has been accepted as a short paper to ICWSM 2022. The paper represents an exploration into the role of verified accounts (celebrities, public figures, organizations) in shaping of discourse around Covid-19 and in promoting the sharing of personal information, so-called self-disclosure. The paper is led by Masters student Tingting Du and collaborators Drs. Anna Squicciarini and Prasanna Umar.
Paper accepted to Sci-K @The Web Conf 2022
03 March 2022
"A Study of Computational Reproducibility using URLs Linking to Open Access Datasets and Software" has been accepted at the 2nd International Workshop on Scientific Knowledge: Representation, Discovery, and Assessment (Sci-K 2022) at The Web Conference. The paper is led by graduate student Lamia Salsabil and Dr. Jian Wu at Old Dominion University and as part of our ongoing effort to understand and model confidence in published claims with DARPA's SCORE program.
Paper accepted to ACM TWEB
01 March 2022
"An extended ultimatum game for multi party access control in social networks" is accepted to ACM Transactions on the Web. The work models the effects of repeated interactions amongst users sharing jointly-managed personal content. We use a game theoretic approach and formal analyses as well as user studies in a simulated social network environment. The work is joint with Dr. Anna Squicciarini and collaborators.
SCORE prototype accepted to AAAI 2022 Demonstrations Program
22 October 2021
"A Synthetic Prediction Market for Estimating Confidence in Published work" is accepted to the demonstrations program at AAAI 2022. The video submission showcases our prototype system, built for our SCORE program, which estimates confidence in published scholarly work in the social and behavioral sciences. This work lays the foundation for a research agenda that creatively uses AI for peer review.
Paper accepted to NeurIPS Workshop on Algorithmic Fairness through the Lens of Causality and Robustness
21 October 2021
"Structural Interventions on Automated Decision Making Systems" led by postdoc Efren Cruz Cortes with collaborator Debashis Ghosh, is accepted to NeurIPS AFCR 2021. The paper engages in a systemic analysis, aided by structural causal models, to compare social interventions to algorithmic interventions in automated decision system to identify bias outside the algorithmic stage and propose joint interventions on social dynamics and algorithm design.
Paper accepted to Network Neuroscience
16 September 2021
"Feeding the machine: challenges to reproducible predictive modeling in resting-state connectomics" led by graduate student Andrew Cwiek with collaborators Brad Wybe, Vasant Honavar and Frank Hillary, is accepted to Network Neuroscience. The paper is a review of the use of ML in the network neurosciences and provides recommendations to overcome common methodological pitfalls. A preprint is available here.
Paper accepted to Results in Control and Optimization
13 September 2021
"Design and Analysis of a Synthetic Prediction Market using Dynamic Convex Sets" led by graduate students Nishanth Nakshatri and Arjun Menon, with collaborators Lee Giles and Christopher Griffin, has been accepted to Results in Control and Optimization. The work outlines the theoretical underpinnings for the synthetic prediction market we are developing as part of our DARPA SCORE effort. The paper is available here.
Join us @Metascience 2021
9 September 2021
We'll be discussing our DARPA SCORE effort to develop AI for evaluating confidence in published research claims in the social and behavioral sciences. More here! Conference starts next Thursday -- exciting program all around!
Lead article in First Monday explores self-disclosure during COVID
1 July 2021
"A study of self-disclosure during the Coronavirus pandemic" is led by graduate student Taylor Blose and analyzes instances of self-disclosure over the last year highlighting a massive spike in personal information sharing at critical moments in the pandemic timeline. Find it here!
Paper accepted to ECML-PKDD 2021
29 June 2021
"Self-disclosure on Twitter during COVID-19 Pandemic: A Network Perspective" is led by graduate student Prasanna Umar and explores the phenomenon of self-privacy violations during the COVID pandemic from a network theoretic lens.
New award from AFOSR on cognitively-plausible artificial agents
23 June 2021
We've received funding with Dr. Alan Wagner for a new 3-year project entitled "Toward Cognitive Realism in Game Theoretic Models of Social Behavior." The project will computationally represent Construal Level Theory to develop agents that reason over representations of people, places and things with varying levels of abstraction. The work will offer a new perspective on experience-based reasoning and complement current efforts in reinforcement learning. *Actively seeking a graduate student for this work.
Read more here: https://www.eurasiareview.com/12082021-block-by-block-using-minecraft-to-advance-artificial-intelligence/
Paper introducing our DARPA SCORE collaboration up on SocArXiv
04 May 2021
The paper introduces the massive-scale effort to create and validate algorithms to provide confidence scores for research claims. See it here: https://osf.io/preprints/socarxiv/46mnb
Paper accepted to Sci-K @ The Web Conference 2021
17 February 2021
"Extraction and evaluation of statistical information from social and behavioral science papers" has been accepted to the Workshop on Scientific Knowledge at The Web Conference. The work presents a pipeline for extraction of statistical information (p-vals, sample size, number of hypotheses tested) from full-text scientific documents.
Paper accepted to PoPETS 2021
01 February 2021
Work with Dr. Shomir Wilson and graduate student Jooyoung Lee has been accepted to the Privacy Enhancing Technologies Symposium (PoPETS) 2021. "Digital inequality through the lens of self-disclosure" examines relationships between U.S.-based Twitter users' socio-demographic characteristics and their privacy behaviors.
Paper accepted to SDU @ AAAI-21
08 December 2020
"Understanding and predicting retractions of published work" has been accepted to the AAAI-21 Workshop on Scientific Document Understanding. The work develops a classifier to separate a set of retracted papers in the social and behavioral sciences with a comparable non-retracted set. The work is part of our DARPA SCORE effort and was led by graduate student Ajay Modukuri.
Two papers accepted to IEEE ISTAS
10 November 2020
Work with Dr. Caitlin Grady and graduate student Lauren Dennis has been accepted to IEEE's Symposium on Technology and Society (ISTAS). "When smart systems fail: the ethics of cyber-physical critical infrastructure risk" and "Analyzing cyber-physical threats to Pennsylvania dams through a lens of vulnerability" will be presented at the conference later this week. Both papers explore ethical implications of increasingly interdependent cyber-physical critical infrastructure.
Dr. Rajtmajer and colleagues receive award from the NSF's National Center for Science and Engineering Statistics (NCSES)
03 November 2020
Dr. Rajtmajer and colleagues at Quantitative Scientific Solutions, LLC have received funding from the National Science Foundation's NCSES to study trends in research and publishing, integrating data from the NSF's Survey of Doctorate Recipients. The broader aim of the project is to inform data-driven policies on scientific processes and funding, with a specific focus on underrepresented PhD recipients.
Paper accepted to IEEE Big Data
21 October 2020
"A Study of Self-Privacy Violations in Online Public Discourse" will appear at IEEE Big Data 2020. The work develops a supervised approach to detect instances of self-disclosure in public discourse, and explores when and how these behaviors conform to group norms. The work is led by graduate student Prasanna Umar and Anna Squicciarini.
Paper accepted to Findings of EMNLP
22 September 2020
Our paper, "A Semantics-based Approach to Disclosure Classification in User-Generated Online Content", will appear in Findings of EMNLP. The work presents an approach to detect emotional and information self-disclosure in natural language through the use of Semantic Role Labeling. The work is led by graduate student Chandan Akiti.
Paper accepted to Applied Network Science
21 August 2020
"Fragility of a multilayer network of international supply chains" has been accepted for publication in Applied Network Science. The paper explores propagation of economic shocks along intranational supply chains through network cascade modeling over multiregional input-output linkages. A measure of fragility is proposed, and economic sectors disrupted by COVID-19 are highlighted.
Dr. Rajtmajer and colleagues receive award from the NSF's RAPID program
27 May 2020
Dr. Rajtmajer and Dr. Anna Squicciarini have received funding through the National Science Foundation's RAPID program to study privacy risks emerging from online behavior during the Coronavirus pandemic. The work aims to better understand how trust is established in online settings and whether that process is expedited during crisis.
Dr. Rajtmajer and colleagues receive award from the Office of Naval Research
15 May 2020
Dr. Rajtmajer, Dr. Aiping Xiong, Dr. Christopher Griffin, and our collaborators at Quantitative Scientific Solutions (QS-2) have received funding through the Navy's STTR program to study and mitigate botnets on Twitter. The project, Sociolinguistic Information Filtering Tool (SIFT), leverages novel natural language processing, network science and game theoretic approaches to characterize classes of malicious accounts and provide users with more sophisticated options to filter this content.
Dr. Rajtmajer, Dr. Vasant Honavar, Dr. Daniel Susser, and Dr. Jose Soto receive award from the Institute for Computational and Data Sciences
07 May 2020
Dr. Rajtmajer and colleagues have been awarded seed funds from Penn State's Institute for Computational and Data Sciences for work to empirically evaluate causal inference-based tests of algorithmic fairness. These studies will inform new work on fairness and discrimination in AI.
New work on privacy during Coronavirus released on arXiv
21 April 2020
Dr. Rajtmajer and colleagues have shared a new study of self-disclosure on Twitter during the Coronavirus pandemic. We suggest that increased disclosure may serve support-seeking during crisis. A preprint is available here.
Paper on Cognitive Security accepted to the ACM Symposium on the Science of Security (HotSoS)
22 February 2020
"Automated Influence and the Challenge of Cognitive Security" has been accepted to ACM's Symposium on the Science of Security (HoTSoS 2020). In this work, Dr. Rajtmajer and Dr. Daniel Susser consider the threat of computational propaganda to national security and the ethical questions it raises.
New work on Russian Twitter Operations released on arXiv
27 January 2020
Dr. Rajtmajer and colleagues have shared new work revealing coordination in Russian influence operations on Twitter. The work uses a dynamical systems model to construct families of users with common harmonics and develops a taxonomy of strategic behavior. A preprint is available here.
Team to compete in AAAI shared task on affect understanding (AffCon 2020)
16 January 2020
Work led by Master's student Chandan Akiti will be presented at AffCon 2020 in February 2020. The team will compete in the Computational Linguistics Affect Understanding (CL-Aff) Shared Task on modeling interactive affective responses. The team has developed a novel approach for the joint classification of self-disclosure and supportiveness in short text, leveraging BERT, LSTM and CNN neural networks.
Dr. Rajtmajer and Dr. Shomir Wilson receive accelerator award for work on privacy and socioeconomic factors
03 January 2020
Dr. Rajtmajer and Dr. Shomir Wilson have been awarded seed funds from Penn State's Center for Social Data Analytics (C-SoDA) for joint work exploring the effects of socioeconomic factors on privacy behaviors online. Proposed work is motivated by emerging evidence that current practices for informing technology users about privacy disproportionately fail to help the very populations that are most vulnerable.
Team selected for DARPA's SCORE program
23 October 2019
Dr. Rajtmajer and colleagues (Penn State, Texas A&M, Old Dominion, Microsoft Research) have been selected by the Defense Advanced Research Projects Agency (DARPA) to develop AI to score the credibility of research claims in the social and behavioral sciences. The team will build synthetic prediction markets populated by artificial agents (bot traders) that reason on information extracted from published papers and metadata. See the article in the Penn State News.
Paper accepted to IEEE Big Data 2019
17 October 2019
"Toward Image Privacy Classification and Spatial Attribution of Private Content" has been accepted to IEEE Big Data 2019. The work extends the problem of determining a single privacy label for a given image to jointly inferring a privacy label and detecting the specific areas of sensitive content within a privately labeled image. The paper is joint work with Drs. Haoti Zhong, Anna Squicciarini and David Miller.
Paper accepted to Complex Networks 2019
7 October 2019
"Performance of a Multiplex Commodity Flow Network in the United States Under Disturbance" has been accepted to Complex Networks 2019. The work furthers understanding of supply chain and commodity trade networks through construction and analysis of the interstate input-out multiplex network of the U.S. commodity and service sectors. The paper is joint work with Dr. Caitlin Grady (Penn State) and Dr. Alfonso Mejia (Penn State).
Paper accepted to CIKM 2019
9 August 2019
"Rating Mechanisms for Sustainability of Crowdsourcing Platforms" has been accepted to CIKM 2019. The paper, in collaboration with Dr. Chexi Qiu (Rowan University) and Dr. Anna Squicciarini (Penn State) introduces rating mechanisms to evaluate the behavior of task requesters in crowdsourcing platforms. We take a game theoretic approach, validated on data from Amazon Mechanical Turk.
Paper accepted to GAMESEC 2019
28 July 2019
"Power Law Public Goods Game for Personal Information Sharing in News Comments" has been accepted to GAMESEC 2019. The work proposes a public goods game model of user sharing in an online commenting forum. In particular, we assume that users who share personal information incur an information cost but reap the benefits of a more extensive social interaction. Freeloaders benefit from the same social interaction but do not share personal information. A preprint of the paper is available on the arXiv: https://arxiv.org/abs/1906.01677.
Dr. Rajtmajer and Dr. Anna Squicciarini awarded College of IST seed grant
13 June 2019
Dr. Rajtmajer and Dr. Anna Squicciarini are amongst eight recipients of funding from the College of IST’s Seed Grant Program. The project, entitled "Game-theoretic Modeling of Individual Self-disclosure Online," aims to formalize models of information sharing and privacy behaviors in public online environments.
Dr. Rajtmajer and Dr. Ben Johnson receive award from Institute for Cyberscience
16 April 2019
Dr. Rajtmajer and Dr. Ben Johnson have received seed funding from Penn State's Institute for Cyberscience for proposed work, "Leveraging AI for Game-Theoretic Models of Judicial Decision Making". The project will explore rich historical data to better understand and model the mechanisms and impacts of judicial decision making.
Paper accepted to SIAM Journal on Applied Dynamical Systems
14 February 2019
Work with Dr. Christopher Griffin, Dr. Anna Squicciarini and Dr. Andrew Belmonte has been accepted to SIAM Journal on Applied Dynamical Systems (SIADS). The work, entitled "Consensus and Information Cascades in Game-Theoretic Imitation Dynamics with Static and Dynamic Network Topologies," provides a game-theoretic framework for modeling information spread online. A preprint is available on the arXiv: https://arxiv.org/abs/1903.11429.
Paper accepted to WWW 2019
21 January 2019
Work with Prasanna Umar and Dr. Anna Squicciarini has been accepted as a short paper to The Web Conference (WWW) 2019. The work, entitled "Detection and Analysis of Self-Disclosure in Online News Commentaries," presents a novel approach to the detection of language indicative of various types of self-disclosure, leveraging both syntactic and semantic information present in text.
Dr. Rajtmajer and Dr. Daniel Susser receive award from Center for Security Research and Education
20 January 2019
Dr. Rajtmajer and Dr. Daniel Susser have been awarded seed funding from Penn State's Center for Security Research and Education. The project, "Understanding the Impact of Self-Disclosure on Cognitive Security", will explore the contexts which prime individuals to self-disclose online in order to better understand the potential weaponization of personal information. The 1-year effort will engage graduate and undergraduate students from the College of IST, the Department of Philosophy, and across campus.