Dwight JOLLY

My name is Dwight Jolly, and my research focuses on human attitudes toward artificial intelligence, particularly in the context of social acceptance, ethical perception, and trust in automated systems. With a background in psychology and human-computer interaction, I have been especially interested in how individuals form general impressions of AI, often before any direct interaction. Given the increasing integration of AI into daily life, from healthcare and education to finance and entertainment, understanding public sentiment is no longer optional but foundational for responsible AI development and deployment.

To address this need, I have developed a preliminary psychometric scale designed to measure general attitudes toward AI across diverse populations. Scale construction followed a rigorous process of item generation, expert review, exploratory factor analysis, and internal consistency testing. Items assess cognitive, emotional, and behavioral dimensions of attitude, including perceived usefulness, threat, moral acceptability, and willingness to interact. Pilot data have been collected across multiple demographic segments, with early results showing high reliability and promising construct validity. This tool offers a foundational framework for future longitudinal and cross-cultural studies.

The development of a validated attitude scale fills a crucial gap in both AI research and social science literature. While many domain-specific tools exist—for example, measuring trust in medical or military AI—there has been limited effort to systematically assess general public attitudes across contexts. My work contributes to a broader understanding of how people mentally position AI technologies, which in turn influences policy, adoption behavior, and ethical design. It can also inform risk communication, educational outreach, and regulatory alignment by revealing latent fears, hopes, and expectations tied to AI systems.

My ongoing mission is to refine and expand the Attitudes Toward AI Scale for broader cultural applicability and integration with behavioral outcome data. I am currently working on validating the tool across different languages and socio-economic contexts, and linking attitude profiles with real-world technology usage patterns. Ultimately, I hope to contribute to a more human-centered approach to AI deployment—where technical advancement is guided by deep insights into public values, psychological readiness, and societal trust. Through empirical validation and interdisciplinary collaboration, I aim to give voice to the human perspective in the AI era.

Traceable Explanations: The development workflow requires audit trails for chain-of-thought reasoning and optimization suggestions. GPT-4 fine-tuning allows metadata tokens (e.g., <dimension>, <complexity>) to be injected into training data and reasoning notes to be generated automatically alongside content, capabilities that GPT-3.5 lacks.
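As a sketch of what metadata-token injection could look like, the snippet below assembles one fine-tuning example in a JSONL chat format. The `build_example` helper, the field values, and the exact target layout are all illustrative assumptions; only the <dimension> and <complexity> token names come from the workflow described above.

```python
import json

# Illustrative sketch: wrap each generated scale item with metadata tokens
# so the fine-tuned model learns to emit an auditable reasoning note
# alongside the item. All field values below are hypothetical.

def build_example(dimension, complexity, prompt, item, reasoning):
    # Metadata tokens are prepended to the target completion so they are
    # learned as part of the output and remain traceable after generation.
    target = (
        f"<dimension>{dimension}</dimension>"
        f"<complexity>{complexity}</complexity>\n"
        f"Item: {item}\n"
        f"Reasoning: {reasoning}"
    )
    return {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": target},
        ]
    }

example = build_example(
    dimension="emotional",
    complexity="low",
    prompt="Write a neutral Likert item about comfort with AI assistants.",
    item="I feel at ease when an AI assistant handles my request.",
    reasoning="Targets affective attitude; avoids loaded terms.",
)
line = json.dumps(example)  # one line of a fine-tuning JSONL file
```

Because the metadata tokens live inside the supervised completion, every generated item carries its own dimension label and reasoning note, which is what makes the audit trail possible.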

Fairness Regularization: To address variation in lexical understanding across demographic groups (age, education, technical literacy), we introduce contrastive and fairness regularizers during fine-tuning to ensure uniform generation performance across groups. At its smaller scale, GPT-3.5 suffers catastrophic forgetting under these constraints and fails at multi-objective optimization.
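A minimal sketch of the fairness term in such a multi-objective loss is shown below: the penalty is the variance of per-group mean losses, which is zero when performance is uniform across groups. The group names, loss values, and weighting are illustrative assumptions, and the contrastive term is omitted for brevity.

```python
# Sketch of a fairness regularizer: penalize the spread of per-group
# losses so no demographic group is systematically served worse.
# Group names and loss values are illustrative, not real training data.

def fairness_penalty(group_losses, weight=1.0):
    """Variance of per-group mean losses; zero when performance is uniform."""
    means = [sum(ls) / len(ls) for ls in group_losses.values()]
    overall = sum(means) / len(means)
    return weight * sum((m - overall) ** 2 for m in means) / len(means)

def total_loss(task_loss, group_losses, weight=0.5):
    # Multi-objective training signal: task objective plus fairness term.
    return task_loss + fairness_penalty(group_losses, weight)

batch = {
    "age_18_29": [0.42, 0.38, 0.40],
    "age_60_plus": [0.55, 0.61, 0.58],
    "low_tech_literacy": [0.70, 0.66],
}
loss = total_loss(task_loss=0.50, group_losses=batch)
```

Minimizing this combined loss pushes the model to close the gap for the worst-served groups rather than optimizing average performance alone, which is the multi-objective pressure under which a smaller model is more prone to catastrophic forgetting.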

Given the high stakes of large-scale surveys and policy decisions, the stringent requirements for item accuracy, neutrality, explainability, and cross-cultural consistency can only be met through GPT-4's large-scale fine-tuning capabilities, ensuring instrument quality and research validity.