Many couples experience long-distance relationships (LDRs), and "couple technologies" have been designed to influence certain relational practices or to maintain them in challenging situations.

Chatbot systems, despite their popularity in today's HCI and CSCW research, fall short for one of two reasons: 1) many of the systems use a rule-based dialog flow, so they can only respond to a limited number of pre-defined inputs with pre-scripted responses; or 2) they are designed with a focus on single-user scenarios, so it is unclear how these systems may affect other users or the community. In this paper, we develop a generalizable chatbot architecture (CASS) to provide social support for community members in an online health community. The CASS architecture is based on advanced neural network algorithms, so it can handle new inputs from users and generate a variety of responses to them. With a follow-up field experiment, CASS proves useful in supporting individual members who seek emotional support. Our work also contributes to filling the research gap on how a chatbot may influence the engagement of the whole community. Finally, CASS is generalizable, as it can be easily migrated to other online communities.

With the widespread use of toxic language online, platforms increasingly use automated systems that leverage advances in natural language processing to automatically flag and remove toxic comments. However, most automated systems, when detecting and moderating toxic language, do not provide feedback to their users, let alone an avenue of recourse for those users to make actionable changes. We present our work, RECAST, an interactive, open-sourced web tool for visualizing these models' toxic predictions while providing alternative suggestions for flagged toxic language. RECAST highlights the text responsible for a toxicity classification and allows users to interactively substitute potentially toxic phrases with neutral alternatives. We examined the effect of RECAST via two large-scale user evaluations and found that RECAST was highly effective at helping users reduce toxicity as detected by the model. Users also gained a stronger understanding of the underlying toxicity criterion used by black-box models, enabling transparency and recourse. In addition, we found that when users focus on optimizing language for these models instead of their own judgement (which is the implied incentive and goal of deploying automated models), these models cease to be effective classifiers of toxicity compared with human annotations. Our work thus provides users with a new path of recourse when using these automated moderation tools, and opens a discussion of how toxicity detection models work and should work, and of their effect on the future of online discourse.
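As a rough illustration of the highlight-and-revise idea behind RECAST (not its actual implementation, which is released separately), the sketch below scores a comment with an off-the-shelf toxicity classifier and attributes the prediction to individual words by leave-one-out deletion. The `unitary/toxic-bert` checkpoint and the deletion-based attribution are assumptions made for this example only.

```python
# Minimal sketch of token-level toxicity attribution: score each word's
# contribution to the prediction by deleting it and re-scoring the comment.
# Model choice and attribution method are illustrative, not RECAST's own.
from transformers import pipeline

# Any classifier that returns a toxicity probability will do; this public
# checkpoint is just one example.
clf = pipeline("text-classification", model="unitary/toxic-bert")

def toxicity(text):
    # The pipeline returns the highest-scoring label; for this multi-label
    # checkpoint, that score serves as a rough overall toxicity probability.
    return clf(text)[0]["score"]

def word_attributions(text):
    # Leave-one-out: how much does the toxicity score drop when a word is removed?
    base = toxicity(text)
    words = text.split()
    scores = []
    for i, word in enumerate(words):
        ablated = " ".join(words[:i] + words[i + 1:])
        scores.append((word, base - toxicity(ablated)))
    return scores

if __name__ == "__main__":
    comment = "you are a complete idiot and nobody wants you here"
    for word, delta in word_attributions(comment):
        marker = "  <- highlight" if delta > 0.1 else ""
        print(f"{word:12s} {delta:+.3f}{marker}")
```

A fuller tool would go on to suggest neutral substitutes for the highlighted words, for instance by querying a masked language model for candidate replacements and keeping only those that lower the toxicity score.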
Hypertextual in nature, the Web in its earliest form was technically limited and not capable of using the full richness of hypertext available at the time. Despite subsequent advances in Web technology, some of the older hypertextual capabilities remain unrealised, and hypertext/media appears to be treated more as a technology than as a medium. For a hypertext docuverse that holds changing information, such as a knowledge base, paying heed to its hypertextual structure aids the long-term health and sustainability of the knowledge it contains. Wikipedia is the world's largest public hypertext knowledge base. Constantly updated by humans and bots, it is an ever-changing knowledge store. Using Wikipedia as a context, this thesis investigates whether large collaborative hypertexts show signs of their contributors using deliberate hypertextual structure or simply connecting 'pages' of digital content. The research also considers collaborative hypertexts in the context of social machines, with regard to sustaining organisational knowledge as hypertext content. The results reveal under-use of the processes available to sustain and improve an organisation's docuverse, and a gap in the organisational roles and skill-sets needed to apply those processes.
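The abstract does not spell out the thesis's method, but the question it poses, deliberate hypertextual structure versus loosely connected pages, can be made concrete with simple link-structure measurements. The sketch below is purely illustrative: it uses the public MediaWiki API (`prop=links`) to fetch outgoing links for a handful of related articles and reports how densely the set links back into itself. The chosen metric, article set, and endpoint are assumptions for the example, not the thesis's methodology.

```python
# Illustrative sketch: fetch outgoing wiki links for a small set of articles and
# measure how densely the set links back into itself, a crude proxy for
# deliberate hypertextual structure among related pages.
import requests

API = "https://en.wikipedia.org/w/api.php"
HEADERS = {"User-Agent": "hypertext-structure-sketch/0.1"}

def outgoing_links(title):
    # Return (up to 500) article-namespace links from one Wikipedia page;
    # API continuation for longer pages is omitted to keep the sketch short.
    params = {
        "action": "query",
        "prop": "links",
        "titles": title,
        "plnamespace": 0,
        "pllimit": "max",
        "format": "json",
    }
    data = requests.get(API, params=params, headers=HEADERS).json()
    page = next(iter(data["query"]["pages"].values()))
    return {link["title"] for link in page.get("links", [])}

def internal_link_density(titles):
    # Fraction of possible within-set links that actually exist.
    links = {t: outgoing_links(t) for t in titles}
    internal = sum(len(links[t] & set(titles)) for t in titles)
    possible = len(titles) * (len(titles) - 1)
    return internal / possible if possible else 0.0

if __name__ == "__main__":
    cluster = ["Hypertext", "Ted Nelson", "Project Xanadu", "World Wide Web"]
    print(f"within-set link density: {internal_link_density(cluster):.2f}")
```

A deliberately structured cluster of articles would be expected to show a noticeably higher within-set density than an arbitrary sample of unrelated pages of similar size.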