The Potential To Use Artificial Intelligence For Large-scale Field Experiments
- Post by: Junyi Li
- July 1, 2021
All the while, I have been wishing that my college classmates would one day become senior managers in companies so that I could achieve “data-driven freedom”.
For a junior researcher, access to data is always a huge challenge; there is much helplessness among Ph.D. students, who complain about making bricks without straw. So, to let my ideas grow again, I have been contemplating a potential solution that bypasses the data gatekeepers, who are unlikely to respond to requests for data access unless there is a clear business purpose. I hope these ideas might get their chance to improve data analysis and paper revisions.
Back to the point: online communities are an important context in the IS literature. Traditionally, researchers need to contact internal managers for approval to run an experiment, known in industry as an A/B test. Alternatively, researchers may conduct lab experiments, but with limited sample sizes. However, few people conduct this kind of field experiment from outside the company. For example, with a crawler it is easy to collect a list of users to be treated along with an associated/matched control group. It is then possible to use artificial intelligence algorithms (e.g., chatbots) to treat those users. After the treatment, subjects who never realize they are in an experiment can be tracked for subsequent behavioral changes. The point of introducing artificial intelligence is to carry out valid operations automatically, disguised as natural occurrences. We might achieve the same effect as manual operation but at a significantly lower cost, and because of the lower cost, field experiments can be launched at a massive scale and with high frequency. This method is inspired by an e-commerce paper in which the authors create fake reviews via an automatic program in order to introduce an exogenous shock to the rank/position of reviews. (Unfortunately, I have tried my best to find this paper again and failed. If I come across it again, I will add the citation here.)
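To make the pipeline concrete, here is a minimal sketch in Python. Everything in it is a stand-in of my own invention: crawl_candidates, send_mention, and get_engagement are hypothetical placeholders for the crawler, the posting bot, and the follow-up crawl that a real study would have to implement (and, as discussed below, clear ethically).

```python
import random

random.seed(42)  # fixed seed so the assignment is reproducible

def crawl_candidates():
    """Placeholder for a crawler that collects candidate user IDs."""
    return [f"user_{i}" for i in range(200)]

def send_mention(user_id, text):
    """Placeholder for the automated treatment (e.g., posting '@user ...')."""
    pass  # a real bot would post through the platform's public interface

def get_engagement(user_id):
    """Placeholder for re-crawling a user's post-treatment activity."""
    return random.random()  # stand-in outcome, just to make the sketch run

users = crawl_candidates()
random.shuffle(users)  # randomize before splitting into groups
half = len(users) // 2
treated, control = users[:half], users[half:]

for u in treated:
    send_mention(u, "machine-generated text matched to the user's interests")

# After a tracking window, compare mean engagement across the two groups.
lift = (sum(get_engagement(u) for u in treated) / len(treated)
        - sum(get_engagement(u) for u in control) / len(control))
print(f"estimated effect of exposure on engagement: {lift:+.3f}")
```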
For example, to explore whether increased user exposure leads to increased user engagement (i.e., social exposure -> user engagement), we can post machine-generated text that mentions the target user (e.g., @user). Since we do not want to spam users in order to increase their exposure, we need to generate text that matches each user's interests, which can be learned from the user's active groups (copying content from those groups might also work). Another research question in my mind is whether opposing voices increase user engagement (i.e., social friction -> user engagement). To address it, we can randomly send users comments that contradict them, learned from existing opposing voices. Further, there may be heterogeneity across types of opposing voices, such as technical objections versus emotional objections. Ultimately, the key is that it is possible to bypass the platform's permission and build an application layer on top of what the platform offers. By application layer, I mean the additional manipulations, such as user selection and treatment, that we can introduce on top of the platform's existing functionality (e.g., sending messages) and data state (e.g., user connections).
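For the social-friction question, a hypothetical multi-arm sketch looks like this: users are randomized across a control arm and two objection styles, so any heterogeneity between technical and emotional objections is identified by design. The arm names and seed here are illustrative, not part of the original idea.

```python
import random

ARMS = ["control", "technical_objection", "emotional_objection"]

def assign_arms(user_ids, seed=7):
    """Randomly assign each crawled user to one experimental arm."""
    rng = random.Random(seed)
    return {uid: rng.choice(ARMS) for uid in user_ids}

assignment = assign_arms([f"user_{i}" for i in range(9)])
for user, arm in assignment.items():
    print(user, "->", arm)
```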
This method naturally has its limitations. Compared to experiments conducted inside a platform, we cannot access the log data, which would offer far more detailed and accurate insights. I also want to highlight the potential ethical issues of conducting such experiments for academic purposes. For instance, we cannot send nasty comments or anything else that might provoke extreme reactions from users.
Is it OK to conduct social experiments for research purposes?