Regulating AI
- Post by: Jungpil Hahn
- June 16, 2021
On June 9, I participated as a commentator at the Seminar on Law and Technology (SLATE) at the Center for Technology, Robotics, Artificial Intelligence & the Law (TRAIL) at NUS Faculty of Law. The topic of the seminar was “AI: What, When, and How to Regulate? With Lessons from Estonia”. (clickthrough for a report on the event from NUS Faculty of Law)
The seminar was largely based on two papers recently published in a special issue on Law and Technology of the Singapore Academy of Law Journal. The first, by Professor Simon Chesterman (Dean of NUS Faculty of Law), discussed the challenges of regulating high-speed algorithms, given that some existing legal practices have relied on practical obscurity or practical friction. A good example is competition law: price collusion is illegal, but crawlers and bots can now retrieve competitors’ prices in real time, and algorithms can set prices based on this information, ultimately producing price collusion without explicit intent (i.e., without executives from competing firms meeting in a smoke-filled room to secretly collude).
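To make the mechanism concrete, here is a minimal, hypothetical sketch (my own illustration for this post, not an example from the paper) in which every seller runs the same simple “match the highest rival price” rule. No firm ever communicates with another, yet prices converge to the highest price on the market and stay there:

```python
def reprice(rival_prices: list[float]) -> float:
    """Match the highest rival price -- a deliberately crude stand-in
    for real-world algorithmic pricing strategies."""
    return max(rival_prices)

prices = [9.99, 10.49, 11.99, 10.99]  # made-up starting prices

for round_num in range(1, 6):
    # Each round, every seller scrapes the others' current prices
    # and reprices simultaneously.
    prices = [
        reprice(prices[:i] + prices[i + 1:])
        for i in range(len(prices))
    ]
    print(f"round {round_num}: {prices}")

# Within two rounds every seller charges 11.99 -- a collusive outcome
# reached without any explicit agreement.
```

The point is not that this particular rule is widespread, but that the collusive outcome emerges from each firm’s unilateral logic, which is exactly what makes intent-based legal doctrines hard to apply.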
The other paper was by Professors Katrin Nyman Metcalf and Tanel Kerikmäe from Tallinn University of Technology in Estonia, who drew on Estonia’s experience with e-governance to highlight the challenges posed by deep uncertainty about technological developments: specialized regulation will likely become outdated too quickly, whereas generalized omnibus regulation will not be effective. An interesting issue they raised concerned the jurisdiction of regulation, given the global and borderless nature of AI and technology: strict regulation in one jurisdiction will likely push innovation to less strict jurisdictions, and the stricter country will likely lose its competitive advantage.
The discussion centered on several key questions: should we regulate AI and other algorithmic technologies (yes, we probably should)? If so, what is the scope of regulation (hmm… this is tricky)? When do we start to regulate (again, quite tricky, as we don’t want to stifle innovation)? And how do we regulate (again, quite tricky, as an omnibus approach to AI regulation probably won’t work)?
As the token “non-law” person at the seminar, my commentary focused on how technologists would think about these regulatory issues. Technologists are fundamentally innovators who create things simply because they can and because it’s cool. Many, if not most, of the problems and challenges articulated during the talks stem from the fact that consequences are often unanticipated and emergent, and therefore cannot be foreseen during the algorithm design and testing phases. Are technologists ready for the added responsibility of ethics and regulation being placed on them? These seem to be normative factors that may be difficult to explicitly regulate or enforce. Should we be rethinking our software/systems development methodologies to incorporate explicit steps that attempt to uncover unintended emergent outcomes? I believe this is a worthwhile pursuit.
I’ve been working on formulating such a methodology with Jonas Valbjørn Andersen (IT University of Copenhagen) and Christoph Müller-Bloch (also IT University of Copenhagen, but soon to join ESSEC in France). Our work on how the design and behavioral features of blockchain networks influence emergent centralization in the conferment of validation authority in proof-of-stake (PoS) blockchains led us to think about how systems development methodologies and practices might be restructured so that unintended consequences of coordination algorithms can be pre-emptively identified in an iterative manner. This work is still ongoing, and we hope to have a draft to share soon.
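To give a flavor of the kind of emergent outcome we have in mind, here is a minimal toy simulation (my own sketch for this post, not the model from our paper): validators are selected with probability proportional to their stake and earn a reward that is added to their stake, so early random luck compounds, and validation authority drifts toward concentration even though every validator starts equal and follows identical rules.

```python
import random

# Toy sketch (illustrative assumptions, not our paper's model): in many PoS
# designs, the chance of validating the next block is proportional to stake,
# and validating earns a reward that is added to the winner's stake.
# This feedback loop lets early random luck compound over time.

random.seed(0)

NUM_VALIDATORS = 20
stakes = [10.0] * NUM_VALIDATORS  # equal starting stakes (made-up units)
REWARD = 10.0                     # made-up fixed block reward

for _ in range(10_000):
    # Stake-weighted selection of the next block's validator.
    winner = random.choices(range(NUM_VALIDATORS), weights=stakes)[0]
    stakes[winner] += REWARD

total = sum(stakes)
shares = sorted((s / total for s in stakes), reverse=True)
# With perfectly equal power each validator would hold 5% of total stake;
# instead, the final distribution is typically markedly skewed.
print("top 5 stake shares:", [f"{s:.1%}" for s in shares[:5]])
```

A development methodology of the kind we envision would make this sort of simulation-based probing for emergent dynamics an explicit, repeated step in the development process, rather than something discovered only after deployment.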