We are already in week 4 of the semester, and I had been planning to write a quick update on some recent developments in my projects and outputs this summer. Instead, this quarterly round-up has become somewhat longer than intended, so settle in and read on (there are some bonny pictures at least)!
Emotional AI: As part of our ongoing ESRC project on UK-Japan Cross Cultural Conversation on Emotional AI with Andrew McStay, Vian Bakir and Peter Mantello, we spent a few weeks in Japan over the summer running workshops at Ritsumeikan APU’s Tokyo campus in July, followed by a further workshop in London at the Digital Catapult in early September.
This was a fantastic trip where we worked with and met some brilliant new collaborators (see the PI’s own blog on this here).
We had a wide diversity of participants from a range of disciplinary backgrounds, including anthropology, new media studies, philosophy, computing, law, art, literature, journalism and criminology, to name but a few. This led to a range of opinions and insights on the emergence of Emotional AI in Japan and the UK.
Across the workshops, our participants came from: New School for Social Research in New York, Ritsumeikan Asia Pacific University (APU), Bangor University, University of Edinburgh, Northumbria University, University of Cambridge, Keele University, Kyoto University, Freie Universität Berlin, Meiji University, University of Sheffield, Rikko University, University of British Columbia, Chapel Hill University, Chuo University, Japanese Ministry of Defence, Digital Catapult, Sensum, Nvidia, Sensing Feeling, Centre for Data Ethics and Innovation, Privacy International, Dyson School of Design Engineering, Internet of Things Privacy Forum, Coventry University, ITD-GBS Tokyo, Doshisha University, UK Cabinet Office, and independent artists.
The first workshop considered the potential of emotional AI, exploring social benefits and harms. We considered the technologies and their commercial applications in Japan and the UK. We also examined how citizens might feel about them, why they would feel this way, and what the laws and governance frameworks that guide these technologies aim to do to enable citizens to live well with emotional AI. In the second workshop, we explored the deployment of emotional AI in a range of security and law enforcement contexts, particularly predictive policing and visual surveillance. We also examined recent trends around voice and facial recognition technologies, particularly at borders and in public spaces, and the role of smart bots in manipulating and triggering user emotions in social media and their use in computational propaganda shaping civic discourse. In the third workshop, we examined our preliminary analysis from the Japanese workshops and considered future commercial and research agendas (following this one, we also got a tour of the new VR dev and 5G infrastructure at the Catapult!)
After the second Japan workshop, we travelled down on the Shinkansen ( 😀 ) to APU’s Beppu Campus, where we were hosted by Peter for a week. We presented some of our research to the staff and students there, for example some of my work on the future of regulating smart cities. We were also frantically working on our bigger, longer-term follow-on bid to continue this collaboration (it has been submitted, so fingers crossed!).
One of my jobs on the project was to bring together the project report assimilating our cross-cultural discussions. We presented this back to delegates in the UK at the Catapult, and I think we have produced a really interesting report on the topic. It is now available here, and some of the high-level insights are copied below. The report goes into these in much greater depth.
Related to this work, Andrew McStay and I recently had a paper published in First Monday on governing emotional AI, in which we anticipate the data protection implications of the turn away from ‘basic emotions’ models towards more contextual/appraisal-based approaches. It looks at deployments in public spaces, particularly how emotional AI may come to be a layer within smart city infrastructure, and unpacks the interactional, ethical and legal questions stemming from this.
DADA: It has also been a productive summer for the Defence Against Dark Artefacts project too. Horizon PhD student Stanislaw Piasecki and I headed to EUROCRIM 2019, which ran in Ghent from 18-21 September, to present our paper, written with Derek McAuley, titled ‘Defence Against the Dark Artefacts: An Analysis of Assumptions Underpinning Smart Home Cybersecurity Standards’. This work built on a 3-month project at the Horizon CDT, and the paper touches on a variety of areas spanning criminology, law and computer science.
There is a full preprint available now on SSRN whilst the paper undergoes review. The paper questions the way standards (which are in part meant to be aspirational) assume IoT devices in smart homes need to be built using a cloud-based, centralised architecture. It instead considers the possibilities of edge-based storage and analytics, as the Databox enables, and questions how these shifts might assist in creating more secure smart homes. In particular, we draw on the criminological concept of routine activity theory (RAT), examining how it helps us to better understand the human dimensions of cybercrime in the home. We were interested in how the scope for redesigning device architectures, managing internal/external situational risks, using IoT cybersecurity standards, and understanding interactions through RAT might help to address the lack of capable guardians, reduce the suitability of targets, and pose challenges for likely offenders. Through a range of case studies, we unpack how these elements might align and support increased security in smart homes.
Stan also presented work building on this paper at the Transforming Privacy Law into Practice event in Oxford on 9-10 September 2019, examining how cybersecurity standards in smart homes might impact those living with dementia (bringing this work closer to his area of PhD research).
Similarly, in another DADA paper (based on work presented at BILETA 2019), Jiahong Chen, Lilian Edwards, Derek McAuley and I examine data processing responsibilities in smart homes. There is particular attention to how recent case law is recalibrating notions of joint controllership and the household exemption in the EU General Data Protection Regulation. We are particularly interested in charting the apparent extension of DP law into the home, and what the implications of this are from the perspective of data subjects, domestic data controllers and developers creating privacy enhancing technologies. This work has implications for the wider DADA project too, so watch this space (it is currently under review and a preprint is not up yet).
Another data protection paper that is now publicly available is with Andy Crabtree and Jiahong Chen, titled ‘Right to an Explanation Considered Harmful’. It is now up as a Working Paper on SSRN and available as a PDF. This paper explores how ‘lay and professional reasoning has it that newly introduced data protection regulation in Europe – GDPR – mandates a ‘right to an explanation’. This has been read as requiring that the machine learning (ML) community build ‘explainable machines’ to enable legal compliance. In reviewing relevant accountability requirements of GDPR and measures developed within the ML community to enable human interpretation of ML models, we argue that this reading should be considered harmful as it creates unrealistic expectations for the ML community and society at large. GDPR does not require that machines provide explanations, but that data controllers – i.e., human beings – do. We consider the implications of this requirement for the ‘explainable machines’ agenda.’
Moral-IT: There was recent success with a small project bid for further work on Cardographer, in order to develop this tool further. It will be exciting to see how it can be further integrated with the Moral-IT card deck to provide us with further data-led insights into how the cards relate to each other. On that point, we are busily working on data analysis and publications stemming from this project, and there will be a project blog on this soon too. I’ll be presenting some findings from our data analysis at research seminars in Edinburgh in the coming weeks. Also, Peter and I will be using the cards in a Responsible Research and Innovation training session for Horizon PhD students in mid-October. Somewhat relatedly, I also headed over to the Legal Design Summit in Helsinki in early September to hear about this emergent field. I went with some curiosity as to how the cards and my wider work in HCI + Law might intersect with the goals of legal design, which focuses on design thinking, service design and making law more accessible, e.g. information design for contracts. Whilst the concept is still at a (relatively) early stage of becoming mainstream, it is an area I’ll keep an eye on (particularly for additional readings for my newly planned HCI & Regulation course!)
Memory Machine: A paper by Dominic Price, Rachel Jacobs, Elvira Perez Vallejos, Dimitri Darzentas, Neil Chadborn, Sarah Martindale, Hazel Robbins and myself was presented at Designing Interactive Systems 2019 in San Diego. The paper, entitled ‘MeMa: Designing the Memory Machine’, is now available in the proceedings of the conference and it documents: ‘the Memory Machine project which aims to develop a device to capture people’s memories to create a blend of personal and factual data that builds identities, and contextualizes personal recollections. The Memory Machine has been guided by co-production and user-centred design principles to ensure users’ input has a critical role in the development of the technology. Through a series of creative workshops, we facilitated participants to discuss and represent their perceptions of memory making and recollection, towards the design of the Memory Machine. This paper investigates how a creative, participatory process enabled technical topics to be explored together, as well as enabling the participants to address more challenging issues of memory; such as painful memories, memory loss, and memories at end-of-life, with a particular focus on dementia, to inform the future design of the Memory Machine.’ Also, there is a very nice poster with hand-drawn sketches here, which gives an overview of the project.
Robotics and IoT: The final version of our Responsible Robot(icists) paper on the ethical dimensions of robots in the home, written with Horizon CDT students Natalie Leesakul and Dominic Reedman Flint, was published in May in the Journal of Information, Communication and Ethics in Society (sadly not OA, but the preprint is still here).
I was also invited to an exciting workshop on the Internet of Things and Surveillance, organised by Prof Lilian Edwards at Newcastle University in early September. This one-day event brought together a range of experts from technical, legal, ethical and social perspectives to debate the IoT’s present and emerging futures. It was a good catch-up with old friends and new. It was also hosted in the Urban Sciences Building at Newcastle, a smart building with many sensors (CO2, motion, occupancy etc.) whose data streams are (I believe) publicly available for research purposes (there is a detailed PIA available!)… I presented my work on human building interaction and the governance of smart buildings (although, being right at the end of the day, it was a bit of a whistlestop tour… cramming a 40-minute talk into 15 minutes!)
Also, it has been a long time in the making, but I sent off my final (final!) edits for my White Noise from the White Goods/Privacy by Design chapter for the Gikii book (formally titled Future Law), edited by Lilian Edwards, Burkhard Schafer and Edina Harbinja. I really quite like this chapter, which traces a history of IoT from an HCI perspective, mixed in with many sci-fi culture references and some empirical data on privacy by design from interviews conducted during my PhD.
Centre for Data, Culture and Society: I’m rather excited to be a core team member of a new centre established recently at Edinburgh. They are doing fantastic work on engaging social science and humanities researchers with data-driven research practices, through a series of events, fikas and future data analysis/management training opportunities. There is also a shiny new website where you can keep track of developments and news.
Alongside this, there has been much peer reviewing (both for journals and, more recently, for funding councils, which has been an interesting experience from the other side!), grant writing, course development and even conducting my first viva as internal examiner (well done again to Edinburgh’s Wenlong Li for a fantastic PhD on data portability).