TILTing Perspectives 2017 report (1): The healthcare session

* This report is arranged in chronological order.

As the Dutch saying goes, “Waar is dat feestje? Hier is dat feestje!” (“Where is that party? Here is that party!”). Held at Tilburg University (Netherlands) a few weeks ago, the 5th biennial TILTing Perspectives conference was indeed a feast of thinking. It brought together an exceptionally large number of researchers, practitioners, policy makers, and civil society representatives working at the intersection of law, regulation, technology, and society to explore answers to the contemporary challenges facing technological innovation.

This well-organized, three-day conference featured five large tracks: Privacy, Health, Intellectual Property (IP), Data Science, and PLSC Europe, staged as plenary sessions, parallel sessions, and panel discussions with invited speakers, as well as presentations from respondents to the call for papers. Attendees could therefore customize their own schedule. Based on personal preference, this Kat also attended several sessions on "other topics" beyond the IP track – despite the strict role definition of an “IP-Kat”, nowadays everything seems to be connected anyway.

The Healthcare Session 

The ongoing technical and social developments in ICT and in consumer/patient apps have not only reshaped the structure and organization of care delivery, but also increasingly challenged the protective legal mechanisms surrounding healthcare and the concomitant principle of confidentiality. Meanwhile, a multitude of ethical questions have been raised...

1. Robot Doctors and Algorithm Therapists – The Limits of Automated Decision-making in Healthcare 

Sebastian J. Golla’s presentation emerged from “BeMobil – Regain Mobility and Motivity”, an interdisciplinary research cluster in which he took part. The BeMobil project is funded by the German Ministry of Education and Research and aims to develop and improve rehabilitation technologies and therapeutic systems, with a focus on intelligent systems in telecare settings.

Golla examined autonomous assistance systems in healthcare, questioning their decision-making capacity from a legal perspective: to what extent may autonomous assistance systems lawfully replace the decisions of doctors and therapists? He discussed how new technologies affect the right not to be subject to automated decisions under Art. 22 of the General Data Protection Regulation. He also tackled whether such technologies are compatible with the duties of practitioners as set out in medical associations’ codes of conduct: technologies should be applied in a way that helps fulfil professional obligations.

He concluded that medical/therapeutic assistance systems are becoming more important and can be very valuable in terms of saving costs and time and improving medical support. This also means that physicians will have to learn to deal with embedded (clinical) environments. They will need to be able to understand and explain the assistance systems in use and to review those systems’ suggestions; the role of doctors may accordingly change significantly.

2. Robotic Cognitive Therapy Research and the Law

Eduard Fosch Villaronga opened his presentation on an optimistic note: “A robot can adapt easily to each individual’s needs; robots’ behaviour is predictable and repetitive; robots are very engaging”, before identifying the risks involved: a decrease in human-human interaction, safety issues, compliance confusion between robot toys and medical devices, and - of course - privacy issues (which involve external sensors, robots, experts, AI and cloud computing).

Beyond the existing instruments (eg the EU Civil Law Rules on Robotics), he stressed that unleashing the full potential of these technologies, while protecting the interests of users, requires an appropriate regulatory framework.

To better regulate emerging therapeutic robot technologies, he suggested several principles as the basis for a future framework, namely: the principle of policy learning (e.g. evidence-based policies), the principle of individualization (e.g. individualization of care), the principle of no assumption (e.g. end-user-centric design, value-sensitive design), the principle of non-isolation (promotion of human-human interaction), and the principle of accessibility (e.g. low-cost robots, cheap robot-as-a-service).

3. Expert systems (ES) and medical malpractice: reframing the notion of negligence

“Expert systems promise to radically change the way medical consulting is performed. Medical doctors are supposed to stay in charge and hence be liable for negative consequences suffered by the patient.” – Andrea Bertolini presented his very interesting research on ES and traditional medical practice, taking IBM’s Watson as an example.

Starting from the question of what an ES is, the speaker addressed three distinct legal issues:
- Will ES impact the assessment of medical malpractice? Is it like any other tool? 
- Will ES influence the behaviour of medical practitioners? The new frontier of defensive medicine? 
- Will ES influence the apportionment of liability? A new player involved?

The speaker explored the applicable liability rules (negligence, product liability) in the context of European legal systems (namely the Italian, French and German systems) and US tort law.

Should doctors conform to what Watson says? “Most likely so”, Andrea answered, “Yet the true question is: is it really bad? If Watson delivers the most accurate information and data available at a given point in time, is it really bad that we force doctors to conform to that information?” Of course, using an ES does require a certain level of knowledge, and on that basis he explored some alternative solutions (e.g. the enterprise liability approach).