Welcome to Erasmus Studio. My name is Evert Stamhuis and I'm a senior fellow in the Jean Monnet Center of Excellence on Digital Governance. Today I'm having a conversation with Anna. Welcome, Anna. Thank you. Anna Keuning is already a graduate of two master's programs, and now she's a researcher who shares an interest with me in the topic of today, which is chains and networks in public administration. This is a special aspect of the digitalization of public administration. Anna, what have you investigated? What do we see? Well, I think what we are seeing is that public bodies increasingly engage in partnerships together, and we call these chain networks. What they do is set up a network together in which they share data or decisions with each other, and they use these data or these administrative decisions for their own goals. We call this chain networks or chain informatization. I would say there are some defining elements to this chain operation. One of the key defining elements is cooperation, as I mentioned, but the second very important element is automation. What we see is that a lot of legislation is transformed into a set of algorithms; these algorithms are then fed with personal data and generate outcomes for the public bodies. What we see is that a lot of the algorithms have taken over some of the tasks that civil servants would traditionally do. But you already talk about, let's say, the outcome of a chain or network. What I've seen myself is very detailed cooperation mechanisms, for example, in public administration regarding taxes and Social Security. On the slide we show an image of a very complex cooperation mechanism concerning the salaries of ordinary people, but also relating to the sharing of that, let's say, financial information throughout a chain with a couple of partners. A couple of partners sounds quite uncomplicated.
But if you look at the image, you can see that it involves numerous stations and it's rather complex. This is just one example, relating to tax and Social Security. But there is also a chain regarding, let's say, criminal enforcement, and those sharing mechanisms. Not only the outcomes, but the sharing mechanisms themselves are already in place in the Dutch public administration, and we know that other administrations are developing in the same direction. Exactly. But you were talking about the outcomes. What are the specific issues regarding the outcomes of such a chain or network cooperation? Well, I think what's interesting about the outcomes is that all the decisions build upon each other, so the outcomes are co-dependent on each other within the chain. That's interesting for the public administration, because in Dutch administrative law, administrative decisions are viewed as independent. You can see that this can cause friction in the public administration. Is this just a temporary situation, or will we move on? What will the reaction be? My idea is that this is just the beginning. Yeah, I would say it is the beginning of a development, because what we see now is that in Dutch administrative law, for example, chain decisions are still viewed as traditional. But I would say, and I hope, there will be a development where we can view these decisions as what they are in practice, interconnected and highly automated, so that responsibility and accountability are secured and the individuals who are the addressees of the decisions will be protected. I'm sure we will come back to that later on. Yeah, we will. But for the moment, I also want to bring into this conversation that, let's say, this bottom line of sharing is indeed very much pushed forward on the national level as well as on the European level. We now talk about open data spaces and sharing of data for public administration.
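The co-dependence of chain decisions can be sketched in a few lines of Python. This is a purely hypothetical illustration: the public bodies, thresholds and rules are invented for the sketch, not drawn from any real Dutch system. Each node's automated decision consumes the previous node's outcome, so one wrong input shapes every downstream decision.

```python
# Hypothetical sketch of a decision chain: each public body's automated
# decision consumes the output of the previous one. All names, rules and
# thresholds are invented, not any real administrative system.

def tax_office(reported_income: int) -> dict:
    # Node A: assesses taxable income (here: trivially passes it on).
    return {"taxable_income": reported_income}

def benefits_agency(tax_decision: dict) -> dict:
    # Node B: grants a housing benefit based on node A's outcome.
    eligible = tax_decision["taxable_income"] < 30_000
    return {"housing_benefit": eligible, **tax_decision}

def student_finance(benefit_decision: dict) -> dict:
    # Node C: sets a loan supplement based on node B's outcome.
    supplement = 100 if benefit_decision["housing_benefit"] else 0
    return {"loan_supplement": supplement, **benefit_decision}

# A single wrong input at node A silently shapes every later decision.
decision = 31_000  # typo by the citizen or a clerk: should have been 13_000
for node in (tax_office, benefits_agency, student_finance):
    decision = node(decision)
print(decision)  # every downstream outcome reflects the upstream error
```

Although each node here is a separate function, legally each output would be treated as an independent decision, which is exactly the friction described above.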
The whole idea is that you share, you give access, you introduce multiple uses of data. Yeah, so even there I could see, let's say, not so much on the level of the decision-making yet, but on the level of the data that feeds into the decision-making chains, there is already a huge push right now towards more sharing, more common use, et cetera. It's considered, at least in the European discourse, to be beneficial to innovation, to economic well-being, to growth. At least one extra thing should be mentioned here: if, in the European context, artificial intelligence needs to be developed further, we need this pooling or sharing of data, because usually the databases themselves are too small to really work for learning algorithms. This is also pushing towards more deployment of artificial intelligence, as a result of which we see data being shared more and more across public administrations, public bodies, but also between public bodies and private actors like industries. Yeah, and I do wonder how these trends of shareability and open data spaces, for instance, connect to some of the GDPR principles like purpose limitation. I hope we're now coming to the legal. Yeah. Exactly. And ethical drawbacks of this. Exactly. What do you see here in terms of the legal worries or problems that you expect? Some of my concerns relate to gaps in responsibility. What I have seen from my research is the attitude of public bodies acting like they are independent, while in practice they are co-dependent on each other because they act within the chain. The problem is that they do not take shared responsibility, and what's happening is that public bodies will point fingers at each other when it comes to taking responsibility. I would say the concern is gaps in responsibility. I think the legal system does not create sufficient incentives now to solve this issue, because of this idea that public decisions are independent indeed.
The individual does not really have the tools to fight the fight, I would say. He can appeal against one decision in the chain, but he cannot appeal against the combination of decisions, so you're never sure if the error in the chain will be fixed by that one outlet decision. What you're basically saying is that there is a huge mismatch between the legal arrangements. Exactly. And the practice on the ground. Which is really worrisome, particularly for public administration, if I may add. Because in public administration, responsibility does not only mean being accountable for wrongdoings, but also being accountable to democratically elected councils or parliaments, on the national or the local level, and also to the wider public. If you say that it is harder for the individual to find out where the wrong has actually been done and who is responsible for it, that's even more worrisome in the public domain, where you expect that there is transparency towards democratically elected bodies, but also towards the wider public. Of course, in public administration we expect a legal and ethical duty of public administrators that there is the opportunity for individuals to have their individual circumstances taken into account. Well, automation in itself is already a reduction, because algorithms think in black and white, in yes and no, in ones and zeroes. Even with the less advanced algorithms that are now in practice, we see that individual circumstances are rather hard to fit into these mechanisms. As a result, the individual who wants to seek a remedy against a specific decision needs to bring those circumstances up themselves. Exactly. And if he doesn't know who to talk to, and who to appeal to. What to do, yeah. What to do, that's quite worrisome. It is.
This is already true for one decision-maker that has an automated decision system, but it's even more worrisome for the phenomenon that we focus on today, which is the chain or network, because you have this reduction already at node A. Yeah. If the data travels through the chain, towards node B, C or N. Yeah. Then the combination of reductions is even worse. Exactly. We already know from research by others that hardship clauses, the specific provisions for taking into account specific individual circumstances in Social Security, are quite a problem in automated systems. Of course, we know what machines are good at at the moment: calculations with numbers. Numerical decisions like income tax for individuals, or speeding tickets where on the motorway you drive a certain number of kilometers above the limit. In that sense, you could say that automated decisions, at least in the Dutch jurisdiction, are fairly well accepted, because this is what machines are good at, and the appeal rate is rather low. Not that high. No, no, no. But if it's non-numerical, well, some say that artificial intelligence, learning algorithms, will solve that problem, but for me that's still up in the air. Those reductions, where each node in the chain adds its own reduction to the ones before it, would be a serious worry with regard to taking personal circumstances into account. Yeah [inaudible] Then, well, if it's true what you say, that it's even harder to remedy against the system, then that's worry number three. Yeah. If you do not know who to appeal to, or who is really responsible for wrongdoing in the system. Maybe you can appeal against one outlet decision, but that won't affect the other decisions. This is what we really need to understand. There is this outlet. Yeah. Which is usually, let's say, the point of entry for a remedy.
But you don't know whether this outlet is sufficient to solve the issue, because you can imagine that if a certain error flows through a chain, it can cause problems at multiple nodes. But if you can only appeal against the outlet node, it won't solve your issue completely. That's the main issue, I would say. This is on the individual level, and we know that in systematic thinking they call these cases the outliers, and they say you don't need to focus on the outliers. But for us as lawyers, this is a totally irrelevant qualification, because this is what the law is all about: to provide compensation for individual wrongs as a counterbalance against this systematization of decision-making processes. Towards an individual, it's even an outrageous qualification. [LAUGHTER] If someone says, yeah, you have a problem, but you are an outlier in our system. It's not important, yeah. It's very harsh. Interestingly, by the way, when responsible politicians defend these situations in the media, they often say this indirectly by saying, yeah, but a lot is going very well. Yeah. Exactly. A lot is efficient and cheap and customer-friendly, but the outliers are ignored and just shoved away, you could say. Yeah. Also on the systematic level, security is an issue, because we know from research regarding chain informatization, quite old research, that the number of transactions correlates directly with the number of security risks, in terms of mistakes but also in terms of data leaks and all those issues. We will not pay attention to that in this conversation, but there is also this security issue that comes up. It's an additional problem. Is this all, let's say, worry, worry, worry, or are there some solutions? Yeah, there are solutions, I would say. I think the GDPR has made a good start with finding a solution, thinking about liability and responsibility in different terms than the public administration is doing now.
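The problem with appealing only at the outlet node can be made concrete with a small sketch, again entirely hypothetical: the records, bodies and rules are invented. If a successful appeal corrects only the outlet decision, the next automated run of the chain simply re-derives the wrong outcome from the uncorrected upstream record.

```python
# Hypothetical sketch: an error corrected only at the "outlet" node is not
# healed upstream, so a later run of the chain reproduces it. All records
# and rules are invented for illustration.

records = {
    "A": {"income": 31_000},  # source record; typo: should be 13_000
    "B": {},                  # intermediate body
    "C": {},                  # outlet node the citizen can appeal against
}

def run_chain():
    # Periodic automated run: each node re-derives its decision upstream-down.
    records["B"]["eligible"] = records["A"]["income"] < 30_000
    records["C"]["benefit"] = 100 if records["B"]["eligible"] else 0

run_chain()
assert records["C"]["benefit"] == 0   # the wrong outcome reaches the citizen

# A successful appeal fixes only the outlet decision ...
records["C"]["benefit"] = 100

# ... but the next run re-derives it from the uncorrected source record.
run_chain()
print(records["C"]["benefit"])        # back to 0: the chain is not healed
```

Only a correction at node A, propagated through B and C, would actually heal the chain, which is the joint correction mechanism discussed later in the conversation.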
The GDPR basically provides a joint liability scheme for controllers and processors that engage in the same processing activities. This is very good for the applicant trying to claim damages, because he does not have to find out which one of the actors in the chain has ultimately caused the damage. One of the actors gets pointed out; he has to pay the full amount of damages, and after that he can seek redress from the other actors. Yeah. Interestingly, this might be a solution, because there you view the chain or the network as a whole. Because, interestingly, the point is that actors, administrative bodies, usually talk about synergetic effects. We cooperate and then we create synergy. 1 plus 1 [OVERLAPPING] equals 3, and everyone is happy. It sounds great. Yeah. But for the individual, it's usually inverted synergy. The other way round, where you can see that 1 plus 1 equals 1 at the maximum, and maybe even approaches 1 plus 1 equals 0 for [OVERLAPPING] the individual. What you now describe, viewing in the legal arrangement the network as a whole [OVERLAPPING] towards the individual, that might be a resolution. I think so. What's great about this scheme is that even if actors decide to internally allocate their own responsibilities, this arrangement cannot be held against the applicant, so that's some extra protection he will get. This arrangement I recognize, because this is also what we now see in the European Parliament resolution of October this year, 2020, where this idea of one point of entry for the people who have suffered harm is also introduced for the ethical deployment of AI and robotics. You see the same basic arrangement there: the internal composition of a network or of a complex system cannot be held against an individual who has suffered harm, and he or she is offered a single point of entry.
We have given an image of that, which will show [OVERLAPPING] how this will work, where you see that the single point of entry is given to this specific individual, a corporation or a person, and then inside the network the components should sort it out between each other. They will have to fix how they will redress their members. But is this then the solution to the problem, [OVERLAPPING] or are there some drawbacks? Yeah, there are some drawbacks, I would say, because claiming damages can be great, but it's not always the main interest of applicants, because, as we spoke about, errors can cause ongoing issues for future decisions, for example. If you do not correct these errors, future decisions can be contaminated by them. For example, if you keep getting administrative fines that are incorrect, I think you will go crazy at some moment and you just want them to stop. Your main interest will be to heal the chain of this error. But a joint correction scheme is not offered by the GDPR. There is a right to correction, but this is aimed at the individual controller, and it's also not quite certain how this is enforceable at the moment. Now, two situations come to my mind. We have a famous case in the Dutch situation of identity theft - Exactly - where someone had to go to terrible lengths, years and years of proceedings against the government authorities, in order to get, in all the nooks and crannies of the system, the correction that he was actually entitled to. The second instance that comes to my mind is the situation where, let's say, decision-making chains are combined with fraud surveillance technology, because we know that fraud surveillance technology can identify fraud risks on the basis of data combinations in a model.
But if the fraud risk is allocated to a specific person and then fed back into the decision-making chain, you have a poisonous mix: there is this mistake, and the decision-making chain has already, let's say, processed this wrong information into certain decisions. Then of course it's even harder to find out where the correction needs to be made and how, and how you can be sure that you are indeed relabeled not as a fraud risk but as an ordinary citizen. Exactly, because the whole chain is already contaminated by this information. You're not sure which decisions will be influenced by it. Yes. We have seen these outrageous examples here in the Dutch jurisdiction that I would say are a warning to all other jurisdictions: if you combine, let's say, a risk approach in fraud detection with a security approach in decision-making chains, then the risk dominates, and it can be really detrimental to the individual who is then unjustly dealt with by public administration. I would say a second problem is that the GDPR is limited, because it's great for protecting personal data, but chain errors do not have to be personal data issues; they can be caused by other things as well. If a certain law is incorrectly implemented, for example, an individual won't be able to rely on the GDPR, so that's a second issue, I would say. Yes, and I usually learn, when I talk to computer specialists, they say, "Yeah, you focus on these more serious risks, but do not forget typing errors." Typing errors, just simple mistakes. Interpretation also plays a part, where individual corporations or individual persons have to enter their data into the system and may not understand the questions, because taxable income is a legal concept, which is clear to our colleagues from the fiscal department. Exactly, but not to everyone. Not to everyone. No. Do we have an example of cases like that? Let me think.
Yes, I think with tax incomes, as you say, where something in the calculation goes wrong. It's just a calculation error. It has nothing to do with personal data. It's just that the sum of money is incorrect, for example, and it then gets fed into another public body, and that public body says, well, you do not have the right to a student loan, for example. Okay, so this issue of automated decisions is also related to student loans? Yes, exactly. It's all kinds of concrete things, I would say. Students that watch this video also need to worry. Watch out. Okay, good. A lot is going on. Yeah. First of all, I think we can come to the conclusion that we, lawyers, need to study this reality and improve the legal arrangements, more specifically for situations like this, and also be a critical follower, a critical observer, of these practices. That's clear. Do we have other topics for research that we need to focus on? Yeah, I think the technical solutions would be very interesting. I was wondering what you thought about the opportunities for technical solutions in relation to research. Well, what I see in this sharing practice, which is, let's say, rather underdeveloped in public administration, but also in health care and in industry, is that, fed from a totally different source, namely data sovereignty, there is a growing awareness of the risks of, let's say, sharing, of data traveling from one party to another. What you now see is a switch from sharing data, providing databases to other users, to providing access for the user to your data. The data remain inside your area of control, and you provide access to them. At least, what I have learned for the moment is that it's obviously easier to come to these technical solutions for the problem of sharing too widely or sharing the wrong data. The data do not really flow through a chain then, for example.
No, it's just the access that flows through the chain, and this is then combined with what are called privacy-enhancing technologies, with which there are some experiments, also in the Dutch public administration, so that you do not have to give access to your full database. Instead, you first send a fully encrypted query to the database, a hit/no-hit outcome comes back, and only in the case of a hit are the data then shared, encrypted, et cetera. The combination of technologies, encryption, privacy-enhancing technologies, hit/no-hit systems, then leads to at least a reduction of sharing too much data, et cetera. But a technical solution to the contamination that you've talked about, I haven't seen yet. Of course, I would be interested in it, something that could positively "infect" the whole system to resolve the error. I think that's something to look for. But what would you expect? If we promote this as, let's say, if we argue that the regulation, the public policy solution, should make this the target that you have to meet, would that then stifle innovation, or would it? I'm not sure, I wouldn't go there. I'm not sure if that would be the case, if that stifles innovation. I think it could maybe also stimulate innovation. We're not sure, I think. I think we should do some research first to have conclusions on that. All right. I think that's an interesting topic as well: how does innovation relate to regulation? Yeah, we know of course there are a lot of, let's say, common claims that are shared, even in academic papers. Yeah, exactly.
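The hit/no-hit exchange described here can be sketched in Python. This is a strong simplification: real deployments use proper privacy-enhancing technologies such as private set intersection or encrypted queries; in this sketch a keyed hash (HMAC) merely stands in for the encrypted query, the shared key and all identifiers are invented, and records are released only on a hit.

```python
# Simplified hit/no-hit sketch. An HMAC over the identifier stands in for
# the encrypted query of real privacy-enhancing technologies; the key and
# identifiers are invented for illustration.
import hashlib
import hmac

SHARED_KEY = b"demo-key"  # in practice: a properly managed secret

def blind(identifier: str) -> str:
    # Blind an identifier so the raw value never leaves the querying party.
    return hmac.new(SHARED_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# The data holder keeps its full database; it matches only blinded queries.
database = {"NL123": {"flag": "review"}, "NL456": {"flag": "none"}}
blinded_index = {blind(k): k for k in database}

def hit_no_hit(blinded_query: str) -> bool:
    # The querying party learns only whether there is a match.
    return blinded_query in blinded_index

query = blind("NL123")
if hit_no_hit(query):                        # only on a hit ...
    record = database[blinded_index[query]]  # ... is the record released
    print("hit:", record)
else:
    print("no hit")
```

The point of the design is data minimization: a no-hit query reveals nothing about the database, and even a hit releases only the single matching record rather than the full dataset.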
My experience so far is that if you dig deeper, there is not so much empirical basis for either position, because there are examples of both stimulating and stifling effects. But in this specific example, it might be interesting to promote that the regulation should require that if you want to process decisions in consortia of chain- or network-like structures, you need to have a remedy mechanism in place for the entirety. Exactly. Not just providing compensation for harm, but also remedying any mistakes. Then it may help. Stimulate innovation. It may help public authorities to say, "we're going to buy only systems that have this safeguard as well." Then, for example, by way of these procurement conditions, you could stimulate the development of that. Maybe we could sell it like that, in a positive sense. Okay. Well, thanks very much for your interesting input. Thank you. Yes, thank you as well. Thank you very much.