Database Working Group

23rd May 2019

At 2 p.m.:

CHAIR: Welcome everybody. We have a great agenda set up for the database today. And we will jump right in.

So, Sandra, if you want to come right up.

SANDRA BRAS: Hi, good afternoon. So, it's there. So, this is very quick actually, so thank you for the tiny slot. My name is Sandra Bras and I work at the RIPE NCC in the training department, and I am just here to ask if you could fill in the questionnaire that you can find on the slide. This is what we call the RIPE Database Learning Questionnaire, and it focuses on tasks that you can perform in the RIPE database. The results will help us change or adapt the learning content that we currently have, in order to make it more relevant to the things that you actually do in the RIPE database.

So if you also want to see how these results will be linked to our new programme, the RIPE NCC Certified Professionals, I invite you to read the RIPE Labs article that we published this week. There we also explain how we are going to do this, and that the RIPE database will be the first content that we are going to create an exam on.

So that's all from my side. If you have the link, it literally takes you two or three minutes to fill in, and this is especially for the people who do use the RIPE database.

If you have questions you can e‑mail me so we don't have to give time for questions right now. Thank you so much. And please, fill in the survey like now. This is the link and the RIPE Labs article too if you want to read more about what we are doing. Thank you.

WILLIAM SYLVESTER: Great, thank you so much. Everybody have the URL? Anybody need it? Ed.

EDWARD SHRYANE: Good afternoon. I should give everyone a couple of minutes to fill out the questionnaire; I don't mind waiting. My name is Edward Shryane, I am a senior technical analyst and I work on the RIPE database.

This is the RIPE database team. This update is a summary of their hard work over the last six months, so thank you very much to them.

Since the last RIPE meeting, in Amsterdam, we have been mostly busy with the implementation of the 2017‑02 abuse‑c validation project, so the triangle you are familiar with is firmly in the same position as before.

Working Group and policies:

We have put out three separate Whois releases as well. The first one was a point release fixing full text search; we had some stability problems earlier this year and we patched that. And two major releases, Whois 1.93. We improved the e-mail address syntax validation. This was raised by Rudiger at the last couple of RIPE meetings, and it was a good opportunity to fix it; we tied it into the abuse-c validation project as well, because if you can validate data going into the database, it makes it easier to feed it into that other policy implementation.

We combined that with a clean‑up as well. It took a while to do because it wasn't straightforward. Doing static validation of e‑mail addresses isn't easy and we tried to do it in a way that didn't break people's updates.
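Static e-mail syntax validation of the kind described might be sketched as follows. This is a hypothetical, much-simplified illustration only; the actual Whois implementation is stricter and based on the relevant RFC grammar:

```python
import re

# Hypothetical sketch of a static e-mail syntax check.
# Local part: common unquoted characters; domain: dot-separated
# labels ending in an alphabetic top-level domain.
EMAIL_RE = re.compile(
    r"^[A-Za-z0-9.!#$%&'*+/=?^_`{|}~-]+"                 # local part
    r"@"
    r"(?:[A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?\.)+"   # domain labels
    r"[A-Za-z]{2,}$"                                      # top-level domain
)

def is_valid_email(address: str) -> bool:
    """Return True if the address passes this static syntax check."""
    return bool(EMAIL_RE.match(address))
```

The hard part, as noted above, is choosing rules strict enough to be useful without rejecting addresses that already exist in people's objects.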

Another useful update was to not allow expired keys and to expire signed updates after an hour. I think that's a very practical improvement, it was a reasonable thing to improve, and it hopefully didn't cause too much disruption to anyone.

Also, we are now warning if a resource's Abuse-C is identical to its organisation's Abuse-C. It's often not necessary to put a duplicate Abuse-C reference on the resource, and that warning will tell people so. A lot of the Abuse-C references on resources are identical to the organisation's.

And then we laid some of the groundwork for the Abuse-C validation project itself as well. The final release, just before this RIPE meeting, was to add access control to full text search. That was a missing part of our GDPR compliance: now, if you do a full text search and find personal data, we account for that in the same way as on regular Whois queries.

And as part of the Abuse-C validation project, if we are unable to validate your Abuse-C address, we will now add a comment saying so to Whois search results.

So I have quite a few slides, if there are any questions please ask them as I go along.

So since the last RIPE meeting the major project has been 2017-02. Between October and December we did a trial run on 900 organisations. That was really useful to validate our implementation and to provide data for the board, because the board needed to make a decision on the implementation. They decided to notify users in Whois if abuse contact information appears invalid, and to direct them to the responsible LIR.

We did the roll-out of this implementation in three phases. We did the rest of the LIR organisation Abuse-C validation in February. We moved on to the resources in March, and the final phase is the end user organisation and resources Abuse-C validation; that is ongoing and we should be finished by the end of the summer.

And we are currently ahead of schedule.

Whois outages. I think we should be open about where we went wrong, where we can learn a lesson and do a bit better. We had three separate outages since the last RIPE meeting. One was on My Resources: for about three hours we had a mismatch between our back end and front end following a deployment, which led to the My Resources page refreshing. We caught that by the end of the day and fixed it.

Full text search: we had a problem where certain queries to full text search caused Whois to run out of memory, and we have a monolithic application where everything runs together. So we decided to turn off full text search completely to improve the stability of the entire service, and we took our time to fix it properly; we deployed the proper fix at the end of February.

And we are going to make further improvements to that in the future.

And the final thing was NRTM: there was some disruption to the NRTM service due to a single client swamping all of the connections to the service. The workaround was to disable the client, but we are putting in functional improvements to avoid that in the future.

So apologies if you were affected by any of the outages.

On the web application side, the improvements were mostly again around Abuse-C validation. We made a couple of changes to the query page. On My Resources we added a separate menu item so it's easier to find sponsored resources. We improved full text search and added rate limiting to restrict large queries, and we added the comment for Abuse-C validation.
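Rate limiting of this kind is often implemented with a token bucket. A minimal sketch follows; the rate and burst parameters here are illustrative, not the values the RIPE NCC actually uses:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: tokens refill at a fixed
    rate up to a burst capacity; each request consumes one token."""

    def __init__(self, rate_per_sec, burst, clock=time.monotonic):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.clock = clock
        self.last = clock()

    def allow(self):
        """Return True if the request may proceed, False if limited."""
        now = self.clock()
        # Refill tokens according to how much time has passed.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A large query would be rejected once its requests exhaust the bucket, and the client would recover capacity as time passes.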

Website analytics. This has been a topic of discussion recently, and we have taken the feedback on board: we have replaced the use of Google Analytics on the website with an open source, self-hosted alternative called Matomo (formerly known as Piwik). So why do we use website analytics at all? There are two main reasons. We would like to find out how visitors use the site, so we can make improvements. The second is to monitor performance: we'd like to know how fast pages load for users. We are not interested in collecting personal data, but we do need to know in aggregate how users are using the site.

So this self-hosted solution is more anonymous, as requests just go back to our own service. The client IP is anonymised at the /24 level. No other user data is stored; session data is only kept for 90 days, and aggregated data across the whole service is stored for longer than that for historical analysis.
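Anonymising an IPv4 address at the /24 level just means zeroing the host octet so that individual clients within a network are not distinguishable. A sketch of that operation, assuming IPv4 input:

```python
import ipaddress

def anonymise_client_ip(addr):
    """Truncate an IPv4 address to its /24 network, zeroing the
    final (host) octet, as described for the analytics setup."""
    network = ipaddress.ip_network(addr + "/24", strict=False)
    return str(network.network_address)
```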

I think there was discussion on two separate numbered work items since the last RIPE meeting. The first was NWI-8, regarding SSO authentication groups. The problem definition was published back in February, and I recently proposed a solution for that. I'd like to reduce the scope of the initial problem definition in the interests of getting a working implementation out there. So if the Working Group is agreeable, we can defer authentication groups until later and concentrate on the initial synchronisation between the non-billing users in the portal and the maintainer. It turns out there is already a maintainer for the organisation in the portal, the default maintainer, and this is set on most organisations already, so this is a mechanism we can use to keep non-billing users in sync with the RIPE database.

So I'd like some feedback from the community and from the Working Group on whether we can move forward with this and defer authentication groups until later.

The second was numbered work item 9, an in-band notification mechanism. The problem statement was to get updates to the RIPE database pushed out to users, regardless of membership status. Currently there is a near real-time mirroring service, but it's member-only and members also have to sign a separate agreement in order to use it. There is currently a small group of people using that, around 60 separate clients.

Opening that up to the larger community is not going to be easy, because of the restrictions around the agreement and also because the existing protocol is a custom protocol that won't be easy to extend, for example with extra filtering. So instead I would suggest that we completely replace it and use a more modern protocol, such as HTTPS using WebSockets and JSON, to provide the objects in an easily parsable format. This would allow us to replicate the current functionality and more easily add additional functionality like filtering or authentication.
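As a purely hypothetical illustration of the suggestion, an update notification under such a scheme might be a JSON document like the one below. The field names here are invented for the example; no message format has been defined:

```python
import json

def make_update_message(serial, action, obj_type, primary_key, attributes):
    """Serialise one database update as a JSON message (hypothetical
    format illustrating the WebSocket/JSON suggestion)."""
    return json.dumps({
        "serial": serial,            # sequence number for resuming a stream
        "action": action,            # e.g. "add_or_update" or "delete"
        "type": obj_type,            # RPSL object type, e.g. "route"
        "primary_key": primary_key,
        "object": attributes,        # the object's attributes
    }, sort_keys=True)
```

A client could then subscribe over a WebSocket and filter on `type`, which is exactly the kind of extension the current custom protocol makes hard.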

So again, this is just one suggestion from me, and if the community feels this is a good direction to go in, it would be nice to hear that. Otherwise, please propose an alternative solution definition.

Okay. What has happened since the NWI‑5 roll out last September, the out of region routing information:

So what has changed? There has been some progress since September; in particular there are 7% fewer route objects, about 6,000 fewer, and that is good news. But I think the amount of data in there is not going to resolve itself, so there are two separate proposals to clean up the amount of non-authoritative data in that data source. One is the 2018-06 proposal; version 2 has just been published. The idea is that any route that has an invalid RPKI result will get cleaned up. Job reported statistics that something like 33 would be cleaned up, so initially there would potentially be a very small operational impact, but with every published ROA the NONAUTH source becomes cleaner (quoting Job there). So that is still in discussion.
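The RPKI check that 2018-06 relies on is route origin validation. A simplified sketch of the classification logic (along the lines of RFC 6811; a real validator works from the full validated ROA cache, and this toy version takes ROAs as plain tuples):

```python
import ipaddress

def rpki_validate(route_prefix, route_origin_asn, roas):
    """Classify a route object against a list of ROAs, each given as
    a (prefix, max_length, origin_asn) tuple. Returns "valid",
    "invalid" or "not-found" (simplified RFC 6811-style logic)."""
    prefix = ipaddress.ip_network(route_prefix)
    covered = False
    for roa_prefix, max_length, asn in roas:
        roa = ipaddress.ip_network(roa_prefix)
        if prefix.version == roa.version and prefix.subnet_of(roa):
            covered = True  # at least one ROA covers this prefix
            if asn == route_origin_asn and prefix.prefixlen <= max_length:
                return "valid"
    return "invalid" if covered else "not-found"
```

Under the proposal, only objects classified "invalid" would be removed; "not-found" objects would be untouched, which is why each newly published ROA can make the NONAUTH source cleaner.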

The second thing: there is a numbered work item from 2016 on AFRINIC IRR homing, and it may warrant some more discussion. There hasn't been anything said about it since then, but AFRINIC now does have its own IRR service, so now may be a better time to do something about this, and it turns out there is a lot of AFRINIC data in the NONAUTH source. There is an opportunity for a large clean-up of that source, by either moving all of the prefixes to their RIR or deleting duplicates between the NONAUTH source and the AFRINIC IRR. So maybe some more discussion is warranted, because there's a large potential gain from doing a clean-up.

One completely separate thing is the Whois release process. Currently we go through a two-week release candidate stage for all releases. This is really good for feature changes, where there are functional changes that users have an opportunity to test before they go into production. But the downside is that bug fixes must also wait for the next planned release, and this can take months, as we have seen: there weren't any releases between October and May, and there were some useful bug fixes waiting in there too.

So my question to the Working Group is whether we can change the release candidate environment to be used for feature changes only and allow us to put out bug fixes immediately. This would allow us to deploy bug fixes faster. We would perform extensive testing as usual on all of our releases, and this can always be improved further, and we would also notify the Working Group of all releases. I will also raise this on the Working Group mailing list, because this would be a really positive improvement, I think. But I will wait to see what the feedback is on that.

Authenticating references to objects. This is a problem that the RIPE NCC has had. Currently only references to organisations are protected, by the MNT-ref attribute, but we could extend this to other object types. So if you are having problems with references to your Abuse-C role, technical contact, admin contact, zone contact or organisation maintainers, this could be a mechanism to protect the references to your objects. It's a compatible way of extending the existing MNT-ref mechanism.

Maybe that's something else I can suggest to the Working Group, because it's a problem that the RIPE NCC has had and I'd like to know if anyone else has had it. If this is not a pressing issue for anybody, we can shelve the idea.

To improve data quality and to protect people's personal data, we do a regular clean-up of unreferenced objects after 90 days. There is an existing mechanism to deal with organisation, maintainer and role objects, and also with pairs of a maintainer and a person or role, but this could be extended to other groups using the same object types but different relationships: maintainer/organisation pairs, organisation and person/role pairs, and maintainer, person/role and organisation groups. The number of objects isn't massive, but they can include personal data, and I think we should make more of an effort to clean them up if there is no operational reason for having these groups of objects in the database.
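The selection step of such a clean-up could be sketched as below. This is an illustrative simplification of the described policy, not the actual mechanism; the data structures are assumptions for the example:

```python
from datetime import datetime, timedelta

# Objects unreferenced for at least this long are candidates for deletion.
CLEANUP_AGE = timedelta(days=90)

def objects_to_clean(unreferenced_since, still_referenced, now):
    """Return the keys of objects that are not referenced and have
    been unreferenced for at least 90 days.

    unreferenced_since: dict mapping object key -> datetime when the
    last reference to it was removed; still_referenced: set of keys
    that currently have references."""
    return sorted(
        key for key, since in unreferenced_since.items()
        if key not in still_referenced and now - since >= CLEANUP_AGE
    )
```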

That's something I would like to extend.

GDPR. We have made some improvements since the last RIPE meeting. We are now accounting for personal data returned in full text search. And we have changed the defaults on the query page on the website to not return related objects by default and also to filter responses by default. So it's now less likely that you will get personal data by accident in your queries, which means it's also less likely for you to hit the daily limit and less likely for us to serve personal data to users.

The remaining improvements we need to make concern historical contact details; our legal department presented on this at RIPE 76. Because they may contain personal data, the recommendation is that we should not return historical contact details, as doing so is not in line with the purpose of the database or data protection legislation. What that means is two major changes: firstly, to not include personal data in historical queries (that includes the notify e-mail attributes and postal address), and secondly, to not include person and role references in historical queries. So if this is something you are currently doing, be aware that we are shortly going to change the database so this will no longer be possible.
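The first of those changes amounts to stripping certain attributes from objects returned by historical queries. A sketch, where the attribute set is illustrative only (the talk names notify and postal address; the exact list is the NCC's to define):

```python
# Attributes that may contain personal data and would be stripped
# from historical query output (illustrative list).
PERSONAL_ATTRIBUTES = {"notify", "address", "e-mail", "phone", "fax-no"}

def filter_historical(rpsl_object):
    """Drop personal-data attributes from an object represented as
    a list of (attribute, value) pairs, preserving order."""
    return [(attr, value) for attr, value in rpsl_object
            if attr not in PERSONAL_ATTRIBUTES]
```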

Finally, last slide: what's coming up next? We are going to continue those improvements for GDPR compliance. We are going to await feedback on NWI-8; there is now a new solution definition, but I would like to hear whether the proposed implementation agrees with what people want. We are going to improve operationally as well, and we are going to improve our RDAP service so it's more compatible with what the other RIRs are doing and matches more closely what the spec says we should be doing.

Okay that's it. Any questions on the operational update or anything else I raised?

AUDIENCE SPEAKER: I am the only one. Max from the RIPE NCC. I have a couple of questions and a comment from a remote participant. Cynthia Maya, speaking in a personal capacity, asks: Is there an ETA for the implementation of NWI-8, and is there an update regarding potential open access to NRTM?

EDWARD SHRYANE: So, no, we have no kind of ETA for NWI‑8 as yet but given the simplified implementation plan, I think it's something we can deliver in the short‑term rather than the long‑term. Especially if we defer the authentication groups it means we can get a first version out there in a reasonable amount of time.

NWI-9, yeah, it's a proposed solution definition, but I feel it would be easier to reimplement NRTM rather than opening up the current service, given that the current service is designed around a small pool of clients. If we put effort into opening up the whole protocol, that's effort that could be better spent on reimplementing it in a more modern, standards-compliant way. But it's a proposal, and if there's strong feedback in another direction we can definitely reconsider.

AUDIENCE SPEAKER: Thank you. And a comment from Cynthia as well, somewhat related to NWI-9: for my use case, changing the protocol is not really a solution; NRTM could be extended and the software is already there. That's just a comment.

EDWARD SHRYANE: I will take that on board, yes, thanks.

MARCUS JACKSON: You mentioned that bug fixes, etc., were being blocked by the feature releases. What kind of turnaround would you be hoping to achieve for bug fixes if you could separate those streams?

EDWARD SHRYANE: Our internal pace is two weeks, so we make improvements every two weeks. Potentially, if we fix something, it would be within a two-week window that we could get it into production. It all depends on the community and the Working Group, and there is a release candidate environment there for a very good reason. This would only move ahead if there is agreement from the community.

MARCUS JACKSON: Just curious.

JOB SNIJDERS: NTT Communications. I wanted to talk or ask I think the chairs mostly a little bit about process. Some context: Years ago, anybody had access to RIPE's NRTM service. Later on that was shut down because there was a perceived lack of interest but I think at the time that decision was made without understanding that there is a lot of aggregation so data is passed on from device to device, so we don't see all the users because we don't have direct TCP sessions to all the end users in this context.

Now, anyone can still obtain access to NRTM if they pay a significant fee, I think it's €1,500 a year, and that's a bit weird because we want non-RIPE networks to use the RIPE data to generate filters in order to facilitate and protect RIPE members. It was also raised at the time that NRTM exposes a lot of information that is not used for routing purposes. So I think it would be interesting if we made a slimmed-down version of NRTM that only exposes route objects, AS sets and route sets, but this is my dream. How do we get there? What is the process to open up a different version of NRTM for free to the global audience? Is this even the right Working Group?

WILLIAM SYLVESTER: I mean, I think the process right now would be to submit a working item and from there that would sponsor a discussion on the list. I think there is probably questions of what implications might there be for NCC services or otherwise, but I mean as far as it relates to the database and relates to displaying data within the database, I think that's in scope within the Working Group. I mean, but please, if anybody has other thoughts about this, you know, it's not just about the chair, it's about what the Working Group would like to do.

DANIEL KARRENBERG: RIPE NCC. What the chair said obviously is correct, and the other tip I would give you is: maybe collect some people who have the current NRTM feed and would move to the slim one, and have them tell the NCC so, because that might be enticing.

AUDIENCE SPEAKER: From the Amsterdam Internet Exchange. I am the person behind requesting work item 9. As far as I understand, at minimum for our use case, but I think it's definitely the case for other Internet Exchanges as well, we don't need to consume everything that NRTM offers; we are looking for something much more trimmed down and dynamic in nature. So I think, for example, the proposal you offered earlier, which as I understand is also what's behind the RIS Live interface, makes sense to us. I understand there are a lot of policy and other implications behind the whole thing, so of course we have to respect that as well. But from our perspective, in trying to honour the wishes of our customers as best as possible, it would make sense to consume that sort of information in a much leaner and faster way.

JOB SNIJDERS: The proposed solution splits out into two directions, and I would say current NRTM actually is suitable for the narrow scope of disseminating some of this routing information. NRTM over HTTPS aligns with some other efforts; we need to redo the protocol, and I would love to work with RIPE NCC staff on that, which would be NRTM version 4. But it may be good to make this numbered work item smaller so that the milestones become achievable. So: current NRTM, not ideal, but it's what we have. I want a new version, but we need to get some money and time to develop that.


EDWARD SHRYANE: Thank you very much for your feedback.


DENIS WALKER: Co-chair of the Database Working Group. Another point Ed and I have talked about in the past, which didn't come up, is the possibility of reversing the default on any standard query to not return personal data. Currently, if you don't specify any option, you will get personal data by default whether you want it or not. The possibility would be to reverse the default, not return personal data, and perhaps use the long-since redundant -R flag to choose to return personal data. It's just an idea, but we thought we'd throw it out to the community and see if there are any comments on it.

WILLIAM SYLVESTER: Thank you. Next up Denis Walker on personal data in the RIPE database.

DENIS WALKER: I brought this subject up at the last RIPE meeting on the amount of personal data that this database has. I wanted to give you an update on how much or how little we have managed to achieve in the last six months.

First of all, just an interesting point, the coming of age of the current version of the RIPE database. This version kind of went live in April 2001. Now, I know this version has ‑‑ had a parent written in Perl using files instead of a database, it had a grandparent which was probably a notepad in Daniel's back pocket. But this version has been going since 2001. That makes the current version of the RIPE database 18 years old. So in this part of the world at least it's now an adult. And perhaps it's now time to have a mature conversation about the future of this database. Now, I know that quite a few of you here were in that meeting on Monday evening about the bigger picture. Now, I just wanted to make clear that I wrote this presentation long before we had that BoF on Monday evening, even though a lot of what I am actually going to say backs up and reinforces and agrees with a lot of what was said on Monday night. But I did write it first.

So, the database content. We still have over 2 million personal data sets in this database; that means we have personal information on more than 2 million people sitting in this database. Very little, if any, of that can actually be justified. The purpose in the terms and conditions doesn't really justify personal data; it justifies contacts, but not necessarily personal data. Since the last RIPE meeting you have actually managed to delete 130,000 person objects. Unfortunately, you have also created another 105,000 person objects, which is roughly 500 a day, so during this conference another 2,500 people have been sucked into this database. And that's something we really do seriously need to think about.

Also the data quality, again at RIPE 77 and again on Monday evening it was mentioned that this database, the quality of it isn't quite up to the standard we would like. This isn't just about the mechanics of the database or the technical aspects of the database, we need a mindset shift in the people who actually enter and maintain this data. For the last 20 years it's just been normal to enter personal details of contacts because that's what was always done 20 years ago, nobody really cared about privacy and personal data issues but we live in a different world now, a completely different environment. These things actually do matter now.

To assist with this mindset shift, we might actually need to make some structural changes to the database because, as things are now, personal data is just splattered all over the database. There are endless repetitive links to the same personal data, where you are forced to put all those links in. There are mandatory attributes that demand contacts, which are generally entered as personal data. So, if we are going to have the mindset shift, we also need to make the structural changes to make it easy for you to manage this database, to do what it needs to do without entering so much personal data, if any at all.

A couple of clichés come to mind. If it ain't broken, don't fix it: well, this database is broken right now, and it must be fixed. If you are going to do a job, do it well: we often do short-term fixes, but the trouble with a short-term quick fix is that in a couple of years' time we are back to where we are now. This needs to be fixed for the longer term, along the lines of what was said on Monday.

Now, James, Elvis and I started to have a look at this personal data issue after the last RIPE meeting. We have identified a list of issues that need to be looked at regarding contacts and contact data: what contact information is needed, by whom, for what purpose; how, when and where should this data be stored and accessed? These are all fairly fundamental questions about this database.

We quickly ruled out a straight swap of person and role objects. Like me, a lot of you are engineers. If I said to you that the answer to this problem is that we must change person objects for role objects, I know what a lot of you would do, and we would end up with more than 2 million role objects in the database containing personal data. The two objects are virtually interchangeable, so that's not the answer.

Again, as was said on Monday, we actually do need to go back to basics and ask some really fundamental questions about this database. It's not just a question of asking the questions; we need to answer them, and that seems to be the difficult bit. That's why we haven't made much progress in the last few months, because getting answers to some of these questions is certainly not easy.

We started looking at what the policy requirements are for contacts. This is a policy-driven industry; the policies require contacts, so either we must justify those needs or change the policies. We put a question to the Address Policy Working Group four times, and this is the issue about struggling to get answers from people: we got virtually no response whatsoever. I know personal data and privacy are not very interesting subjects. There has been a lot happening in the Anti-Abuse Working Group recently, which does seem to have attracted some attention, and maybe these questions just slipped by and nobody even noticed. But there are potentially serious consequences if you ignore the issue of privacy and personal data. So, as I keep saying, we have to address this.

Now, the question we asked (I don't want to go into too much detail today, I just want to use it as an example) was about paragraph 6.2 on network infrastructure and end users; I am sure a lot of you know what that is, and you can look it up if you don't. I know that in the other Working Group there was a discussion last November about this very paragraph, but that was a different discussion; they were talking about how to define what type of network needs to be separately listed. Our question follows on from that: once you define what networks need to be listed, what information do you need to put in the RIPE database about that network?

Now, this is where wording, I think, becomes very important. A policy is a kind of quasi-legal document; it's the rules by which we operate, and you all sign up to say you will follow the policies. So if the wording in these policies is open to interpretation, then you have some problems, because different people will interpret those rules, those words, in different ways.

Now, as a native English speaker, when I read this paragraph 6.2, to me it seems very clear. What it's actually saying is that whenever you define one of these end user networks separately, you must have an organisation object. Because to me, that is what those words are saying.

Now, I remember when we had the Abuse-C discussion in the early days, the suggestion that if somebody had a separate abuse contact you would have to create an organisation object for them went down like a lead balloon. People weren't particularly keen on creating these organisation objects, but to me, this is what this policy says. I would be happy to debate that with you at another time on the other Working Group's mailing list. But I just wanted to show that there are complex issues now regarding these points about personal data, and this is not a quick fix.

We need to look at what exactly we need in terms of contacts. What does the RIPE registry need to be in the RIPE database for contacts? What are the needs of network operators and resource holders to have contacts in the RIPE database? Is it the same as it was 20 years ago? Have we moved on? What are the needs of other interested parties? The RIPE database isn't just the techie thing it was 20 years ago. In society now, lots of other factions, bodies, organisations and authorities think that the RIPE database can answer their problems.

So, should we accommodate them? Can we accommodate them? These are the sort of questions that need to be asked.

There may be a need for some data model changes to solve this issue of personal data. Now, I know there has been some reluctance in the community over the last few years to make data model changes, but to be honest, we have had 18 years with virtually no major change to this design. We have added bits to it and tinkered around with it and modified bits but no major change, so if it is needed to sort out this personal data issue to make some changes, there really can't be any argument against those changes because the RIPE database must comply with the relevant laws.

So given the scale and the scope of what we need to do just to handle personal data, do we need a task force? Just to illustrate it, I'm not going to go through all of these, but I made a list of some of the questions I think need to be asked, just about contacts. And as you can see, this is page 1 of them. There are quite a lot of questions that we do need to ask about contact information, contact details and contacts: what is a contact, what data do we need, who needs to contact whom, why, when, how; there are issues of organisations that are individuals; we need mindset shifts. So, there are quite a few questions there.

So the question I want to throw at you as the community, do we need a task force?

Any questions?

DANIEL KARRENBERG: RIPE database perpetrator. Just to set the history straight: the very first version the RIPE database was based on was a personal contact manager written by Peter Collinson of the University of Kent computing lab, and I just adapted it slightly. And I would even go as far as saying that the principles of the data model haven't changed for 30 years, and the last version, yeah, 20 years.

So, I think it would be a good idea to have a couple of people, not too many, tasked with writing down the purpose of the RIPE database again in a good way. That would make our legal department happy, but we should do it for our own sake, not for Athina's sake, and then develop requirements from that. What I envisage is a purpose document that is no longer than four A4s. I would, however, contest that it's a task for this Working Group. I think this Working Group is much too far down in the nitty‑gritty, so if you organise it from this Working Group, I think you should reach out to at least the operators or RIPE NCC members, the routing folks, the RPKI folks, and, yes, I will name the elephant in the room, the law enforcement community. And have a group of, let me be less ambitious, ten people or fewer, tasked within a finite amount of time, like six months, or let me be less ambitious, nine months, but no more, to come up with a four‑A4‑page document that states the purpose of the database and gives enough handles to develop requirements. And then from that, we can go into design. I think it's about time, after 30 years, to go through this exercise, but we have to make sure that all the current users of the database don't get surprises that make them opposed to this. So it has to be, at first, a small group that just achieves something, and then it needs thorough discussion of this purpose description so that it's clear everybody sees themselves in there. It's not an easy exercise, but I think it's bigger than the Database Working Group. Sorry for being so long.

DENIS WALKER: That's okay. Do you envisage that happening or starting very soon, or is it something that has just been thought about?

DANIEL KARRENBERG: I have thought about it, but hey, this is RIPE, we can do this immediately. It's just that we need some rough consensus that this is the way of doing it, we need a convener or two to basically organise this group, and we need to talk to Hans Petter, I think, just to make sure we don't step on anybody else's toes. But the proof of the pudding will be: will there be people who put enough of their free time into doing this and coming up with something useful? There's no reason to delay, as far as I am concerned.

DENIS WALKER: Thank you.

NURANI NIMPUNO: Asteroid. So, my very first interactions with the RIPE database were 20 years ago when I worked at the RIPE NCC, and we had to put together training material to teach people how to interact with the database. Then later I became a user of the database, and throughout these last 20 years I have also had to explain what the database is to a lot of people who are not in this room. So, I would like to heartily agree with Daniel. I think we need to not start with policy requirements or technical requirements; we need to look at the purpose of the database today, because that has changed, I think we can all agree on that. I am not going to comment on timelines or how many people should be in a task force or anything else, but what I would like to emphasise is that we seek input in a broader way from people who need to use this database.

So people outside of this room, whether or not you have people from this room in the task force, but I think we need to at least get that input into the group that then defines the purpose of the database and goes on to requirements, etc.

So, I think that sounds like a very good way forward and I don't think there is any reason to wait, let's start this as soon as possible. Thank you.

MAX: RIPE NCC. I have another question, from Cynthia, speaking in personal capacity: another issue is that, as my friend has first‑hand experience of, sometimes providers create person objects for customers and never delete them, and they don't get automatically deleted due to there maybe still being a /48 assigned. Do you have any ideas about a button to request deletion of person objects?

DENIS WALKER: There is a procedure for requesting deletion of person objects, and I think the first step of it is to ask the person who maintains it; if they are uncooperative, you can come to the RIPE NCC and they will follow the procedure.

Max: I have a couple more but let's rotate.

ATHINA FRAGKOULI: I would like to confirm what you just said: indeed, there is this process. Then I would like to add a clarification about the purpose of the RIPE database. Last year in Marseille, we presented our legal analysis of GDPR compliance in the database. This analysis was very thorough and extensive, and we based it on the purpose of the RIPE database as it was defined by the Data Protection Task Force back in 2010; this purpose is described in the database terms and conditions. We appreciate that, of course, this purpose can be re‑evaluated, again and again, and once we have a new definition of the purpose of the RIPE database we will perform a new analysis of the legal compliance and GDPR compliance of the database. Thank you.

DENIS WALKER: Yes. I think you will find the purpose includes contacts, but it doesn't necessarily include personal data.

ATHINA FRAGKOULI: I have to again repeat that in line with our legal analysis contact details are indeed needed in line with the purpose as it is defined, thank you.

MAX: A question from Elvis, of V4Escrow: The RIPE NCC forcefully creates a person object for the admin‑c and tech‑c for every new LIR creation. That's a few thousand objects per year. It should allow the use of an existing person or role object and not create soon‑to‑become‑stale data. Also, it creates duplicate objects when someone creates multiple LIRs, so it could at least recycle the objects already created for the previous LIR.

And the question is: Can I ask someone at the NCC if there is any plan to revise this process?

FELIPE: Thanks, Elvis, for bringing this up. We are actually already doing this, favouring role objects instead of person objects, and now we are planning to review the process for CS, especially for new LIRs, to favour role objects as well. So we plan to do this over the next few months.

PETER KOCH: DENIC. I will do one brave thing, which is disagreeing with a lawyer. My recollection of what the RIPE data protection task force did, and I was part of it back then, is that it was not necessarily what we would today call a purpose; we were talking about uses of the database, and that's a very important distinction. That brings me to the other point: I'd like, of course, to agree with Daniel and also with what Nurani said, except that talking about the purpose of the database immediately brings us into GDPR again, and the purpose of the database is very much linked to the purpose, role and function of the NCC in the first place. So that might be an even bigger discussion, and framing that is not a lightweight task, so getting into it too quickly might be a bit dangerous. Other than that, Daniel talked about the elephant in the room; I would like to add the mouse in the room by comparison: let's not repeat the mistakes other organisations have gone through, because the discussion of the purposes, and then the very important distinction between purpose and use cases, is very significant for both GDPR and compliance on top of it. I think it's important to have data protection be part of the discussion.

DENIS WALKER: Daniel, do you want to give a quick response to that and then, Max, your last question.

DANIEL KARRENBERG: Just a clarification. I did not mean, in what I said, that we should revisit the purpose of the database in the narrow sense of personal data protection. I mean it in the full holistic sense of what we are using this thing for, and I am totally aware that use cases and formal purpose definitions are different things, but we need to look at both. What I mean is a holistic approach, not limited to either the presentation we heard or personal data protection.

MAX: I have a quick comment from Elvis: I believe we do need a task force; just asking the Working Groups has not worked, getting answers to all those questions will take too long, or we will have to come up with a policy proposal that may not be what the community wants.

And also a quick question from Nick, speaking in personal capacity: Is there currently a way to become a RIPE member organisation without the personal data ending up in the RIPE database?

DANIEL KARRENBERG: Yes, there is if you don't want resources.



EDWARD SHRYANE: Cleaning up locked persons in the database. Thank you, Denis, for bringing this up and for highlighting this issue.

So, firstly, I am clearly not a legal or GDPR expert, and there is a bigger discussion to happen around personal data. But I'd like to concentrate on one concrete thing that we could do. There is a history of locked person and role objects in the RIPE database that we could now do something about, and it is clearly becoming more urgent that we do something about these objects.

This one slide kind of summarises the problem. There are over 2 million person objects in the RIPE database, as Denis said, and it turns out one‑third of those are locked, by a RIPE NCC maintainer. That gives some context.

How did these persons become locked? There is some history here. In 2001, the current incarnation of the RIPE database came into existence, and existing objects were imported into it. At the beginning, a maintainer was optional on objects in the RIPE database. Fast‑forward to 2010: in October 2010 the Data Protection Task Force requested mandatory authorisation for maintaining personal data in the RIPE database, and a maintainer became mandatory for person and role objects.

So between that time and 2016, some person and role objects remained unmaintained. No clean‑up was done at the time, and so a lot of those objects still exist. However, in that time frame no contact details were altered on any unmaintained persons. That's the good news: the name, postal address, e‑mail and phone stayed the same.

In 2016, this came to a head and we were told by the Board to lock the remaining unmaintained person and role objects. With IPv4 run‑out, resources were becoming more valuable and there was an increased risk of hijacking of unmaintained person and role objects, and there was also the issue of having personal data in those objects.

At the time, affected parties were not contacted, because we couldn't verify whether the e‑mail address was still accurate, and in any case only 25% of those persons had an e‑mail address; it's optional on the person object. It was a huge job for the RIPE NCC to correctly check all claims on unmaintained database objects, and it would not have been possible to assist users to unlock those objects. The procedure we follow now, if we are contacted about this, is to advise the user to create a new person object and clean up references to the old person; the old object then becomes unreferenced and is cleaned up separately.

And at the time, it would have been a substantial operational burden to respond to queries resulting from a mass notification.

And at the time there were 850,000 person and role objects.

Forward again to 2019, to now: there are still 635,000 locked person objects (and a fraction of that are role objects), so in total 75% of these unmaintained locked person objects still remain in the RIPE database.

So that's a third of the total.

So I'd like now to present a couple of ideas around cleaning up these locked persons. Firstly, the RIPE NCC should not be responsible for locked person objects; we should get closer to the correct maintainer of the data. If we can identify the responsible organisation for these locked persons instead, and unlock the person by assigning it to that organisation's maintainer, we can move forwards and at least have a chance of the proper contacts being able to review whether that information still needs to be in the database. So we make it the responsibility of the organisation.

Looking at references to locked objects, the vast majority of these references are from inetnum objects and domain objects. These account for 98% of all the references, and both object types are hierarchical, so there is some chance of finding, at some point up the tree, a responsible organisation for the reference. So it's possible to identify who owns the locked person using this reference.

Looking at the inetnums in particular, they are over 90% of the references, and over 98% of those reference locked persons from an object with ASSIGNED PA status. So there is an assignment in the database that is referencing the person, and using the hierarchy there will be an LIR organisation above it. And it turns out that just 10 responsible LIR organisations account for three‑quarters of all of these references. So it would be possible to contact a few of these LIRs and make a large difference in the bulk of these unmaintained locked person objects. The idea would be to review with them the need for these references, and the goal would be to improve the data accuracy in the RIPE database with this review.
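
The hierarchy walk described above can be sketched roughly as follows. This is an illustrative simplification, not the real RIPE database schema or API: the object layout, field names and the `parents` lookup are all hypothetical, but the idea, climbing from an assignment up the less‑specific inetnum tree until an LIR organisation is found, is the one in the talk.

```python
# Hypothetical, simplified sketch: walk up the inetnum hierarchy from an
# assignment that references a locked person until an object with an LIR
# organisation is found. Field names are illustrative only.
def find_responsible_org(inetnum, parents):
    """parents maps an inetnum prefix to its less-specific parent object."""
    obj = inetnum
    while obj is not None:
        org = obj.get("org")
        if org and org.get("org-type") == "LIR":
            return org["org-id"]
        obj = parents.get(obj["prefix"])
    return None  # no LIR organisation found up the tree

# Tiny illustrative hierarchy: an ASSIGNED PA inetnum under an LIR allocation.
allocation = {"prefix": "192.0.2.0/24",
              "org": {"org-id": "ORG-EXAMPLE1", "org-type": "LIR"}}
assignment = {"prefix": "192.0.2.0/28", "org": None}
parents = {"192.0.2.0/28": allocation, "192.0.2.0/24": None}

print(find_responsible_org(assignment, parents))  # ORG-EXAMPLE1
```

Grouping the results by the organisation found would then show which few LIRs account for the bulk of the references.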

Legacy is a tiny amount of this but we can consider that separately because it can have a different hierarchy and responsibility.

Okay, so there are other ways of finding a link between a locked person and its references. The vast majority of these locked persons are only referenced from one object. Firstly, the creation time: it turns out that most of these references and persons were created at the same time. Nearly three‑quarters were created within the same minute, and I just used the "created" date that is already on all of the objects. To be completely accurate, we can go back to the update logs and look at the source IP address for sync updates, or the e‑mail address for mail updates, to tie these together and prove a link between the referencing object and the locked person, so we are absolutely sure that there is a connection between the two.

Then we would automatically assign the referencing object's maintainer to the locked persons.
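
The "created within the same minute" heuristic is simple enough to sketch. This is a rough illustration under assumed data: the timestamp format and the tolerance are hypothetical, and, as the talk notes, the real clean‑up would also consult the update logs before asserting a link.

```python
from datetime import datetime

# Illustrative sketch of the matching heuristic: a locked person and its
# single referencing object are considered linked if their "created"
# timestamps are less than a minute apart.
def created_within_same_minute(person_created, reference_created):
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    a = datetime.strptime(person_created, fmt)
    b = datetime.strptime(reference_created, fmt)
    return abs((a - b).total_seconds()) < 60

# A person created two seconds before the inetnum that references it: linked.
print(created_within_same_minute("2005-03-01T10:15:02Z",
                                 "2005-03-01T10:15:04Z"))  # True
# Objects created years apart: no link inferred.
print(created_within_same_minute("2005-03-01T10:15:02Z",
                                 "2007-08-20T09:00:00Z"))  # False
```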

Separately, we could contact these persons directly, but it turns out that only 20% of these persons have an e‑mail address, because it's optional, and in any case these persons may not be able to do anything themselves. They may have no authority over, or access to, the RIPE database to do something, and this may be complete news to them; they may have no idea that their data is still in there.

So, if we can do this in bulk and have a contact between the RIPE NCC and the responsible organisation, we could make faster progress on this.

If we were to contact persons directly, there would obviously be a high workload to resolve claims one by one.

Separately, there is a possibility of deduplicating locked persons. It turns out that among these 600,000 locked objects there are a lot of duplicate combinations of person name plus phone number, person name plus postal address, and person name plus e‑mail address. This could be a separate step where we combine all this information and try to delete duplicates.

We could extend this later to other references as well, but particularly for locked persons there are a lot of duplicates, so in itself this may reduce the size of the problem.
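
A minimal sketch of the deduplication idea, under assumed data: the records, `nic-hdl` values and field names below are hypothetical, not real database entries, and the real process would need stronger evidence that two objects describe the same person before merging.

```python
from collections import defaultdict

# Group locked person objects by a key such as (name, phone); groups with
# more than one handle are candidate duplicates for merging.
def dedup_key_groups(persons, key_fields):
    groups = defaultdict(list)
    for p in persons:
        key = tuple(p.get(f) for f in key_fields)
        if all(key):  # only group when every key field is present
            groups[key].append(p["nic-hdl"])
    return {k: v for k, v in groups.items() if len(v) > 1}

persons = [
    {"nic-hdl": "XX1-RIPE", "person": "Jane Doe", "phone": "+31 20 0000000"},
    {"nic-hdl": "XX2-RIPE", "person": "Jane Doe", "phone": "+31 20 0000000"},
    {"nic-hdl": "XX3-RIPE", "person": "John Roe", "phone": "+31 20 1111111"},
]
print(dedup_key_groups(persons, ("person", "phone")))
# {('Jane Doe', '+31 20 0000000'): ['XX1-RIPE', 'XX2-RIPE']}
```

The same function could be run with `("person", "address")` or `("person", "e-mail")` as the key, matching the three combinations mentioned above.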

Finally, if all else fails, we can replace the remaining references to any locked persons with a dummy value; there is an existing DUMY role object in there. The locked persons themselves will then get deleted, which is the upside. But lots of DUMY references reduce data accuracy, and organisations and resources do need valid and accurate contact details, so we can't just leave a DUMY reference: chances are it will stay that way and we are trading one problem for another.

Benefits of a clean‑up: the responsible organisation assumes responsibility for the objects it should maintain, the RIPE NCC is no longer seen as responsible for maintaining these objects, and in general we will increase the data quality of the RIPE database. Inaccurate data has a chance of being corrected, unnecessary data can be deleted, and we are cleaning up personal data that perhaps shouldn't be in there.

And this is just the first step in dealing with the volume of personal data; it's by no means a complete solution, but I think it's a start and something we can make concrete progress on.

Any questions?

DANIEL KARRENBERG: I have a question for clarification. You talked about bulk updates. Does this in any way imply that the new maintainer does not have to be notified, let me make it stronger, does not explicitly agree to be the maintainer? In other words, is there any way someone could become maintainer of an object without having agreed to it first?

EDWARD SHRYANE: No, that would obviously be a very bad idea.

DANIEL KARRENBERG: Thank you. Then I misunderstood what you were saying.

EDWARD SHRYANE: To be clear, I think it would be necessary to do this in cooperation with the LIRs: announce it on the Database Working Group, send e‑mails in advance and work with LIRs one by one to do this clean‑up, because not only is it an opportunity to reassign responsibility, it's also an opportunity to delete references that no longer apply. Remember, these locked objects are old data, from before 2001 up to 2010, so at the very least it's coming up on ten years old, and older.

MAX: RIPE NCC. I have a comment from Elvis: I believe that updating the data to DUMY data is the right thing to do, then contacting the LIRs that have resources referencing these objects, or cleaning out the ones that are not referenced by resources. The DUMY data should be the last step. All the steps that you mentioned sound great, except for the one where you would contact each individual.


NICK HILLIARD: From INEX. Do you have any information on the volume of locked person objects which are currently being unlocked by end users at the moment?

EDWARD SHRYANE: Yes, the backlog is slowly getting smaller, so over time there are fewer locked persons in there. Once the reference is removed, the locked person will be deleted automatically, and at any one time, when I checked, there were about 10,000 unreferenced locked persons, so they will be cleaned up in due course, within 90 days. So there is progress happening, but at a very slow rate.

NICK HILLIARD: Okay. That's good. My second question was just about the duplication of person objects: are they referencing different maintainer objects, or what's the issue there?

EDWARD SHRYANE: It's purely within the locked persons. There is a lot of duplication, but it's something we would have to be completely sure of, that we are identifying the same person, I think. So it's an extra step, but it might help solve the problem.




Next up, Nikolas from the RIPE NCC is going to talk about country codes and a proposal about them.

NIKOLAS PEDIADITIS: Hello everyone. I am part of the registration services team at the RIPE NCC. And this is about country codes.

We raised this issue at RIPE 77 and discussed it with you. Since then, we did not really receive a lot of feedback, nor a proposal on how to address it. So we came up with one, and I would like to present it to you and see what the feeling in the room is, whether it is good enough to fix the issue that we see. Then, if all is good, we mean to submit an NWI item, initiate the process and take it from there.

I will do a brief recap of what happened at RIPE 77, in case some of you were not there or did not follow the discussion, so you can align and follow the discussion later on.

Starting with the problem statement. So, what is a problem that we see?

There is a lack of clarity about the meaning of the country attribute in the RIPE database and in our extended delegated statistics. That creates problems. There are a lot of people that don't really know how this attribute is supposed to be used, and it also creates inconsistencies, especially between what is in the RIPE database and what is in our extended delegated statistics.

To give you some background: originally, the country code was supposed to refer to the country where the network is located. That was the initial intent. Until recently, the two would match: the country where a network is located and the country where the resource holder has legal presence would, in most cases, be the same. However, networks are becoming more and more global, we have a growing number of members outside of our region, and we have a big increase in requests to change this country code attribute in our extended delegated statistics. In some cases, the reason behind those requests is one that makes us not entirely certain whether we should do it or not.

So if we look at those two data sets separately: for the RIPE database, there is a document, and thanks to Denis for pointing it out a few days ago, ripe‑50 from '92, I believe written by Daniel Karrenberg. It's now obsolete, but it said the country attribute in the RIPE database is the country where the network is located. If we look at the RIPE database manual of today, it says "there are no specific rules defined for this attribute. It cannot, therefore, be used in any reliable way to map IP addresses to countries".

And if we look at our extended delegated statistics, I will open a parenthesis here for the people that might not be aware: this is a file format created in 2008 between the RIRs, an effort to create a joint standard format and a way to publish our reserved and free address space together with the unallocated and assigned space. This was never a file meant to be used for geolocation services, for example.

If we look at the documentation of our extended delegated statistics, it also says it is not specified whether the country code that we have there means the country where the addresses are actually being used.

So we have this uncertainty in both data sets. And if we take a look at what the other RIRs are doing, in most of them the country code actually means legal presence. In some of them it's one or the other, but in general in most of them it points to legal presence.

So these are things that we already discussed in RIPE 77, it was just a brief recap of what happened there. So we came up with a proposal.

What if the country code in resource objects in the RIPE database remains as is? It is maintained by the resource holders, and they can choose the criteria for deciding what to use.

A new country attribute is introduced in the organisation object (this can be the country attribute that already exists but is not present there), and it will point to the legal presence of the resource holder.

This attribute will be maintained by the RIPE NCC, and if one needs to update it we can follow the same process as when an organisation changes its legal name: we ask for documentation to verify that the organisation indeed moved from one country to another, for example. This country code is then reflected in the extended delegated statistics.

And then the country code in that file will become the one pointing to the legal presence of the resource holder.

Why this proposal? Why did we think of it, or go this way?

First of all, it establishes a clear definition for the meaning of the country code in our extended delegated statistics. And it's something that we can verify: the legal country of the resource holder we can verify, and it is something we already have when we issue resources to someone. The location of a network would be difficult, if not impossible, for the RIPE NCC to verify. At the same time, this will bring what is in the organisation objects in the RIPE database in sync with what is in the extended delegated statistics, while allowing resource holders to retain the freedom to choose what country to put in, and it will bring a bit more consistency between the RIRs.

The idea behind this proposal is to improve the quality of our data. I think it's pretty clear to everybody that the quality of our data is the most important thing we have. You can accept this proposal, you can reject it, you can tell us if you want to change something in it; it's really up to you. But this is the thing that we can verify, and then we can say, okay, now we know this is at least accurate.

Any questions?

SHANE KERR: From Oracle Dyn. I wrote some text when I was RIPE DBM, like 18 years ago, saying basically that the country code is useless and don't rely on it. So, I'm in favour of this migration, but I would propose maybe going a little bit further and, at the very least, making the attribute optional in places where it exists today, and possibly coming up with a different name so we don't have an attribute which has different meanings depending on which object it appears in.

There have been proposals in the past, like geohints and things like that; there is a strong desire for people both to be able to label their resources and, for people looking them up, to get some hints about where they might be located. I don't think we should ignore that. But I think we should try to be very clear about the limitations of that, and definitely keep a separation between country information on resources and country information on legal entities.

NIKOLAS PEDIADITIS: That would also be an interesting proposal and would be one way of doing it. Eventually we can have separate proposals, or just pick one, and then the Working Group will hopefully decide which way to go forward. We don't have a strong feeling; it's just that if we want to keep this attribute, it might as well be an accurate one, something we can say is correct.

WILLIAM SYLVESTER: Thank you so much.


Next up is Job, he is going to give us an RPKI update.

JOB SNIJDERS: Good afternoon, I am Job Snijders from NTT Communications, and I stand here today in my capacity as co‑author of the 2018‑06 policy proposal that is circulating through the policy development process in the Routing Working Group. But since it's closely related to database work, it is appropriate to share an update with the Database Working Group and keep you informed on progress, or lack of progress.

A while ago, through the work done in NWI‑5, the RIPE IRR database was split into two separate components. There is now the RIPE IRR source, which contains only information that has been created with the explicit consent of the holder of the IP resources. And separately, there is the RIPE non‑authoritative IRR source, which is basically a historic artefact: it contains data that perhaps is of relevance to operations, perhaps is outdated, perhaps was put there for malicious reasons, perhaps was put there as a typo; it's hard to know. And many of the entities behind these IRR objects in the RIPE non‑authoritative database simply cannot be reached.

So, we came up with the idea of using RPKI to clean up the IRR, and I think this is a graceful approach that introduces a degree of data integrity without intruding too much on people that may depend on objects there.

What we propose is that the RPKI origin validation procedure, as described in RFC 6811, is applied to IRR route objects in the non‑authoritative database. When we look at origin validation, commonly it's applied to BGP updates: we take the prefix, we look at the origin ASN and match that against the data that comes from the RPKI, and then we conclude valid, invalid, or unknown if there is no covering ROA.

I think we can do the same thing with IRR data. And that it will be of benefit to our community.

Let's take an example. This is a real example, not contrived: somebody was experimenting with the non‑authoritative side of the RIPE database and registered a route object for a /24 that covers NTT IP space. This is IP space that is not managed by the RIPE NCC; it is managed by ARIN. And this route object was created without NTT's consent.

Separately from this, NTT has created RPKI ROAs covering a /16, where the maxLength attribute is set to 16 and the origin is set to AS2914. You can see that this ROA does not match the route object that was created here. What has happened is that this route object is in conflict with an RPKI ROA that we published, and the ROAs are a higher source of truth by definition, through the semantics of what a ROA means. In other words, origin validation is done with RPKI as input, not with the IRR as input.
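
The RFC 6811 outcome for a route object can be sketched in a few lines. This is a minimal illustration, not the proposal's actual implementation: the ROA data below uses private address space rather than the real NTT prefixes, and a real validator must consider all covering ROAs, as this sketch does.

```python
from ipaddress import ip_network

# RFC 6811-style origin validation applied to an IRR route object:
# "valid" if some covering ROA matches the origin ASN and the prefix length
# is within maxLength; "invalid" if covered but no ROA matches;
# "unknown" if no ROA covers the prefix at all.
def validate(route_prefix, route_origin, roas):
    prefix = ip_network(route_prefix)
    covered = False
    for roa in roas:
        if prefix.subnet_of(ip_network(roa["prefix"])):
            covered = True
            if (route_origin == roa["asn"]
                    and prefix.prefixlen <= roa["max_length"]):
                return "valid"
    return "invalid" if covered else "unknown"

# Mirroring the example above: a ROA for a /16 with maxLength 16 makes any
# route object for a more-specific /24 invalid, whatever its origin ASN.
roas = [{"prefix": "10.0.0.0/16", "asn": 2914, "max_length": 16}]
print(validate("10.0.5.0/24", 64496, roas))    # invalid (covered, wrong origin)
print(validate("10.0.5.0/24", 2914, roas))     # invalid (exceeds maxLength)
print(validate("10.0.0.0/16", 2914, roas))     # valid
print(validate("172.16.0.0/24", 64496, roas))  # unknown (no covering ROA)
```

Under the proposal, only the "invalid" outcomes would lead to deletion; "unknown" objects are left alone, which is why unwilling or unable ROA creators are unaffected.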

So we have a formal proposal, we have started the PDP process, and we are now in what I believe is the discussion phase. This will run its course over the next few weeks, and then, depending on the discussion on the Routing Working Group mailing list, the RIPE NCC can perhaps proceed to make an impact analysis.

And it's very important to keep in mind that this proposal only exists to help remove LACNIC‑managed or ARIN‑managed or AFRINIC‑managed or APNIC‑managed IP space from a database publication managed by RIPE. So this does not affect people that have RIPE‑managed resources. This does not affect people that have legacy space and signed an agreement with RIPE NCC to manage certain aspects of the legacy space.

This also does not affect you if you are unable or unwilling to create RPKI ROAs. So, a lot of the data in this non‑authoritative database will continue to exist, because there simply is no hint or suggestion whether it's correct or incorrect.

So the scope of this proposal is quite narrow and it's so narrow that it's almost strange to be discussing this with the RIPE community where the affected parties are not RIPE people.

So, there is a few things I want to highlight. This is the second version of the proposal. The differences between the first version and the second version can be summarised as follows:

In the first version, there was no time delay between detecting a conflict and acting on it. In the new version, thanks to feedback from this community, we propose that between detecting an IRR route object that is in conflict with an RPKI statement published through one of the five RIRs, and deleting it, there is a period of seven days.

In those seven days, the relevant parties may want to take action, or not. They will be informed using the relevant notification attributes, if they are present on the route object. So if there is an e‑mail address, then a notification will be sent: hello, seven days from now this object will be deleted, please take corrective action if appropriate, or not.

I have written a small piece of software, at the link below, that will print what would happen if this policy comes into effect. It will list the invalids, the conflicts and the unknowns, and this allows you to analyse whether it will impact your operations or not.

Here is an example of that tool. I am running the binary, that's the tool for RIPE proposal 2018‑06, with AS7018: only show me conflicts relating to AS7018, which is AT&T, managed in the ARIN registry. What we see here: it downloads the relevant databases, runs the validation procedure, and shows the route objects that are in conflict with an RPKI ROA; it also lists the ROA. What we can see from this example is that in the RIPE non‑authoritative database, a combination of a prefix and origin ASN is listed that is impossible from an RPKI origin validation perspective. So use this tool and try to figure out whether you are affected or not.

This concludes my update.

We have one minute left. So discussion should happen in the Routing Working Group if we want to adhere to RIPE's PDP process.

WILLIAM SYLVESTER: Real quick. The coffee break is right about now. We will open up some questions. Just before we let everybody go: we have one more chair slot open for the Database Working Group, and we will be kicking off the chair selection process after the meeting ‑‑ you will see it on the mailing list. With that, go ahead and take questions.

RUEDIGER VOLK: Has there been any activity in reaching out to the people who may be related to the stuff you want to delete, to inform them about it?

JOB SNIJDERS: To a degree, yes. There are entities, unrelated to the RIPE NCC, that have e‑mailed people about these conflicts if we have no contact details. I have published this tool with the intention that people can very easily research for themselves whether they are affected or not. And I would consider changes to the policy proposal to make that a formal part of the process. So the answer is: kind of half‑assed.

RUEDIGER VOLK: I would very much prefer that a systematic effort in communicating and pushing out these notices be made before trying to do the policing.

JOB SNIJDERS: I think we captured that, but maybe not entirely aligned with how you view it, by saying notifications are sent and then seven days later a deletion takes place, if we know who to notify.

RUEDIGER VOLK: Okay. If I take that kind of by the letter, that means the policy goes into effect and seven days later people are seeing lots of stuff in their inbox and things disappear.

JOB SNIJDERS: That is the current proposal, but we can change it. Should it be 14 days? Should it be a month? I don't know. Let's talk about it.

DENIS WALKER: Co‑chair. That was actually going to be my point. I was going to say that if you are going to send notifications out, seven days I don't think is enough; historically, over the many clean‑ups we have done over the years, a month would usually be a realistic proposition for giving people time to react.

JOB SNIJDERS: Since I already conceded that there should be a notification period, the seven days is arbitrary. So if the community feels that 30 days is better, let us know and we will update the policy text.

ERIK BAIS: Denis, a question for you, from one of the co‑authors. How about, before the policy is implemented, everyone gets a notification, and then the seven days apply for, you know, whatever is going on?

DENIS WALKER: I think again if you are going to send a load of notifications out and at some point later do something within a seven‑day period, people will have forgotten about it. The way we have normally done clean‑ups in the past is when you are about to do that, you give people a notification with enough time to actually respond and react to it at the point where you are going to delete it.

ERIK BAIS: Yeah, but the ‑‑ the notification is going to be sent seven days after they created the ROA, so if there are changes in the ROAs, then they will get the notification for seven days, so they have actively done something that relates to getting a notification.

JOB SNIJDERS: That is a very good distinction. There are two events that will happen: the event when the policy comes into effect, as Ruediger mentioned, where seven days later something happens ‑‑ at this moment that would affect roughly 700 objects out of the 69,000. And then, in the future ‑‑ and this could be a year from now ‑‑ if somebody creates an RPKI ROA and at that moment an IRR object comes into conflict, you again have that seven‑day or 30‑day timeline. So we should understand there will be two kinds of events. Or there may be no event at all.

DENIS WALKER: Fair enough.

MAX: RIPE NCC. I have a question from a remote participant, Sandy Murphy, speaking in a personal capacity: There was a proposal in the IETF SIDR Working Group, with one RIPE member as an author, about using RPKI signatures on RPSL objects. It even got published as an RFC, RFC 7909. Did that ever get traction in RIPE? I have always wondered. The RFC title was "Securing Routing Policy Specification Language (RPSL) Objects with Resource Public Key Infrastructure (RPKI) Signatures".

JOB SNIJDERS: The short answer is, no.

MAX: Great. Another one, from Jen Dixon son from TFM: Will the seven‑day hold period include a recheck of the conflicting ROA, in case it was removed before the route(6) object removal?

JOB SNIJDERS: That is a great question. My intention would be that the RPKI ROA has to exist for seven consecutive days; if it were to disappear, the whole thing is cancelled. So this could be a choice people make when they get the notification: they say, oh, this is actually not what I want, let me delete the RPKI ROA and return to this later. But yes, it needs to exist consecutively through the whole period. And that's actually not part of the proposal as written, so I should transfer my thoughts into text.
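The "consecutive existence" rule just described could be sketched as follows. Hypothetical Python, assuming a simple per‑day record of whether the ROA still conflicted; the `deletion_proceeds` helper is an illustration, not proposal text:

```python
GRACE_DAYS = 7  # per the current proposal text; the number itself is open

def deletion_proceeds(daily_conflict):
    """Decide whether a pending deletion goes ahead.

    daily_conflict: list of booleans, one per day of the hold period,
    recording whether the conflicting ROA still existed that day.
    Deletion only proceeds if the ROA conflicted on all GRACE_DAYS
    consecutive days; removing the ROA at any point cancels it.
    """
    return len(daily_conflict) >= GRACE_DAYS and all(daily_conflict[:GRACE_DAYS])

print(deletion_proceeds([True] * 7))                                  # True
print(deletion_proceeds([True, True, True, False, True, True, True])) # False
```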

WILLIAM SYLVESTER: Thanks so much.


Special thanks to the RIPE NCC for scribing and taking care of our every need and ‑‑

NICK HILLIARD: Nick Hilliard, just a quick item on AOB. Something just occurred to me about deleting stale objects. If an address block or ASN or some other sort of number resource is transferred from one organisation to another as a resource transfer, then that, it seems to me, would automatically invalidate any route objects, aut‑num or inetnum objects in the RIPE database. So maybe we should be looking at transfers from other registries, taking those and deleting the equivalent entries in the RIPE database ‑‑ if the date of the entry is appropriate.

WILLIAM SYLVESTER: I think we proposed something like this a couple of years ago, I think I proposed it.

JOB SNIJDERS: NTT. Nick, can you clarify if you are referring to things in RIPE IRR or RIPE non‑authoritative IRR?


WILLIAM SYLVESTER: Great. Any other business? With that, thank you very much. We will see everybody next time.