DNS Working Group session
Thursday 23 May 2019
CHAIR: Hello. Welcome to the DNS Working Group session at RIPE 78. My name is Joao Damas, I am one of the co‑chairs, and Shane Kerr and my other co‑chair are sitting here. I'd like to get started. Let's go through the agenda first.
We published the agenda; it's on the website. Does anyone want to add anything at this time? No. Last week Shane was kind enough to provide a copy of the minutes from RIPE 77 to the list. We received a couple of comments with a few changes, which have now been fed back to the RIPE NCC. Does anyone have any other comments? No? Okay, we'll be publishing that as the final minutes.
The RIPE NCC, as usual, is providing a scribe, thank you. And we would also like to thank the stenographers for their usual really good job.
First speaker today is Anand, giving us a quick update on the RIPE NCC DNS services.
ANAND BUDDHDEV: Good afternoon everyone. I am Anand Buddhdev of the RIPE NCC and I am here to do a quick update on some of the stuff we have been working on and what we're planning to do in the coming months. First of all, we have had a few changes in our team. Some colleagues have left us and new people have joined us, so I thought it would be appropriate to introduce you to the faces of DNS at the RIPE NCC. We are a team of six people from different countries, and it's a nice diverse team. I'd also like to point out that Florian, who is on our team, is also a developer in the OpenBSD project as a hobby, and he has lately been working on a project called unwind, which you will hear more about from Carsten Strotmann later.
So, at the last RIPE meeting, I mentioned our dynamic signer migration project. I want to give you an update. We switched away from our Secure64 signer appliances to Knot DNS, and we achieved this using a KSK roll. Essentially what we did was import the KSKs from each signer into the other, and so we had a period of double KSKs, and this allowed us to transition from the old ones to the new ones quite smoothly.
At the last RIPE meeting in Amsterdam we were actually in the middle of this KSK rollover, and I had put up some slides showing this. Just after the RIPE 77 meeting, we went back and withdrew the DS records pointing to the KSKs from the old signers, and that allowed us to complete the KSK roll and switch fully to the new signers. So we're still using the Knot DNS signers and we're quite happy with them. We are currently on version 2.7 of Knot DNS, and this one does single‑threaded signing; when we feed it our largest reverse DNS zone, which is about 50,000 resource record sets, it takes about 23 seconds to sign it fully. And I mean, this isn't an especially large zone, so 23 seconds doesn't seem like much. But the smart people over at CZ.NIC have released version 2.8, which does multi‑threaded signing and signs the same zone in just 3 seconds. Thank you for this extra improvement. At some point in the coming months we will be upgrading to version 2.8.
The RIPE NCC has a secondary DNS service with VeriSign, and this is mainly for DDoS mitigation. So if there is an attack against the RIPE NCC's own DNS servers, which serve ripe.net and other important zones, then having a secondary provider with extra capacity ensures that the domain name ripe.net keeps resolving so that our services remain available.
VeriSign has sold this part of their business to Neustar, and so our contract has been moved to Neustar. We're actually evaluating Neustar's service at the moment. We have until October to decide whether to keep this service, to discontinue it, or perhaps to switch to another provider. So we are considering this now, and in the coming months we will make our decision.
K‑root continues to grow slowly. Since the last RIPE meeting, we have added four new sites. One is in Thimphu, in Bhutan, and this has been sponsored by APNIC, because it is in their service region; we cooperate with some of the other RIRs, such as APNIC and LACNIC, in deploying new K‑root sites. We have also enabled K‑root servers in Madrid, Luxembourg and Taipei in Taiwan, and the one in Taiwan is quite interesting because it gets up to 8,000 queries per second and, as you can see, most of these queries come over IPv6. So this is one of our more unusual instances, and we think that this is because one of the largest ISPs in Taiwan has v6‑enabled DNS resolvers and they send all their queries to us over IPv6.
Moving on, CDS/CDNSKEY for the reverse DNS. That's RFC 8078, which defines a process to automate updating a signed child zone's DS record in the parent zone, and this makes the deployment of DNSSEC easier. At the last RIPE meeting our community said to us that they would like us to implement this RFC for the reverse DNS tree, so that those who have signed zones and who publish CDS/CDNSKEY records don't have to update their domain objects by hand. We have been a little bit busy, so we haven't had much time to work on this, but we would like to do a lightweight implementation of it in the coming months. What we propose is that we are going to track the child zones that already have DS records in the parent at the moment, and for these child zones, if we detect CDS or CDNSKEY records and they are DNSSEC‑validated, then we will automatically update the domain objects of these users in the RIPE Database with the new DS records. I would like to stress that we are not going to scan the entire reverse DNS space looking for signed child zones. So if a user signs their zone for the first time and would like to establish a chain of trust, then this process will still have to be manual, as in they will have to update a domain object in the RIPE Database. After that, maintenance of the DS records should be automatic with this new feature.
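The lightweight policy described here can be sketched as a small decision function. This is a hypothetical illustration only; the function and parameter names are mine, not the RIPE NCC's implementation.

```python
def should_auto_update_ds(has_ds_in_parent: bool,
                          cds_records: list,
                          dnssec_validated: bool) -> bool:
    """Decide whether a child zone's DS set may be replaced automatically,
    following the lightweight RFC 8078 process described above."""
    if not has_ds_in_parent:
        # Bootstrapping the chain of trust stays manual: the user must
        # first create or update the domain object in the RIPE Database.
        return False
    if not cds_records:
        # The child publishes no CDS/CDNSKEY records: nothing to do.
        return False
    if not dnssec_validated:
        # Only DNSSEC-validated CDS/CDNSKEY data may be acted upon.
        return False
    return True
```

The key property is that only zones already delegated securely are tracked, so an unsigned or freshly signed zone never triggers an automatic change.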
And that's it from us. I welcome any questions or comments.
AUDIENCE SPEAKER: Andrei, CESNET. Hello. As part of the community requesting CDS/CDNSKEY support, I'm happy with that lightweight implementation. I think it's more than enough, because reverse DNS is a slightly different situation from forward DNS, where the people buying forward DNS domains are usually not technical people. But everybody doing reverse DNS delegation actually knows what they are doing, so they should be able to set the first DS records by hand and then just not care forever. So I'm very happy with that lightweight support. Thank you.
JOAO DAMAS: Anyone else? Thank you.
Next up is Marco on the ENUM instructions.
MARCO HOGEWONING: Good afternoon colleagues. I work for the RIPE NCC, in the External Relations department, which apparently means you do a lot of things, including DNS. Who remembers ENUM? Actually quite a lot of you. That's good, I can skip this slide. Yes, so ENUM is a protocol used to transpose telephone numbers into DNS labels, and as such the RIPE NCC, somewhere around the turn of the millennium, stuck its hand up and volunteered to operate the global E.164 tree. That is what this is about: the current status of ENUM. This is the public ENUM, e164.arpa, which is part of the DNS under the root zone. There is also a private ENUM; this is about the public side. On the public side we have 57 delegations, which are based on country codes or on what are called special services, like satellite phone operators. And this started with somebody phoning me up saying, hey, I kind of want to get rid of my public ENUM registration, where is the procedure? I said, that must be online. It turns out it wasn't. Anyway, we started looking into this and found that although there is very limited use, there are still people asking for domains in this zone.
So, going a bit further, looking over time and working with our colleagues from Registration Services, it turned out that over the last 15 or 20 years that we have been doing this, out of the 57 registered domains, 22 developed some sort of issue, ranging from database objects being deleted, to servers going lame, to a temporary delegation that was never renewed but, hey, people are asking for it and it is still working, so we had better keep it there before we break something. There was a lot of garbage, a mess.
So, we operate the zone on the instructions of the IAB. Last summer we went to the Internet Architecture Board and said, hey, you provided us instructions, but there is some stuff missing and it is causing a bit of operational problems these days.
They shared our concern, so a new set of instructions got drafted. Most importantly, we have added some text on how to do deletions. And as we covered deletions, we have also added some text that allows us, as the operator, to do some audits, some checks: is this still alive, and if it's not alive, maybe we should start working on it?
Now, the trick here is that because it's based on E.164, it's based on phone numbers, the delegation itself has to be approved by the member states. So there is a whole process that coordinates this with the ITU TSB, which in turn coordinates with the member states, who say yea or nay. So once we had these instructions done, we also interfaced with Study Group 2, saying, hey, we're going to do this, and Study Group 2 also did their bit.
The good news: everybody agreed. So the new instructions have been published on our website, and they have also been mailed to the Working Group list. The ITU‑T Study Group 2 in turn updated their instructions to the TSB. So we can actually start looking into the faulty delegations. We will try to recover any technical issues, we will try to recover any lost contact points, but importantly, and that was also in coordination with SG2, if we can't resolve it, eventually we will advise the member state that it's better to delete the zone and wait for somebody to show up and say, hey, I can operate this registry, and then ask for a new delegation, than it is to keep lame delegations or outdated objects in our database.
Why am I here? Well, you all remember ENUM. Some of you might remember requesting it a long, long time ago. If you were one of those, we would appreciate it if you would contact us and help us clean this up. It's only a handful of registrations, but in terms of data quality we can probably come a long way if some of the people that have been involved contact us and help us provide a chain of custody, reauthenticate delegations and make sure that either things get removed or things get working again.
That's it from me. Happy to take a few questions. Otherwise, please mail your ENUM requests; they will go to my colleagues in Registration Services, who are happy to deal with your old delegations and happy to help recover them and get them back into a working state. Thank you.
JOAO DAMAS: Thank you Marco.
AUDIENCE SPEAKER: Peter Koch. Thank you for reporting and even more thanks for getting these things straight.
One question regarding the technical part of the work that you are now entitled to do. Would that ‑‑ and I'm now combining two dead horses ‑‑ would that actually allow DNSSEC signing down the chain or is that something that would have to go up ‑‑
MARCO HOGEWONING: I am kind of eyeballing ‑‑ I do think we are DNSSEC‑signed already, and in fact one of the problems we found is that one zone is currently no longer properly DNSSEC‑signed, while it's mandatory per the RFC. But we're happy to take this offline and look into it. I do think these are supposed to be DNSSEC‑signed, and we also have the infrastructure to do that.
PETER KOCH: Thank you.
JOAO DAMAS: Anyone else? Okay. Thank you Marco.
CARSTEN STROTMANN: Hello. So, I made a little survey of the Open Source DNS privacy software project landscape. I work for Men & Mice, here in Iceland. There are new privacy protocols being developed for DNS, and I wanted to know whether these new protocols also spark new software projects, or is it still the same usual suspects that work on DNS software? These privacy protocols sparked quite a lot of debate in the IETF and the larger DNS community, but this talk is not about that; it is not about the politics around that. It's about the software and the software projects that I want to look into.
So I wanted to compare how and when the new RFCs spark new software projects over time, what number of projects we see for each of the two main new protocols, DNS over HTTPS and DNS over TLS, and what programming languages are being used to implement new projects in the DNS space.
Now, a short refresher on the DNS privacy protocols. DNS queries and responses can be used to spy on users. Traffic can be altered during transport between the resolver and the end client, because that last mile is not secured by DNSSEC, if DNSSEC is deployed at all. And DNS queries can even be blocked in certain networks to implement some kind of censorship. The goal of these two new protocols is to mitigate these problems. We have DNS over TLS, which is the slightly older of the new protocols, standardised in May 2016, and we have DNS over HTTPS, which is the younger one, standardised in November last year, 2018.
So, DoT, DNS over TLS, defines how we speak DNS, the same DNS that we all love and use, just not over UDP but over TCP, and that TCP is then secured with TLS, which is the same encryption technology that is in HTTPS.
It has a dedicated port, 853, and it gives us encryption and authentication. So by using it we can make sure that nobody looks into the data or alters the data, and as the client we can also be sure that we talk to the right server, if we authenticate the server; the protocol allows that.
And DNS over HTTPS is also DNS, the same DNS that we use over UDP today, but it is transported over TCP and then wrapped in HTTPS, which is the same protocol we use for browsing. It doesn't have a dedicated port; because it's HTTPS, it works over the HTTPS port, which is 443, and it also gives us encryption and authentication. So these two protocols are quite similar.
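Since both protocols carry the unchanged DNS wire format, the difference is only in the framing. A minimal sketch in Python illustrates this; the query builder below is a toy illustration, not a full RFC 1035 encoder.

```python
import struct

def build_query(name: str, qtype: int = 1, qid: int = 0x1234) -> bytes:
    """Build a minimal DNS query message in wire format (RFC 1035)."""
    # Header: ID, flags (RD bit set), 1 question, no other records.
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME as length-prefixed labels, terminated by a zero byte.
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)  # QTYPE=A, QCLASS=IN
    return header + question

msg = build_query("ripe.net")

# DoT (RFC 7858): the identical message goes over TLS on port 853,
# framed with a two-byte length prefix, exactly as in plain DNS-over-TCP.
dot_payload = struct.pack("!H", len(msg)) + msg

# DoH (RFC 8484): the identical message becomes the body of an HTTP
# request to port 443, with Content-Type application/dns-message.
doh_headers = {"Content-Type": "application/dns-message"}
doh_body = msg
```

In both cases the bytes on the wire inside the encrypted channel are the same DNS message; only the outer framing and port differ.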
So, here is what that looks like in pictures. This is classic DNS over UDP or TCP: a client sends a query to a DNS resolver, and that resolver, if it doesn't have the answer in its cache, goes and asks the authoritative servers on the Internet.
Now, we are talking here about the transport between the client and the resolver. This is not about resolvers talking to authoritative servers; only the green line here. And on purpose, I have drawn it so that the green line is not going to the DNS resolver down there in the network of the client, but into another network somewhere on the Internet, because that is the reality today. If someone is using DNS over TLS or DNS over HTTPS today, most of the time that traffic goes out into the Internet to some service provider offering these services, because I have not seen an ISP in Europe deploying them. I may not have seen everyone, so if there is an ISP here that may be deploying this, I would like to hear about it; it would be interesting to know.
This can also be used in a forwarding setup. So if the local DNS resolver doesn't speak to the Internet directly but wants to forward all the traffic to an upstream resolver, that transport leg can also be secured with DNS over HTTPS or DNS over TLS.
Now, what are the differences, if these two protocols are very similar? DoT, DNS over TLS, runs on a dedicated port, and because of that it can easily be blocked; or, the other way around, usually that port is already blocked, especially in company networks. So getting DNS over TLS to work is quite a challenge, or for the normal user impossible, if the user doesn't have control over the firewall in their network.
On the other side, DNS over HTTPS just looks like normal web traffic. It is specifically designed not to look different, and it goes over port 443, which is usually open, because in networks today everyone uses browsers and hopefully everyone uses encrypted transport, hence HTTPS.
Programming languages today usually have libraries that bring HTTPS functions with them. So one would expect that it is easier for developers to implement DNS over HTTPS than DNS over TLS, because the library functions are already there. And DNS over HTTPS enables developers to do name resolution at the application level, completely bypassing the operating system and the policy that is in the operating system for DNS name resolution, which for some people, including me, is not optimal.
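As an illustration of that point, a stock HTTPS library is all a developer needs. Here is a minimal sketch of building an RFC 8484 DoH GET request using only Python's standard library; the resolver URL is a placeholder, not a recommendation.

```python
import base64
import urllib.request

def doh_get_request(wire_query: bytes,
                    url: str = "https://doh.example.net/dns-query"):
    """Build an RFC 8484 DoH GET request: the raw DNS message goes into
    the 'dns' query parameter, base64url-encoded with padding stripped."""
    b64 = base64.urlsafe_b64encode(wire_query).rstrip(b"=").decode("ascii")
    return urllib.request.Request(
        url + "?dns=" + b64,
        headers={"Accept": "application/dns-message"},
    )

# A 12-byte dummy DNS header stands in for a real query here.
req = doh_get_request(bytes.fromhex("abcd01000001000000000000"))
# The request object could then be sent with urllib.request.urlopen();
# no DNS-specific library is involved, which is exactly the point above.
```

The same application-level convenience is what lets software bypass the operating system's resolution policy entirely.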
So, back to the survey. Now, I am not a scientist, so this is not an all‑inclusive survey, just some data that I found on GitLab and GitHub. I did it in May, this month. And I only counted projects that have software components, not composition projects; that is, I didn't count it if someone had a Docker file that set up a DoT server inside Docker but where the DoT server itself is not in the project and is some software pulled in from another site. At this URL, implementations.html, served from a server that itself does DoT and DOH, I have a long list, which will grow, of all the implementations of these two protocols that I am aware of. If there is something missing from the list and you think it should be there, please send me an e‑mail; my address is on that page.
So, comparing all the projects: how many projects do we see for DOH, DNS over HTTPS? That is actually 32 projects, which is the bigger chunk. But DoT has 23 projects. Not bad.
When have these projects been started? I looked at either when the project was started, if it supported one of these protocols from the beginning, or, for an existing project, when it implemented DoT or DOH somewhere along the way. It all started in 2015 with three projects coming in, and two of these three were the usual suspects, I would say: people who work in the IETF and RIPE community and were already involved in the standardisation process. One project was there which, at least to my knowledge, had no connection to the IETF or to the RIPE community here. There was only one new one in 2016. In 2017 we had many more. And 2018 saw an explosion of new projects in this area implementing these two protocols.
And this year it has slowed down a little, but I still find one or two new projects every month, just by searching on GitHub and GitLab and so on.
Then I looked at project liveliness: whether there was some kind of activity in the repositories or in the issue tracker in the last half year. Most of the projects are still active, so there is something going on there: either development is going on, or issues are not only being posted but also being answered by the developers. And there are some 14 projects which have fallen dead, which is not necessarily bad, because it's Open Source, and if someone wants to start implementing one of these new protocols, these projects can serve as a starting point to create your own stuff.
A few examples that have these two protocols built in: Firefox, Chrome, and Curl, the command‑line HTTP client, do DOH; Tenta and another Chromium‑based browser on the Android platform have DoT implemented. On the system resolver side, which is really part of the operating system, doing either DOH or DoT at the operating‑system level, there is systemd-resolved, a relatively new component in Linux that can do DNSSEC validation and can do DoT. However, it is not enabled by default; it can be switched on, and at some point in time the developers plan to switch on DoT by default.
Then unwind, that's the next talk, I will go into detail on that; that's on OpenBSD. And there is a resolver module for the Linux glibc which has the nice property that it hooks directly into the name resolution of the Unix or Linux operating system, so no other change is needed. systemd-resolved is the classic proxy: the system sends its DNS queries to the loopback address, and on the loopback address a full resolver is running.
Then there are client proxies. A client proxy is a piece of software that you install on your normal operating system; on one side it speaks normal classic DNS over UDP, I would say, and then it proxies the request over DNS over TLS or DNS over HTTPS. There are a number of client proxies available, and some of them have additional functionality: some do ad blocking, or load balancing between multiple upstream servers.
And then there is the server proxy, which is the other way around. It terminates DNS over TLS or DNS over HTTPS and then sends the request to a traditional DNS server, like a BIND name server, which at the moment doesn't do either of these protocols but, as I learned, will in the future, or a Microsoft DNS or an NSD or others.
And then there are already DNS servers that natively speak one of these protocols, which are Unbound, Knot and SDNS; these have the new protocols built in.
What I found missing in these projects is DANE, because one of the protocols, I think it's DoT, has an RFC that says it can either use X.509 certificates in TLS with a normal trust chain through the certification authorities, the Internet public key infrastructure, or it can use DANE and have the certificate hashes in the DNS. Of course, using DANE, a DNS‑based service, to certify trust in a DNS server has a kind of bootstrap problem. I would also like to see some kind of witness checks, where a client proxy could send the same query to multiple upstream servers, wait for the answers and then compare them, to make sure that none of the providers of the upstream servers is playing funny games and maybe changing the answers, because whoever uses such a DNS resolver service on the Internet must trust that service not to change the data. And code security audits: I am not a developer, I do some programming, and I looked at some of the source code and found some of it really not trustworthy. So it's difficult for the normal user to decide which project is good and which is not.
Then it's maybe better to just stay with the usual suspects, the projects that have been here for ages.
I found a rich software ecosystem for DoT and DOH, and users who want to use this can certainly find applications for their operating system. In particular, I found that everything that is written in Go easily compiles on Linux, Windows, Mac OS and the BSD operating systems without any problems. That is much easier than software written in C, which has the classic C compile chain of configure, make, make install, because that needs to be tweaked if, for example, software that is being developed on Unix is to compile on Windows. With Go that seems not to be a problem at all: you just check out from GitHub, you compile on Windows, and it just works, even if the original developer never tested on Windows.
And operators can find several proxies to implement DoT or DOH in their infrastructure, and I would like to see some operators do that, because I fear that if that is not done, then all the customers' DNS traffic will move to the cloud and away from the ISPs.
AUDIENCE SPEAKER: Nikolay Lemon. I have a question. When looking at the different applications, have you checked whether both protocols were usually implemented, or how many of the implementations were using DoT versus DOH? Because I would expect that DOH is more in the pure application layer, like web browsers and so on, and DoT closer to the operating system or...
CARSTEN STROTMANN: Actually, no. The big applications are the browsers: the desktop browsers, Firefox and Chrome, implement DOH, while the two other browsers, which are based on Chrome, implement DoT. Of all the other proxies, most do just one; a few do both. And among those that do just one, there is a little bit more DOH than DoT. But there is no clear picture that lets us say DOH is just in the application and DoT is just in the operating system. You can find a lot of DOH proxies that run at the operating‑system level.
AUDIENCE SPEAKER: I would like to say that we are going to support DOH in Unbound. We have some financing for that, and we'll be implementing it in the second half of this year, hopefully with a beta by the time of IETF Singapore. And I think that Petr is now going to say something about Knot Resolver; can he jump the queue and say what he wants to say? I think it's the same thing. Unbound supports DoT.
AUDIENCE SPEAKER: Petr. We have a DoT implementation and also a DOH implementation, and we hate the DOH implementation, so please don't use it.
CARSTEN STROTMANN: So why do you implement it then?
AUDIENCE SPEAKER: Geoff Huston. I am following up on a very quick comment you made about using DANE inside TLS giving you some kind of circularity issue, I think you said. And I'm thinking in my head, I'd like to understand that, because I don't think I agree. The fundamental basis of trust inside DANE is actually the KSK, and using a DANE‑based key in a TLS handshake would actually work in my mind, and I was kind of interested to understand why you think there might be issues there.
CARSTEN STROTMANN: No, I probably didn't think straight in this case.
GEOFF HUSTON: In that case, the only other comment I would make is that I think the implementations currently just borrow the handshake from OpenSSL, so you get this CA mess, and I think that's lazy programming by implementers. I think allowing the mess that is the web PKI to infect DoT is the first step to hell. Don't go there. Put DANE in, please.
CARSTEN STROTMANN: I must say the C‑based implementations use either OpenSSL or GnuTLS, but the Go‑based systems have their own TLS implementation, so they use that; it's not OpenSSL.
GEOFF HUSTON: In that case there's more of a possibility of actually running DANE inside there as the point of authentication of the name that you are going to ‑‑
CARSTEN STROTMANN: Maybe.
GEOFF HUSTON: Would Unbound and Knot like to comment on whether they are going to go beyond TLS and actually use DANE rather than the web CAs?
AUDIENCE SPEAKER: So, my colleague Willem actually wrote some code to do that; I think they did something for the getdns API to do the authentication on the stub side. There was an IETF draft to tack the DANE stuff onto the TLS handshake, but that sort of got torpedoed at the last minute.
There is some work on revisiting that, that is all I can say. But send Willem an e‑mail offline.
AUDIENCE SPEAKER: Brian Dixon, GoDaddy. Just a follow‑on point. Standardisation is not required to actually do an implementation, so please implement. And DANE gives you the ability, on the client, on the stub, to bootstrap the whole authentication process, the validation of the certificate chain, through DNSSEC validation. I'm a big believer in DANE, please do DANE, and a big believer in DoT, so I encourage DoT. Thanks.
CARSTEN STROTMANN: I second that.
AUDIENCE SPEAKER: Tim. Thanks for your research. I'm quite amazed by the number of projects which are out there, and here comes my question: did you actually run all the projects and check that they worked? Because just having contributions to the code doesn't necessarily mean that it's actually doing something that makes sense.
CARSTEN STROTMANN: If you look at implementations.html, it lists the projects and all the operating systems, and where there is an X, I tested it and it worked, at least at some point in time. Of course this is volatile, it's GitHub; it might break today. If there is a question mark, it means I haven't tested it. So if anyone finds a project there with a question mark for some operating system and you got it running, tell me and I can put an X there. If there is a minus, it means it doesn't work.
AUDIENCE SPEAKER: Right. Thanks for your efforts.
AUDIENCE SPEAKER: Matthijs, ISC. So, yeah, BIND also has DOH on its road map, and I am not a big fan of DOH. So, why do you implement it, you asked Petr. I think DOH is happening, right; your research shows that there are DOH projects and there are major providers providing it, and I think the worst situation would be that the DNS operators, the users of this, don't have the choice to turn it on if they want to. So that's why.
CARSTEN STROTMANN: Good point.
AUDIENCE SPEAKER: Hi. Eleanor, from the RIPE NCC. I have a question from a remote participant. Christoph from the Foundation for Applied Privacy would like to know: do you have operational experience with any of these projects?
CARSTEN STROTMANN: Yes. Actually, on my DoT/DOH server I run Unbound for DoT, and for DOH I have two of these Open Source projects that I switch back and forth between, and I have tested them. I think one was SDNS; the other I can't remember, I would have to look it up. I have tested all of the projects in the list that have an X, at least briefly, and some run on a real production server, at least Unbound and two of the ones written in Go. And I have a couple of laptops that I carry around to conferences; on each laptop I have a different project installed, and I work with that while I'm in hotel networks and other networks, to test whether it breaks somewhere.
JOAO DAMAS: Any other questions then?
AUDIENCE SPEAKER: Christoph has another point, a second point: we will share some of our experience at the DNSheads meetup in Vienna.
CARSTEN STROTMANN: I would like to, if that fits my schedule, yes. Send me an e‑mail; my e‑mail address is on my website.
JOAO DAMAS: Thank you, Carsten. Go on and tell us about the next one.
CARSTEN STROTMANN: Unwind is one of the projects that we have seen there. It's a local DNS resolver aimed at laptops and mobile devices, currently running on OpenBSD, because it has been developed on OpenBSD and currently only works there, but there is no reason why it shouldn't work on any other operating system; it just needs a little bit of tweaking. It does DNSSEC and it does transport encryption with DoT; there is no DOH. It has captive portal detection, so it can figure out whether you are behind a captive portal in a hotel or in a cafe, and if it detects that you are, it first uses the DHCP‑delivered DNS resolvers of that network in order to get past the captive portal. Once that is done and the way to the Internet is free and open, it switches back to whatever is the preferred way of doing resolution. It has a very defensive design, using pledge and unveil, which are two technologies from OpenBSD whereby processes can give up power while they run, so that this power cannot be misused. So even if unwind starts as a privileged user, it then gives up those rights, and also the right to call certain system calls.
The first release was part of OpenBSD 6.5, which was released in April this year.
Unwind has been developed by Florian; he has been an OpenBSD developer since 2012. It's not my work, it's Florian's work, and Florian is also the author of many, many good tools like these here.
This is the architecture. It's not one process that is running there, but actually four. There is a front end that talks to the network and receives DNS requests; there is the parent, which is the father of all the sub‑processes, takes care that everything is working, and does the communication; there is the captive portal process that tries to figure out whether we are behind a captive portal or not; and then there is the resolver, which does most of the work.
And there is privilege separation: each process running there has very few duties and is restricted. That means if there is a bug, not the whole system is compromised, just the one component. So, for example, the front end that talks to the network usually can't read or write the files of the operating system. This is the usual attack vector, something coming in from the network; so if the front end is compromised, that does not necessarily mean that the attacker can then access the files on the drive.
The DNS resolver's main work is done with libunbound, so it's not a completely new DNS resolver written from scratch, because it is really, really hard to write a good resolver from scratch. So this is relying on libunbound.
The captive portal check looks whether there is a web server on some IP address or domain name in the Internet, and it checks the response coming back against some string that is defined in the configuration file. If it is correct, then we are free. If something else comes back, then we are possibly behind a captive portal, the captive portal mode kicks in, and then the DHCP-supplied resolvers are being used.
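The comparison step described here can be sketched as a tiny pure function. This is a hypothetical illustration of the logic, not unwind's actual code (which is a C daemon), and the function name is made up:

```python
def is_captive(body: bytes, expected: bytes) -> bool:
    # A captive portal typically intercepts the probe request and
    # returns its own login page instead of the expected body, so
    # any mismatch is treated as "possibly behind a captive portal".
    return body.strip() != expected.strip()
```

For instance, a probe that returns the configured string compares equal and we are free, while a hotel login page compares unequal and triggers captive portal mode.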
Unwind also monitors the DNS resolution quality and the quality of the network, and it can switch between the different resolution modes dynamically. It can either do direct recursion, asking authoritative name servers in the Internet directly, or it can make use of any of the DHCP-supplied DNS resolvers, or it can use a forwarder over classic DNS, which is DNS over UDP, or it can use DNS over TLS. And you can configure the preferred resolving strategy and the order in which you want to have that. So, for example, if you don't like that it is doing unencrypted DNS, that is classic DNS over UDP, you can remove that from the configuration file and just have DoT in there. And then it will either do DoT or nothing. Of course it can break, but that's your choice.
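The fall-through between resolution strategies might be sketched like this. Hypothetical Python with made-up names (`strategies`, `resolvers`); unwind itself implements this in C:

```python
def resolve(name, strategies, resolvers):
    # Try each configured strategy in preference order, e.g.
    # ["DoT", "DHCP", "recursion"]; `resolvers` maps a strategy
    # name to a callable that returns an answer or raises.
    for strategy in strategies:
        try:
            return resolvers[strategy](name)
        except Exception:
            continue  # this strategy failed, try the next one
    raise RuntimeError("no configured strategy could resolve " + name)
```

If you configure only DoT, as in the talk's example, an unreachable DoT server simply means resolution fails rather than falling back to unencrypted DNS.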
Unwind works nicely without a configuration file, but if you give it a configuration file you can change the defaults. Like here in the example: this defines a captive portal check. It checks http://detectportal.firefox.com and the expected response is 'success' followed by a line feed. If a different response comes back, then it thinks it's behind a captive portal. And then we have a forwarder configured here, it does authentication against the configured server name, it does DoT on that one, and the preference is to do DoT in this case and not do any other resolution.
There is a user space command line tool called unwindctl, and that can be used to remote-control a running unwind process. For example, "unwindctl status" gives us the status, whether there is a captive portal and how the resolution is being done. In this case it's doing DoT, because the little star is in front of DoT, but it could also do recursion or it could use the DHCP-supplied DNS resolvers. It works great for a first version, but the work is not finished. There are plans to also be able to use DNS resolvers that are distributed by router advertisements. Support for split-horizon DNS, where there is some private DNS in a company that is not delegated from the Internet DNS, so that unwind can detect that and send some queries to the local name servers and other queries to the Internet. DNSSEC validation currently is opportunistic, meaning that it uses DNSSEC validation if it is there, but if it breaks it currently falls back to not doing DNSSEC validation, which is not optimal, and that should change so that it does strict validation. And currently captive portal detection must be configured, and the plan is to have a built-in captive portal URL that is checked.
So if you want to have more info, install OpenBSD and read the man page. If you don't want to do that, there is a presentation with more information from Florian from BSDCan, which was two weeks ago, and there is also an HTML version of this presentation if you like.
AUDIENCE SPEAKER: Andrei: This reminds me a lot of a very old project called dnssec-trigger, as far as I remember. And back, like, ten years ago, I was trying to use this project regularly, and I discovered some strange issues with trying to validate wildcard DNSSEC responses. So I would maybe suggest -- in opportunistic mode it doesn't matter, because if it doesn't validate, it just goes through, so I would say it's far from optimal, I'd say it's usually useless in this case. But in case of strict validation, this would make serious issues, and it turns out that there are lots of zones broken in a way that they validate for standard queries and they don't validate for wildcard queries, or even worse, they don't validate for nonexistent wildcard queries. So all those things should be somehow tested. Maybe it will be done better than it was back in the --
CARSTEN STROTMANN: Yeah, dnssec-trigger was based on Unbound. Unwind is based on libunbound. Either it is already fixed in Unbound, or else it probably has the same problem, but yes, we will test it. Thank you.
AUDIENCE SPEAKER: Warren Kumari, Google. Apologies if I missed this, but there is a DHCP option which tells you that you are behind a captive portal. I don't know if you watch for that, and if not you might want to.
AUDIENCE SPEAKER: It's a DHCP thing that says this is a captive portal that lives over there.
CARSTEN STROTMANN: Oh okay.
AUDIENCE SPEAKER: And there is going to be an update of that soon which is less ugly.
CARSTEN STROTMANN: Very good to know. I will forward that information to the author. Thank you.
JOAO DAMAS: Okay. Thank you very much Carsten.
ROLAND VAN RIJSWIJK‑DEIJ: I work for NLnet Labs. This is a bit of curiosity-driven research, and I hope to entertain you for the next 15 minutes with that, because it was sort of a deep dive into some very, very detailed DNSSEC specifics called key tags. A very brief introduction.
If you do DNSSEC validation, you need to be able to match the signatures that you need to validate with the DNSKEYs that are in the zone. And to enable fast matching, so that you don't have to check each and every signature against each and every key, there is the notion of key tags, which is meant as a way to help you find the right keys. These are 16-bit values and they are only a hint, right, so you can't just rely on the key tag to find the correct key. But it can help you speed up your validation process.
Now, some years ago already, Roy Arends from ICANN presented at a DNS-OARC meeting on the curious case of the unused key tags. Because it is a 16-bit unsigned number, you would expect to use all 65,536 possible values, but it turned out that if you randomly generate keys you don't use all key tags; in particular you use anywhere between 16,384 and 32,768 key tags, and this was due to the mathematical properties of RSA keys and how the key tag algorithm works. And with some generous support from the community -- amongst other people they helped Roy with this -- they could explain why certain key tags do not occur in theory, but what we wanted to look at is whether they occur in practice or not. And maybe if we could draw some lessons from this about protocol design.
For those of you who are a little bit more familiar with the details of DNSSEC, this is a quick refresher; this is the algorithm. Basically what you do is an accumulation of all the bytes in the RDATA of a DNSKEY record: in the low 8 bits of the number you add up the odd bytes, and in the high 8 bits the even bytes. Now, the outcome of this algorithm is of course highly related to the information that is in the data, because you are just basically adding up numbers. So the output will not be random at all if you do this. That's not necessarily a problem. But it also explained why certain key tags were not getting used if you randomly generate keys.
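The accumulation just described is the key tag algorithm from RFC 4034 Appendix B; a direct Python transcription, operating on the raw DNSKEY RDATA bytes, looks like this:

```python
def key_tag(rdata: bytes) -> int:
    # RFC 4034 Appendix B: bytes at even offsets are added into the
    # high 8 bits of the accumulator, bytes at odd offsets into the
    # low 8 bits.
    acc = 0
    for i, b in enumerate(rdata):
        acc += (b << 8) if i % 2 == 0 else b
    # Fold the carry above bit 15 back in, then keep 16 bits.
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF
```

Because this is plain addition over the RDATA, structure in the input (RSA moduli, the fixed flags/protocol/algorithm bytes) carries straight through to the output, which is the effect being discussed.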
So, RSA keys have a lot of structure. I want to skip all of the details, but for example, if you take an RSA modulus, that is always an odd number, because you're multiplying two primes. If you use something called safe primes, you further reduce the search space, which limits the number of key tags that you can generate. And other things that are included in the calculation are the flags, the protocol version and the algorithm, which also means that part of your data is already known beforehand. And like I said, either 16K or 32K of the available 64K space is used for key tags. Right. We wanted to see what happens in practice, so -- as I presented earlier this week in the Plenary session -- we have data on DNSSEC deployment. So we looked at the data for .com and .nl in this case, and what we wanted to look at is what happens in the wild: which key tags actually get used. What we expected to find is that certain key tags are much more common for RSA keys than others, that we do actually find key tag collisions in the wild, and that the occurrence of key tags for ECDSA would be much more randomly distributed, because there is far less structure in the curve keys than there is in RSA keys; in fact, an elliptic curve key as it is used in the DNS is indistinguishable from something generated uniformly at random in most cases.
Right. So, these are heat maps for all of the key tags that we found in the .com and .nl datasets, and I hope that I have convinced you that there is structure in this information, right? If this had been uniformly random, it would have looked like noise, but clearly it doesn't. And you can even see some purple banding in there; those are key tags that never occur in the wild. But what you can also see, from the filling of this -- so this is 256 by 256, and you can compute the key tag from that, so this is effectively a heat map of all of the key tags that exist -- is that more than just the theoretically predicted key tags are actually used in practice, so that's interesting.
Right. How is this distributed if you plot a histogram? What you can see here for .com is that there is a lot of structure in there. Certain key tags occur much more frequently than others, but there are also key tags, here on the left-hand side of the figure, that only occur once in the whole dataset. The most frequently occurring ones occur 75 times more often than some others.
If you look at .nl, it's even more structured. You see three peaks in that graph, and those are probably related to certain key generation software and to the use of certain algorithms, because the algorithm is also included in the key tag computation.
And remember that I said for ECDSA we expect to see something much more random, because ECDSA keys are virtually indistinguishable from uniformly random data. As you can see, that's the case. This looks like noise, which means that it's much less likely that you get a key tag collision, because the key tags are much more evenly distributed over the space.
And if you look at the histogram, this looks much more like a Gaussian distribution -- it isn't, because of the long tail, but it looks like it, so it's pretty close to it. Okay.
So, what we did next was go looking for actual collisions. And we did this in the .nl domain, because that has by far the largest number of signed domains of any of the TLDs on the Internet. We took three years of data, we took all of the RSA keys from that, we found all of the unique RSA keys in there, or rather the unique key sets in there, and we computed the key tags for all of the keys in those key sets and looked whether we found any collisions. We looked for two types of collisions. What we call real collisions is where you have different keys that have the same algorithm and the same key size that compute to the same key tag: they are two different keys, but they compute to the same key tag. What that means for a resolver is that a resolver that sees this will have to try both keys when it's validating a signature, and there is a 50% chance of course that the first key is the right one, but if it isn't, it has to do an extra computation. And then what we call semi-collisions is where there are two of the same key tags in a DNSKEY set, but the keys actually have a different size or even a different algorithm. Now, if they have a different algorithm, you should be fine, but if they have a different size, most resolver implementations don't actually look at the size and will still try to validate with all of the keys that match that key tag.
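The matching behaviour described here -- the key tag being only a hint -- can be sketched as follows. The tuple layout and names are hypothetical, not any particular resolver's code:

```python
def candidate_keys(dnskeys, sig_key_tag, sig_algorithm):
    # Every DNSKEY whose (tag, algorithm) pair matches the RRSIG must
    # be tried in turn; on a "real" collision, two distinct keys match
    # and the validator may need an extra signature verification.
    # Each key is modelled as a (key_tag, algorithm, key_data) tuple.
    return [k for k in dnskeys
            if k[0] == sig_key_tag and k[1] == sig_algorithm]
```

Note that the filter deliberately ignores key size: as the talk points out, most implementations do the same, so a semi-collision with differing key sizes still produces two candidates to try.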
So, collisions over time: as you can see there are very few collisions, roughly 60 collisions in the dataset, and about half of those are real collisions and the other half are semi-collisions. So it's not a lot of collisions that we find in the dataset, but we do actually find them.
So, how rare are collisions? Because of the way the key tag is computed and used, and because of the way keys are generated, you would expect the birthday paradox to apply. And that means that you can compute the theoretical probability of a collision occurring, and on the right-hand side I have put the actually observed probability. So, the algorithm is actually doing slightly better than what you would theoretically expect. Now, one of the reasons for that may be that certain implementations, when they generate keys, actually contain a little mistake, which means that if the same key tag gets generated twice, the first key gets overwritten, the key set becomes unusable and they generate a new key set. So that may actually explain some of these numbers.
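The birthday bound mentioned here is easy to compute. A sketch, where `tag_space` would be the effective number of reachable tags (roughly 16,384 to 32,768 for RSA, per the talk, rather than the full 65,536):

```python
def collision_prob(num_keys: int, tag_space: int) -> float:
    # Probability that at least two of num_keys key tags collide,
    # assuming tags are drawn independently and uniformly from
    # tag_space values: the classic birthday-paradox calculation.
    p_all_distinct = 1.0
    for i in range(num_keys):
        p_all_distinct *= (tag_space - i) / tag_space
    return 1.0 - p_all_distinct
```

For two keys in a 16,384-tag space this gives about 0.006%, consistent with collisions being rare yet still observable in a dataset as large as three years of .nl.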
So does this have a real-world impact? Because so far it's been a curiosity-driven exercise -- I really had fun as a researcher -- but what does this mean for the real world? Collisions can actually have a real impact, because if you have to do extra cryptographic operations to, say, validate material, then you can have a resolver that has to do slightly more work for every domain for which a collision occurs. Now, if this is a don't-care domain that nobody ever goes to, that's not an issue. If this is a popular domain and you are forcing resolvers that have to resolve a lot of names in those domains to do extra validations, then this can have a real-world impact. And one of my students suggested to me that you could use this for a denial of service attack, by generating a key set with lots of keys in it that all collide.
So that's that. The question is, of course, could we have done better? Because we wanted to see if we can learn something from this for protocol design. The key tag algorithm appears to be not completely optimal for its purpose, because not the entire space that you have available is used for key tags. So, what would happen if we switched to something that has a uniformly random output regardless of the input -- something like a hash function? Now, a hash function would be optimal for this, but it is a very expensive algorithm to use, whereas the key tag algorithm itself is highly parallelisable, simple to implement, and doesn't cost a lot of CPU time. What if we pick a middle ground and use something like CRC16? It doesn't use a lot of CPU power, is easy to compute, and gives a much more random output.
Right. So, this is again key tags for .com with the existing algorithm. On the right, for exactly the same set of keys, is what would happen if you use CRC16. It looks completely random. Fantastic. So uniform, such random. This is the distribution. That looks almost like a Gaussian distribution -- again, it isn't, I did some maths on it, but it is pretty randomly distributed. Right. So, job done. We should change the algorithm. Great.
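The talk doesn't say which CRC16 variant was tried; as an illustration, a bitwise CRC-16/CCITT-FALSE (polynomial 0x1021, initial value 0xFFFF) could replace the additive accumulator:

```python
def crc16_ccitt(data: bytes) -> int:
    # Bitwise CRC-16/CCITT-FALSE over the DNSKEY RDATA: a cheap
    # function whose output is spread far more uniformly over the
    # 16-bit space than the additive key tag algorithm's output.
    crc = 0xFFFF
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

Despite the nicer-looking distribution, the talk goes on to explain that the measured collision rate with a uniform replacement like this actually turned out higher.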
Is it actually better? It turns out it isn't. We get more collisions. Now, this is partly due to the fact that some key generation systems actually solve the problem of collisions by not having any collisions, because of a bug. But also, because this is uniformly randomly distributed, it follows the birthday paradox. So the observed probability almost matches the theoretical probability. There are some slight discrepancies, and that is due to the discrete nature of the dataset, and you should ignore the lower two because there is not enough data to get a good estimate, but for two and three keys in the key set the probabilities are almost what you would expect to see if it was uniformly randomly distributed, and that is higher than what the current algorithm does.
Open questions: like I said, the unspoken assumption for the empirical data that I'm presenting here is that somebody is not already filtering out collisions, and it is likely that some of that will be happening.
For example, LDNS: if you use LDNS to generate your keys, it will overwrite the old keys, so you will automatically drop collisions. BIND has the same thing. Another question that I have is: can we fingerprint crypto libraries that are used to generate keys based on the key tag space that we see? Because Roy actually figured out that he could fingerprint OpenSSL from that, which is interesting.
So, I'm almost -- my time is almost up. What did we learn? At first glance, the original algorithm seemed suboptimal, because really, if you're picking something like this, why not go for something that is randomly distributed across the space? Why have something that has so much structure in its output? But if we chose something "better", we would not actually be doing better. So it would be a shame to go through all of the trouble of changing the protocol just to make it worse.
And in Dutch we have a phrase for it: the people that designed the original algorithm were unconsciously competent. They picked something that turned out to work. So, I haven't asked whether somebody here actually contributed to the original algorithm, but I would like to know whether any of this was considered -- whether any of the considerations were about whether we are going to distribute across the output space or not. I don't think that was actually considered, because I looked through the history of the RFCs and didn't find any evidence of it.
Now, if you are an operator or an implementer: because a collision can actually have an impact on validators, why not generate a new key set if you find there is a collision? Because what the maths shows, if you look at the slides, is that the probability that you get another collision if you generate another key set is almost negligible. And you are actually helping validators out there. So why not do that? Especially if you have a popular zone, this is worth doing.
With that, if there are any questions, let me know. Hey, Warren.
AUDIENCE SPEAKER: Warren Kumari, Google. So if you go back to one of the slides with the RSA pictures -- some of that, I believe, is caused by the fact that OpenSSL and GnuTLS generate their keys in a particular way: OpenSSL and GnuTLS set the least significant bit, because the primes have to be prime, and they also set the two most significant bits -- it's a little optimisation, so they know that if they multiply the primes together they will always have the high bits set in the product. That causes at least some of the difference. Also, you can quite easily fingerprint which SSL library somebody is using from that.
The other thing is -- I'm not sure if it's actually written down, but there is an agreement within ICANN that if the root KSK ever generates a key tag that has been used before, they are going to throw it away.
ROLAND VAN RIJSWIJK‑DEIJ: And there are TLDs that do the same but there are also TLDs that don't.
AUDIENCE SPEAKER: Hi. Shane Kerr from Oracle Dyn. I thought the same thing that you mentioned about DDoS, a denial of service possibility, by putting lots of DNSKEYs with the same key tag, but then I thought there are plenty of other ways that you can design a zone which will cripple resolvers, like switching to a curve or putting in a high number of iterations and things like that. Maybe we need to start a list of ways you can break resolvers with DNSSEC.
ROLAND VAN RIJSWIJK‑DEIJ: That's a fair point. There are other ways in which you could overburden a resolver. But I think it is a relevant point that there are different ways to do this, because it means you have to protect against all of them. And -- my student did a bit of maths on this -- it is viable to do this. It might take you a bit of time to generate all of the keys and get more collisions. But of course, you only have to do that once as an attacker; you could just reuse that key set. You have to be a little bit patient, and the more collisions you want in your key set, the more patient you have to be. But it's certainly not impossible, and the collisions already occur in the wild without any mal-intent.
JOAO DAMAS: So, change of roles here. As you might be aware, given the high volume of talks about the KSK rollover over the last three or four years, ICANN just recently rolled the root KSK, but since we have been having this discussion for a long time, we thought it would be good to do a wrap-up so that everyone is aware of what's going on.
So first, in case you forgot, an explanation of why we had to roll the key at all. It was definitely not because of any operational problems or compromises of the key or anything like that. It was basically because article 6.5 of the ICANN DNSSEC Practice Statement stated what's written there: that the key would be changed whenever it was required, or after five years of operation. Even that triggered some discussion, because "after five years" for some people meant at the five-year point, and for others it meant some time after five years had elapsed, which is actually what happened.
So, a quick summary of the timeline. There was a first attempt, after many, many discussions, to roll the key on 11 October 2017. Just before that was about to happen, there were some signals from measurements that triggered concerns that things might be going wrong, so the rollover was postponed. It turned out it was postponed for exactly one year. During this one-year period, the new KSK, the one that is currently used, actually remained available in the root zone at the same time as the old one, so there was a bit of a bigger response to some queries there. So last October the key was finally rolled, and this last January, a few months ago, the old key was revoked. Both these events actually triggered some incidents around the network.
Some observations that were put forward as this process went along. Even after all the discussions, back and forth, trying to observe what was going to happen and predict what would happen, there is still today a shortage of reliable data regarding what is out there. One of the RFCs that was originally published with the intent of providing this data gave some unclear signals. There was a further evolution of this, called the sentinel, but it was actually published as an RFC after the key rollover was done, so it didn't get deployed in time to provide information for this rollover. We'll see about the next one, if there is a next one.
Contrary to what is the general claim, not everything actually went smoothly. One of the examples that people cite is the ISP in Ireland, Eircom, that suffered a big outage for their broadband customers. Now, this also triggers a new avenue of discussion. This ISP represents about 20% of Ireland's Internet users. All together, though, from a whole-Internet point of view, it's 0.022%. I think some discussion needs to happen about the threshold for "no problems happened", because I don't think the Irish people were particularly happy when it happened, even if they only show up as 0.02% of the Internet.
So, what now, now that the process is complete? It's not like the DNSSEC people are going to stand still, right? They never do. So, I would like to point you to a recent presentation by Paul Hoffman at the ICANN symposium; there is a lot of detail there about what was observed and what the open questions are. There is also a mailing list that has been set up, by the name of ksk-rollover; you can go to the URL that's there. And this is the designated place, a common space to discuss what to do next, if anything.
So, feel free to contribute; give an opinion. Your opinion is as good as anyone else's. Just a list of things that have already been put forward: for instance, do we need another roll? If we do, what would be the frequency of those future rolls, and when? Should we wait five years? Do it sooner? Later? And of course, as soon as you ask for questions, curious cats that we are, we open all sorts of avenues of discussion. So there are also questions about: should we keep the algorithm? Should we keep the size? Should we use something different?
So with that, that's the overview. Things are still working. We had a few hiccups, but there is a need to think about what to do next. And we wanted to make you all aware of this discussion. If you have anything to contribute or ask, feel free; there are plenty of people here who can answer.
No, okay ‑‑
ROLAND VAN RIJSWIJK‑DEIJ: I have a question for the K-root people in the room; I don't know if Anand will want to comment. One of the things that happened after the key was revoked is that there was a huge uptake in DNSKEY queries to the root, up to a point where about 10% of the queries to the root were DNSKEY queries, and that is a significant number. And I was wondering -- I have heard from some root operators that they saw this, that their systems could manage because they have the capacity and so on -- whether somebody from K-root wanted to comment on what that was like for K-root, because I looked at the data and about 8% of the traffic was DNSKEY queries, so...
ANAND BUDDHDEV: Yes, so K-root, like some of the other root servers, also saw an increase in DNSKEY traffic. However, our operations were not affected by this. We have enough capacity, and this was not anywhere near capacity or anything. We ourselves haven't done any analysis on this; some of the others have, and I think some people have seen presentations on that. But yes, we did see it, but it did not affect operations.
JOAO DAMAS: Before you sit down, Anand: the results that Duane presented, he was looking at A and J root, and they didn't show the rate abating at all.
ROLAND VAN RIJSWIJK‑DEIJ: After the key was removed from the zone, so after the revocation was done, the rate dropped back to pre‑‑‑
JOAO DAMAS: Then it started going up again.
ROLAND VAN RIJSWIJK‑DEIJ: And I know, because I talk to Duane on a regular basis, that it's not going up dramatically, as far as I know.
Can I ask Anand a follow-up question? If you don't want to answer, just let me know. But I wonder at what point this would become problematic for you as a root operator. Can you comment on that? Because these were very large responses, so the amount of traffic coming out of the root -- while maybe the query load was okay, I would expect the amount of traffic to start deviating significantly from normal levels because of the size of the response. Could you comment on that?
ANAND BUDDHDEV: Well, I can tell you that we have monitoring in place for our traffic, and it starts alerting us when the router ports are at 80%-ish; then we would be getting worried, but we did not reach any such levels.
AUDIENCE SPEAKER: Lars-Johan Liman from Netnod. You have to remember -- I think it's a couple of months since I saw the figures -- but I think what we are referring to is the rise in the queries for DNSKEY records. That is a very small fraction of the entire number of queries. In the big scheme of things, these are very small variations, and the big number of things is still a very small portion of the total capacity of the system. So this was barely noticeable on the big charts.
AUDIENCE SPEAKER: Warren Kumari: So yeah, we also saw it; I think all of the root servers saw it. Yes, it wasn't that much traffic in the grand scheme of things, but it continued to climb, and initially we had no idea why. At some point it could have ended up being a hundred percent of the root server traffic, at which time we would have definitely started panicking. But yeah, I mean, there is enough headroom to deal with DoS, although any additional traffic takes away from available DoS capacity. So there was some concern. We think that we know what it was, and that has been shared. But it still seems to be an active bug in certain versions of code which have been released.
AUDIENCE SPEAKER: Roland, while you think that 10% is a lot -- public statistics from the root server operators say that in the last 16 months normal traffic to them has doubled, okay. So this is nothing.
ROLAND VAN RIJSWIJK‑DEIJ: That's very reassuring.
AUDIENCE SPEAKER: Jim Reid. A slight change of topic, because of the issue of the next key rollover -- something I have made a point of before, and I'm just going to repeat it. We should try to get one of these key repositories outside the United States, and I think it's very appropriate making that point when we're here in Iceland, because if you remember, about five years ago when that volcano erupted, north Atlantic travel was stopped, and if we have another similar incident like that, it would be hard to get the trusted representatives together for a key signing ceremony. I think it would be good if ICANN could sort out getting a key repository outside of the United States, just to give us additional diversity.
JOAO DAMAS: You could have it simultaneously in North America and Europe. Thank you very much. Next is Dave with a summary of the OARC work.
DAVE KNIGHT: I am Dave Knight. I work at Neustar. I am also a member of the OARC Programme Committee. In the past we have done updates from the IETF, and we thought, because this meeting and the accompanying ICANN DNS symposium were quite far away and probably a lot fewer people from our community went out there, we'd do an update from that. So, this was billed as an update from the OARC meeting.
But I decided to also include a summary of notes from the IDS meeting too. So, about a week ago, ICANN had a series of meetings in Bangkok, one of which was the DNS symposium, which I went to, and the OARC meeting was co-located in the same venue. Across these two meetings, each of which was two days, there were 47 presentations, five lightning talks and two panel discussions, and surprisingly, no talks were duplicated between those meetings, and none of them showed up here either. So there is a lot of stuff to get through. And everything was really great. So I have written a summary of all 47 things, which I will now laboriously relate to you.
Unfortunately, the good discussion that we just had has pushed us a bit over time, so I am going to skip very rapidly through this to the end, and those of you who are interested in the summary can download it from the web page. I will instead say: it was all really good, you should check it out. Here are links to the agendas, where you will find the webcast archive and the slide decks. And on the back of that, Keith has asked me to mention a couple of notes about upcoming OARC things.
The next meeting is going to be held in Austin, Texas, in October, over Halloween, next to NANOG. And next year, OARC is going to change to doing three meetings a year: it will continue to have its two two-day meetings, but it's going to add a one-day meeting earlier in the year, in February. And the intention, going forward, is to more routinely co-locate these with NANOG, RIPE and the ICANN DNS symposium.
And that's it for my summary. Thank you.
I was going to assume there are no questions.
AUDIENCE SPEAKER: Brian Dixon. Since you are on the Programme Committee, I was wondering if you have a comment on the off-by-one between the numbers of the DNS-OARC meeting and the Indico site.
DAVE KNIGHT: That is a good question, because between the last two meetings I wrote a script to pull the presentations down from the web page to the presentation laptop, and I was quite confused when I showed up and found that off-by-one for myself. I have no idea why that happened.
AUDIENCE SPEAKER: Hi, Denis from DNS-OARC. I just want to give some details about the meetings next year. The one in February is going to be beside NANOG in San Francisco, and the one in May is going to be beside the ICANN DNS symposium. I know where it is, but I'm not sure whether I can actually say whereabouts it's going to be, because it's not confirmed yet, but it should be in Europe.
AUDIENCE SPEAKER: Matthias, I just want to say best summary ever.
STENOGRAPHER: I agree. You should all learn from this. My favourite speaker to date.
JOAO DAMAS: Unless anyone has any other business of course.
AUDIENCE SPEAKER: Actually I do have one other business, and that is I must also say, best stenographers ever.
STENOGRAPHER: Thanks. I am bowing! I just can't stand up.
JOAO DAMAS: So, just in closing, remember, for those that are going to the RIPE dinner, that is tonight. And tomorrow at 6 p.m. in this same room, there is going to be some broker-on-broker action, so if you feel like it, get some popcorn and sit in the back; it should be entertaining. Thank you very much. See you at the next RIPE meeting.
LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC