Open Source Working Group
Wednesday 22 May 2019
2 p.m.:
MARTIN WINTER: Good afternoon everyone, you are here at the Open Source Working Group. My name is Martin Winter, and this is Ondrej Filip, we are the chairs of the Working Group.
ONDREJ FILIP: I think the people planning the meeting did a great job because they put us just after the lunch. At a normal boring Working Group the people are sleepy after lunch, they are not so excited, but this is the most popular Working Group ever at a RIPE meeting, so I'm sure you'll be excited and awake. So welcome to the Open Source Working Group. Martin is going to introduce the agenda.
MARTIN WINTER: Okay. So, let's go to the agenda we put together today. We have a few administrative matters first. Afterwards we have an interesting discussion by Wolfgang and Sander about Open Source based labs, where people put stuff together with Open Source, interesting labs and what they can do. Then we go into high performance traffic encryption, a lot of cool Open Source stuff in there too; that's by Max. And afterwards we have one very last minute lightning talk which was put in there, with updates on BIRD, and that will be Maria doing that. Then we have AOB if there is anything at that time, and at the very end we have kind of an open end in case we run a bit over: an interesting round table discussion with Charles and Mirjam talking about hackathons, basically about the communities and the impact. It's less about the results; we're not talking about that part, it's more about how these things are organised, the goals behind them and all that.
ONDREJ FILIP: Okay. So, that was the welcome, and are there any comments on the agenda or can we take the agenda as approved? Thank you. So, agenda approved. Also, one more administrative thing. The minutes from the last meeting were published on the website. We haven't received any comments, so this is the last moment you can comment on them, otherwise they will be finalised and approved. I don't see anybody, so, perfect.
And the last administrative thing, actually one more. We have two volunteers, or sort of volunteers, who are helping us: that's Anand, who is scribing, and we have Romey who is monitoring Jabber, thank you very much. And of course, if you want to comment, don't forget your name and affiliation; comments are more than welcome. Unlike the Connect Working Group we do not promise you any free alcohol for comments, we are not so rich, but anyway you are more than welcome to come to the mic and speak. So thank you very much.
CHAIR: With that, let's ask Wolfgang up, he will start with the Open Source Docker based lab.
WOLFGANG TREMMEL: Hello. I am Wolfgang Tremmel, I run the DE‑CIX academy. So what's that? The DE‑CIX academy provides training regarding peering and using our services, to customers and to, let's say, the general public. Everything I do at the moment is free, it's open, everybody can join and watch me. I started this two years ago, and if you start from scratch, you have to generate content. So what I did is I set up a series of webinars, so at the end I now have eight webinars around BGP, and then I had the idea: let's turn these webinars into a classroom seminar. And if you do a classroom training, of course you want your students to also have some practical experience. So, the task I had was to create a BGP router lab, and it didn't really matter which router because I wanted to teach generic BGP. The number of participants is usually unknown, it's up to 20 let's say, and the trainees should be able to set up routers, connect them to each other, do peering, set up BGP sessions with each other and basically any kind of experiment you need to teach someone BGP.
And for the trainer, it's really important a fast and easy switching between the experiment. If you are standing in front of a class, you do not want to have to fiddle around with details like how to set up the next experiment. So, what kind of experiments I would like to do.
The first thing is iBGP related. I want to have a ring of routers, each participant gets one router. They are physically connected in the ring. Everybody gets IPv4 and IPv6 addresses, they set up an IGP, like OSPF or IS‑IS, and they set up iBGP.
The second experiment I would like to do is eBGP related. The dark blue routers are the routers of the students and the light blue routers are basically provided by the lab itself. Everybody gets two upstreams and they get peering with each other. And of course it is 2019, so it's IPv4 and IPv6.
This was the task, so what options did I have to achieve that? Of course there is GNS3, everybody knows that; Sander is going to tell you a lot more about GNS3 in the next presentation. It's a fantastic network emulation environment. You can do basically everything you would like to do. You can emulate every router, given you have the software. But it's kind of resource hungry, and with up to 20 participants you need quite a back end for that. Not really what I need.
Then, Raspberry Pis running Quagga or FRRouting; I actually tried that. It works, but the risk here is that you have to spend too much time figuring out why a certain Raspberry Pi suddenly isn't booting any more or whether the memory card has been wrecked or so. So there is a certain amount of risk here, but I could achieve my task with that.
Then let's use virtualisation. Do I really need this? No, I just want to emulate one tiny little router.
So, do I really need Cisco or Juniper if I want to teach generic BGP? Also, no. There is plenty of Open Source router software out there. I had a look at Quagga, at BIRD, and I finally settled on FRRouting because it has a nice user interface; most people perhaps have worked with Cisco before, so FRRouting is quite easy if you have some command line experience.
And how do I connect that? Well, have you heard about Docker? Yes, I guess you have. Docker is a lightweight virtualisation method, and you can do a lot of automation using a part of Docker called Docker Compose: you can pre‑define your network, pre‑define your setup, and you can scale it to the number of participants, because if you want to build a ring for your training, it makes a difference whether there are 10, 11 or 12 students, so you need to switch between the number of instances you're emulating. And there is already a Dockerfile in FRRouting, so thank you for that, and on that I could build. I have added ttyd so my participants don't have to use special software, a web browser is enough. I got the idea from the RIPE NCC guys to have basically the command line in the browser, and ttyd is a good solution for that. And IPv6: I spent a whole week setting up IPv6 inside Docker Compose and Docker, and I have no idea why the Docker guys made that so hard for me. So... but I managed.
So, for everybody who has seen Docker before, this is the Dockerfile. It's pretty simple. Basically, it takes the FRRouting Docker image and adds a couple of things to it. It makes the configuration directory accessible from inside and outside the container, so the trainer can see what the students have saved and the trainer can provide example solutions. And I add a couple of files to get the whole thing started. And yes, it's a little bit ugly, but it works. And then I used Docker Compose. Docker Compose basically gives the framework for how the Docker containers act together. I'm using a script here to generate all the config files as text files, and I give the script the number of students and it generates my Docker Compose file. It also generates the ExaBGP configuration if I need it; I'm using that for the host based routers. And then I just fire up the whole thing. And that's the Docker Compose file, that's how it looks. The hardest part here again was trying to figure out how to integrate IPv6; the documentation of Docker and the reality sometimes differ a bit.
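Wolfgang's actual generator script is not shown in the transcript, but the idea of scaling the Compose file to the number of students can be sketched roughly like this. All service names, the image name and the port scheme here are invented for illustration, not taken from his lab:

```python
# Hypothetical sketch: emit a docker-compose file with one FRRouting-based
# router per student, each exposing a ttyd web terminal on its own port.

def generate_compose(num_students: int) -> str:
    lines = ["version: '2.4'", "services:"]
    for i in range(1, num_students + 1):
        lines += [
            f"  router{i}:",
            "    image: bgp-lab-frr",            # image built from the Dockerfile
            "    ports:",
            f"      - '{7680 + i}:7681'",        # one ttyd web terminal per student
            "    volumes:",
            f"      - ./configs/router{i}:/etc/frr",  # configs visible to the trainer
        ]
    return "\n".join(lines)

print(generate_compose(2))
```

Running the generator with a different student count is then just a matter of changing one argument, which matches the "10, 11 or 12 students" scaling problem he describes.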
So, that's the way. One part here sets up, in this case, networks for upstream, for peering and also an administrative network, and enables IPv6 on the networks. And the participants of the training simply get their web interface, each gets one router, the physical network is done by Docker and they can do BGP with each other.
Everything, as I said, is Open Source. It's downloadable, it's on Bitbucket, so if you want to clone it, just go ahead. Also, if you want to see what I'm doing, the videos I did about BGP, there is also the link for that, and this is my solution for a classroom based BGP training based on Open Source software. And we agreed that we take questions together at the end. So, I'd now like to hand over to Sander, who is presenting his solution for router trainings.
SANDER STEFFANN: So, yes, we were talking about these labs for a while, and we have taken completely different approaches to this. Because, I was asked by ISOC to develop a lab for MANRS, but it had to be a lab that people could do from home unsupervised and just could start on their own and run by themselves. But it also could be used remotely or locally with a teacher. So, completely different design constraints, and therefore, also a completely different design.
So, when they asked me like can you build this training lab for MANRS? I was like, well that seems like a bit of a waste, just developing a whole training lab for one specific project, is a bit of a waste of time, so I was like well if we do this let's make it in such a way that we can use it more, and we talked to ISOC and said, well, if you make this and we make it flexible, we release it as Open Source, so everybody can use it, RIPE can use it, ISOC can use it, so we tried to look a bit further than just the initial application of the lab.
So, what are the goals? Well, they explicitly wanted to have different vendors, so they said we have to be able to do the lab on Cisco, Juniper, Mikrotik, all kinds of platforms, so these are the ones we started with. But basically, like Wolfgang said, we are using GNS3 in the lab, so everything that runs in GNS3 can be used in a lab. So if we want to do BIRD or FRR or anything, all of them are possible.
We also wanted, because people should be able to do this from home, direct feedback. So, the system would have to be able to give feedback to the student about whether their solution is correct or not. This was actually quite a tricky part. And we also wanted to teach students how to use the IRR, to show them how to create route objects, how to query the database, how to build your filters on that, things like that. Like I said, optionally you can set a time limit for the lab; you can say you can do this and you have two hours to complete it. But when there is a teacher, they should be able to extend the time limit, pause the lab, restart the lab, export the lab so people can take the config home. And also the teacher should be able to see what the student is doing. We actually did that by just giving the teacher access to all the student dashboards, so they can see exactly what the student sees, and the session to the router is shared between them, so they can help each other and see what they are doing.
So, to build this, what choices did we make? We used GNS3 for the virtual lab; it can emulate a lot of things, and you can scale it across multiple compute nodes if you want to build big labs. We used Django for the front end. The interface uses WebSockets for the live updates and for the web based Telnet, things like that. Redis is used for the communication between the Django system and the web browser, basically as a proxy for the WebSocket. And we used PostgreSQL for the permanent storage.
Now this needs quite some hardware. Every student gets their own lab; everybody gets a complete clone of the template, so the teacher has a template and each student gets a clone. And if you look at what a virtual router uses, that's quite a lot. A virtual Cisco IOS is doable, but once you start going with IOS XR routers and stuff like that, they need a lot of RAM per box. Juniper needs even more, because they have actually properly implemented this with a proper control plane and forwarding plane. You're talking about every control plane needing one CPU and one gig of RAM, and the forwarding plane needs more. So, multiply that by 30 students and you need a really big box.
So, these are some of the downsides of this. So how does the workflow go? A lab is built in GNS3; you can use the normal GNS3 graphical user interface to build the lab, and we made some components that report back to the management system which BGP routes they receive, and you can configure them to announce BGP routes as well. What we get is the device under test in the middle, and we put some of those monitoring nodes around it. They announce BGP routes, they see what BGP routes are received, and that way we can observe whether the student is filtering the routes correctly according to whatever is in the IRR or in the exercise.
Same with the pings: all the nodes send pings from fake addresses and from real addresses, and if the exercise says things like make sure there is proper source address validation for your customers, the lab can test that.
So, once the GNS thing is set up, you link it to the management system. You configure the instructions, the goals, you say okay, this is what the end of the exercise should look like. And then each student gets their own clone and can do their own exercise until they are done.
So, this is for example from the MANRS documentation. This is the example that's used in the MANRS implementation guide. So, we just rebuilt exactly the same network in GNS 3, with the same IP addresses, the similar links, the same AS numbers to make it easy for people to understand what they're doing.
Then this is the admin web interface; on the right you see the image from GNS3, that's nicely imported and rendered in the management interface.
On the left you can see things like: this router is a monitoring router, it should see this; this is the IRR, you should see these objects; and by configuring that you set the end goals.
So, this is an example of what one monitoring node should see. You put some instructions in there. This is just Markdown text that the student will see in the front end. And then there is a set of: these pings should be received, sources and destinations. We're using BIRD on these nodes, so you just copy and paste the routing table as seen by BIRD into the system and say this should be the end result, and then the front end will continuously compare what the student is doing against what's in here.
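The continuous comparison Sander describes boils down to a set difference between the expected end state and what a monitoring node observes. This is only an illustrative sketch with invented names, not the actual management system code:

```python
# Sketch: a monitoring node is "green" when the observed BGP routes
# exactly match the expected end state configured by the teacher.

def node_status(expected: set, observed: set) -> dict:
    return {
        "missing": sorted(expected - observed),     # should be seen, but is not
        "unexpected": sorted(observed - expected),  # should have been filtered out
        "green": expected == observed,
    }

status = node_status({"192.0.2.0/24", "198.51.100.0/24"},
                     {"192.0.2.0/24", "203.0.113.0/24"})
# one prefix is missing and one unexpected prefix leaked through the filter
```

A tab in the front end stays red while either list is non-empty, which matches the red/black tabs described on the next slide.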
This is what the student sees: some Markdown that's nicely rendered with instructions; they see the lab and they can click on any node to watch its status. There is a number of tabs here at the top. The first one, the black one, is the node that they are working on, and the red ones are the monitoring nodes and the IRR node around it, and they are red because they are still seeing things that shouldn't be there, or they are not seeing things that should be there. Other way around, you know what I mean.
So, the console is integrated like this: you have a fully functional terminal, basically in JavaScript, and the result looks like this. In this case we're looking at this node, which is that one, and you can see: okay, we expected to see a ping from that address to that address and we're seeing it, so it's green. But we're also seeing some pings that shouldn't be there, or we're missing some pings that should be there. So that way the student can see what's wrong with their current configuration and they can work towards the right solution.
Now, diving a bit into the infrastructure of the project, like I said it's a bit more complex than what Wolfgang showed. So this is the overall picture. This is the back end basically. The teacher has the OpenVPN client with a Telnet connection and the GNS3 client. And on the server side we have the OpenVPN server with a GNS3 server with a lab instance with all the different nodes in it. And the teacher can just Telnet to all of these nodes and control and change whatever they want. So, the teacher has full access to the raw lab.
If you look at the management system, we have the GNS3 server with all the instances, and they all connect to the different services that are managed by a WSGI manager; there is the web server that actually controls the GNS3 server, and there is the Telnet relay that relays the Telnet connection to the node that the student is supposed to work on. And for all of these we actually use a virtual serial connection, so all of these just dump data to ttyS0, ttyS1, and that's all received over a network connection by the system and stored, to compare with the expected results. And then for storage: the web pages come from Django, there is a module that forwards the WebSocket data, Redis is used as a relay there, Postgres is used as storage. That's the whole project. The student comes in and does everything over the web, and this is basically the thing that we built that controls GNS3, that proxies the Telnet connections and makes everything work.
So, what are we going to work on in the future? We want other routers. Juniper is working on the cRPD; I think it should be released with 19.2. You get a Docker container that contains the full CLI, config management and routing daemons of Junos. You don't get all the fancy features of the forwarding plane, but it can program, for example, a Linux kernel, so you actually get a Juniper control interface on top of the Linux kernel. That can run in a quarter of a gig or half a gig, so that scales a lot better than the multi‑gig clones you need for the full Junos vMX. We are talking to Alcatel/Nokia about running their routers in here; we have been talking to people about licences, which is the hardest part of this, because if you want to run this and you are running a real image, you do need to talk to the vendors because those are commercial products, so you need licences. That bit is not Open Source.
The next thing we're going to add is probably RPKI, so there will be an RPKI validator, and we had some performance problems with IRR nodes, so we should look at that because the whole lab worked pretty well, except the IRR node which should actually be the simplest, so I think there is a bug there that we're working on.
And that's a quick, a very quick, overview of what we have been building.
(Applause)
ONDREJ FILIP: Are there any questions? Okay. Then thank you very much. Sorry, there is one.
AUDIENCE SPEAKER: Ed Delongey from inc.zn. Just out of interest, what hardware would you be using for, like, a 20 person lab?
WOLFGANG TREMMEL: I was running it on an Intel NUC box. I had some problems with ExaBGP, that's the reason I moved to a generic virtual server, but not much. The goal is really to run it on an Intel NUC which I can put into my pocket and take along.
SANDER STEFFANN: The lab I was running, we had one live test with this, it was running on a Dell R710 with I think 32 gigs of memory and 8 CPU cores. And with 30 students it was struggling. So, that needs a bit more, so more modern CPUs and especially more memory.
AUDIENCE SPEAKER: Not really a question, but eager to try this out. Wonderful. It looks splendid.
SANDER STEFFANN: My code is on the ‑‑ I just realised I didn't put the URL there. It's on GitHub under the MANRS tools organisation. So you can find it on GitHub.
AUDIENCE SPEAKER: Okay. Thank you both.
ONDREJ FILIP: Okay. Thank you very much.
(Applause)
The next speaker is Max.
MAX ROTTENKOLBER: So, high performance traffic encryption on x86‑64. Hi everybody. My name is Max, I am an Open Source hacker. I have been working on the Snabb project since 2014; more on the Snabb project later in this talk. Just note that I am also consulting on software networking in user space, also known as kernel bypass networking, as well as protocols, software optimisation, etc.
So, for the last couple of years, I have been working on a project called Vita. Vita is a high performance site‑to‑site VPN gateway and it's 100% Open Source. It's hackable, and by that I mean it has a small, minimal, modular code base that should be easy to understand. And it runs on generic x86‑64 server CPUs. Vita is based on Snabb. Snabb is a software framework for high performance networking in user space, again kernel bypass networking; we are running completely off the Linux kernel here. A number of use cases were presented at the last RIPE meeting in Amsterdam; if you want to know more about Snabb I can refer you to that talk.
Right. So Snabb applications, including Vita, as well as Snabb itself, and that part is important, are written in the high level programming language Lua. We use a super fast implementation of Lua called RaptorJIT, which is geared towards heavy duty server applications. Lastly, I want to note that Vita was made possible by a generous grant from the NLnet Foundation. If you don't know them, check them out, they are really cool.
So, while rehearsing this presentation, I realised that the code snippets that follow later on might give you the wrong impression, because I tend to go into the gritty implementation details. So I wanted to start off by showing a more typical example of Snabb code, just to give you a quick rundown.
Generally Snabb applications are organised into modules, or apps as we call them, that process packets in a simple small loop. In this basic example here, we receive packets on the input link and forward them onto the output link, unless the TTL has expired: if a packet's TTL is zero, we forward it to the time exceeded link instead, where it will be handled by the next app, like an ICMP app that would generate a message accordingly.
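The real app is Lua, but the loop Max describes can be approximated in Python like this. The link representation and names are invented for illustration; Snabb's actual links are ring buffers between apps:

```python
# Sketch of a Snabb-style app loop: forward packets, decrementing TTL;
# packets whose TTL runs out are handed to the expired link, where e.g.
# an ICMP app could answer with a Time Exceeded message.

def push(input_link, output_link, expired_link):
    for packet in input_link:
        packet["ttl"] -= 1
        if packet["ttl"] <= 0:
            expired_link.append(packet)
        else:
            output_link.append(packet)

out, expired = [], []
push([{"ttl": 5}, {"ttl": 1}], out, expired)
# → one packet forwarded with ttl 4, one handed to the expired link
```

The appeal of this structure is that each app stays tiny and testable, and a pipeline is just apps wired together by links.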
So, what do I mean when I say high performance? Vita can at the moment terminate 3 million packets per second on a single core of a modern CPU. That translates to about 5 gigabits of traffic per core, and it does scale linearly with cores, by the way. Also I should note this is full duplex, so this is 6 million packets being processed per second in total. So, interpolated, I think this should mean we should be able to terminate 100 gigabit line rate on a 50 core box, something like that. We'll see about that.
So, how does Vita do it? In Snabb land we like to write software that is both fast and simple. We believe that simplicity translates to efficiency and that programs don't need to be complex. We also avoid vendor lock‑in whenever possible. For Vita this means specifically it doesn't use any proprietary extensions or crypto cards; it runs on bare x86‑64 Linux servers.
Vita's most obvious cost factor in terms of its CPU budget is crunching numbers for encryption and decryption. For that we rely on AES‑NI and AVX2, which are supported by recent Intel CPUs. AES‑NI provides instructions for AES encryption specifically. AVX2, on the other hand, is the fourth generation of Intel's single instruction multiple data extensions. We have an optimised AES‑GCM implementation based on those extensions that's really, really fast: a modern CPU core can encrypt or decrypt more than 20 gigabits per second. This is written in DynASM, a dynamic assembler. And below here there is an excerpt of some of that code. I don't expect you to understand it, it's just to show you how we can inline assembly into Lua code, and this is using the AVX2 instructions, by the way.
So, for route lookups, we have an optimised Poptrie implementation. The lookup routine is again written using DynASM; everything else about this implementation is very high level Lua code, and it will, for example, emit code depending on parameters and CPU features at runtime. Again, this is an excerpt of the code. I do not expect you to understand it, it's just to show you how we can kind of generate assembly using Lua. And both the Poptrie and AES implementations are upstream in Snabb, so if you have a problem that you want to solve using these things, you are free to do so. They are like ready.
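For readers unfamiliar with what a route lookup computes: it is a longest prefix match. The sketch below is not Poptrie (which is a compressed, cache-friendly variant of this idea), just a plain binary trie over IPv4 prefix bits to show the semantics:

```python
# Plain binary trie illustrating longest-prefix-match route lookup.
import ipaddress

def build(routes):
    root = {}
    for prefix, next_hop in routes:
        net = ipaddress.ip_network(prefix)
        bits = f"{int(net.network_address):032b}"[: net.prefixlen]
        node = root
        for b in bits:          # one trie level per prefix bit
            node = node.setdefault(b, {})
        node["nh"] = next_hop   # next hop stored at the prefix's depth
    return root

def lookup(root, addr):
    bits = f"{int(ipaddress.ip_address(addr)):032b}"
    node, best = root, None
    for b in bits:
        if "nh" in node:
            best = node["nh"]   # remember the longest match seen so far
        if b not in node:
            break
        node = node[b]
    else:
        best = node.get("nh", best)
    return best

trie = build([("10.0.0.0/8", "A"), ("10.1.0.0/16", "B")])
# lookup(trie, "10.1.2.3") → "B" (the /16 wins over the /8)
```

Poptrie reaches the same answer with far fewer memory accesses by packing trie levels into bitmaps, which is what makes a DynASM-compiled lookup worthwhile.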
So, further, Vita uses a simple and fast implementation of IPsec ESP, that is Encapsulating Security Payload. On the slide we have example definitions of the ESP header and trailer, which are expressed as C structures, and the LuaJIT FFI lets us access these as if they were native Lua objects.
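For comparison, the two fixed fields of the ESP header from RFC 4303 can be packed in Python with the `struct` module. This mirrors what the C-structure/FFI approach gives you in Lua; it is not Vita code:

```python
# RFC 4303 ESP header: a 32-bit Security Parameters Index followed by
# a 32-bit sequence number, both in network byte order.
import struct

def esp_header(spi: int, seq_no: int) -> bytes:
    return struct.pack("!II", spi, seq_no)

hdr = esp_header(0x1234, 1)
# → 8 bytes: 00 00 12 34 00 00 00 01
```

The FFI approach avoids even this packing step: the Lua code reads and writes the header fields in place in the packet buffer, with no copies.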
So, the way ESP handles security associations presents some constraints with regard to synchronisation. For important reasons, every packet in ESP has its own unique sequence number that is increasing continuously. So, if we were to process a single security association on multiple cores, we would end up with a tricky synchronisation problem, and this is generally costly and doesn't really scale well.
As I mentioned though, Vita does scale. The more cores are added, the faster it should run; every core that's added should lead to a linear improvement in performance. We do this by implementing a scale‑out architecture internally. We use two common NIC features for this. The first, RSS, allows us to distribute flows received on the private interface onto separate security associations, one for each work queue. VMDq, the second one, which is originally a virtualisation technology, lets us aggregate the separate inbound security associations on the public interface before forwarding them onto the private interface. Now, on this slide there is a high level overview of this architecture: you can see queue 1 and queue 2, which run on separate cores; on the left we have the private interface, which runs in RSS mode, and on the right we have the public interface, which runs in VMDq mode and has two public addresses, one for each queue. So, thanks to the NIC offloading, the work queues only ever see their individual chunk of the traffic. They negotiate their security associations completely independently of each other on the public interface, and they don't need to synchronise.
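The effect of RSS can be shown with a toy hash: every packet of a given flow lands on the same work queue, so each queue's security associations never need cross-core synchronisation. Here `crc32` merely stands in for the NIC's real hash, and the function names are invented:

```python
# Toy RSS illustration: deterministically map a flow to a work queue.
import zlib

def queue_for_flow(src: str, dst: str, num_queues: int) -> int:
    key = f"{src}->{dst}".encode()
    return zlib.crc32(key) % num_queues

# All packets of one flow always hash to the same queue, and hence
# to the same security association, without any locking.
q = queue_for_flow("10.0.0.1", "10.0.0.2", 2)
```

The important property is determinism per flow, not the particular hash: any stable hash partitions flows across cores without shared state.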
So, Vita uses IPsec ESP, and ESP is mostly standardised with all the features you want. I'm only going to complain about one thing though. ESP was originally standardised with 32‑bit sequence numbers, and you can probably guess what happened next. They realised that 32 bits weren't enough in the modern age of high speed networking and introduced extended 64‑bit sequence numbers. Presumably for backwards compatibility, they decided against updating these in the header though. Instead, they decided that senders would only transmit the lower half of the sequence number and the receiver would guess the rest. And, given the theoretical possibility that sender and receiver lose synchronisation of the sequence number state, they standardised a tricky algorithm that helps the receiver to catch up, basically. This is literally the receiver trying possible sequence number candidates, trying to decrypt and authenticate the packet; if that succeeds, it commits the guessed state as the new sequence number state, otherwise it tries the next candidate, up to a certain limit, until it gives up. Now, this is problematic from an engineering perspective, because it requires a complex code path, and I mean a really complex code path, more complex than everything else about the standard, that is rarely if ever executed. So, in my mind this is a bug waiting to happen. Now, luckily, this feature is very unlikely to be relevant, and in fact it's disabled in Vita completely, but I want to note that I hope future designers don't repeat this sort of thing, because it's awkward to implement and frankly I don't think it's necessary.
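The normal (non-resynchronisation) case of the guessing boils down to reconstructing a 64-bit counter from its transmitted low 32 bits. This is a heavily simplified sketch; the real RFC 4303 logic also involves the anti-replay window and the trial decryptions Max describes, and the window constant here is an assumption for illustration:

```python
# Simplified sketch of RFC 4303 extended sequence numbers: only the low
# 32 bits travel on the wire; the receiver infers the high half.

WINDOW = 2**31  # how far behind the wire value may plausibly appear

def infer_esn(last_seen: int, low32: int) -> int:
    high = last_seen >> 32
    candidate = (high << 32) | low32
    # If the low half wrapped around zero, the high half must have advanced.
    if candidate + WINDOW < last_seen:
        candidate += 1 << 32
    return candidate

seq = infer_esn(0x0FFFFFFF0, 0x00000005)  # low half just wrapped
# → 0x100000005
```

Even this trivial version shows why losing sync is awkward: once the receiver's notion of the high half drifts, only the trial-decryption recovery procedure can repair it.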
So, authenticated key exchange is the Achilles heel of a VPN gateway. It is a complex moving component and likely to break. And when it breaks, it is really bad, because it invalidates your confidentiality and authenticity guarantees. Further, we want to cycle SAs often to maintain a level of secrecy, and we need to do rollover, meaning switching from one security association for a route to the next without losing any packets in between.
So, some options here include IKE, which is nice because it's interoperable with all the stacks. The downside is that it's complex, it's a really big RFC, and if you ask security experts they will tell you it's not state of the art either. An alternative is to roll a protocol based on Noise; Noise is a modern protocol framework, and example users of Noise include WireGuard and, for example, also WhatsApp.
Or, the third option would be to roll your own protocol from scratch, but I can already hear you mutter like no, don't do that.
So, I say: yes, do that. Because you can learn things. And I really think that starting to hack on this problem with a lot of ignorance presents some invaluable learning opportunities. I'm not saying run it in production, but I'm saying if you work on stuff like this, go at it from the basics, it's really enlightening. I don't think I would have the appreciation for the nuances of this problem space if I hadn't tried it on my own.
Which brings me to the philosophical slide of this talk. An engineering algorithm that worked really well for this project has been the following. In step 1 you do the simplest thing that could possibly work, and I guess you have heard that one before. And then in step 2 you try to break that thing, and when it breaks, not if, when it breaks, you go back to step 1 and try the next least complex thing. On the lower right of the last two slides were different iterations of key exchange designs, by the way, or to be specific, the diagrams for those.
So, in the end, we ended up exploring all three of these possibilities. SWITCH engineer Alexander Gall provides a strongSwan plugin for Snabb; I plan to use that daemon to negotiate security associations for consumption by Vita. And Vita's current native key exchange protocol is based on a Noise instance. This way I hope to support a transition path from existing IPsec stacks to a Vita based infrastructure using a more simple, minimal, modern key exchange protocol.
So, on to less scary yet important topics: configuration and operations. Vita's configuration state is described in a native YANG model. This model also includes runtime statistics state. You can query and update the configuration of the Vita node while it is running, and runtime statistics counters are queried the same way.
So, from the very beginning, I tried to put a really strong focus on being operator friendly. The tracked runtime statistics include a comprehensive set of counters covering everything from ICMP events to data and control plane errors. And I also made sure that Vita nodes appear transparent to traceroutes: if you traceroute through a Vita tunnel, it will appear as two hops.
On the hardware side, we currently have working Snabb drivers for the Mellanox ConnectX family of cards; we are also working on Intel drivers to support a much wider range of hardware. Generally you can do 10 gigabit, and with a little bit of work 100 gigabit, networking using Vita and Snabb.
So, I want to finish this talk with an appeal: let's encrypt our traffic. And I want to also note that my medium term goal for this project is to terminate 100 gigabits at line rate on a generic x86 server using a fully Open Source software stack. If you are interested in that at all, then please talk to me. So thank you for your attention. Questions?
(Applause)
AUDIENCE SPEAKER: Tom from Cloudflare. Really cool talk and project. I was wondering, the performance measurements you gave, is that with or without the Intel exploit mitigations turned on in the kernel?
MAX ROTTENKOLBER: I don't think it would make a difference at all, because we're not using any syscalls; we're basically in single-user mode in a sense. We have a box and we use it completely; we don't use the kernel at all, and we don't have applications running in parallel, so it's not really our concern.
AUDIENCE SPEAKER: Okay. Thank you.
AUDIENCE SPEAKER: Jordi Palet. A question: have you considered, and I'm not sure if that's in the scope of your project or a future project, doing something similar on low-cost chipsets like those used in CPEs, which today are ready to support equivalent features for encryption, for example using OpenWrt? I'm not sure how much you depend on Snabb, because I'm not sure whether OpenWrt supports Snabb, but maybe there is an alternative, as there is support for Lua.
MAX ROTTENKOLBER: So basically, at the moment we support x86-64 and we depend on features that are usually found in higher-end server CPUs. In theory it's possible to port the whole thing to lower-end systems, but I'm not sure how feasible that would be; part of the project's motivation is to showcase what modern CPUs are capable of. But yeah, definitely, I think you could build a really low-budget Vita box that would terminate 1 gigabit at line rate.
AUDIENCE SPEAKER: My point precisely: most of the low-cost boxes can run 1 gigabit, but not when they use encryption, for example using OpenVPN.
MAX ROTTENKOLBER: If your CPE box supports those features, then I'm pretty sure it can do 1 gigabit at line rate encryption.
AUDIENCE SPEAKER: Blake Willis. Do you have a rough idea of how many networks are using this in, say, semi-pre-production customer traffic type applications?
MAX ROTTENKOLBER: Zero right now. This is a prototype.
AUDIENCE SPEAKER: Do you have a horizon for that?
MAX ROTTENKOLBER: No, tell me.
AUDIENCE SPEAKER: Okay. Thanks.
AUDIENCE SPEAKER: Brian Dixon GoDaddy. What about the state of drivers for NICs?
MAX ROTTENKOLBER: Basically, what works now is the Intel NICs, which are 1 and 10 gigabit NICs. What will work with minimal work, because the driver is already present, are the Mellanox ConnectX cards, and on the future roadmap we are planning to support Intel AVF and AF_XDP, which will make it more independent of actual drivers.
AUDIENCE SPEAKER: So the connect X is available now for testing?
MAX ROTTENKOLBER: Yeah.
AUDIENCE SPEAKER: Okay, and the number of cores, does that include hyper-threads or is that just raw cores?
MAX ROTTENKOLBER: You get an improvement, so I would typically recommend enabling hyper-threads and using these as separate cores, because you can get excellent performance out of that. If you use two real cores you are going to get double the performance; if you use two hyper-threads on one physical core, it's not going to be two times the performance of the physical core by itself. But generally the maths applies.
AUDIENCE SPEAKER: I'm going to try that on my box and see if I can get 100 gig.
ONDREJ FILIP: Any other questions. So thank you very much. Great presentation. Thanks.
(Applause)
So now it's time for some announcements and lightning talks. First, I think Sander has requested some time for a short announcement, and then Maria will have a lightning talk about Bird.
SANDER STEFFANN: Hi. Two very short requests. The first one: at a previous RIPE meeting we presented our NAT64 check website, which is currently in its second version; ISOC sponsored the development of that. At the moment I'm the only one maintaining it and I could use some help to share the load a bit. So, if anybody wants to help out there, that would be really appreciated. The other one is a similar question: we just started a new foundation called the Global NOG Alliance and we're trying to do some good work for NOGs. We could also use some development help there. So if anybody is interested, please contact me; that would be really helpful. Thank you.
ONDREJ FILIP: The next speaker is Maria.
MARIA MATEJKA: Good afternoon, I am Maria, I am a developer of Bird. I will speak about what's new in Bird Version 2. There are some new features, some features are being developed, and I'm also going to suggest a dirty trick to work around a feature that is not yet in mainstream but is needed by many of you. Okay. The first thing is custom route attributes. Many of you are using large communities or extended communities or other communities in a way where you check some features of the route on import, then use these communities for some checks for dropping or filtering the route in the route server or in the router, and then on export you drop them. This is inefficient because of the internal structure of Bird, and we suggest using custom route attributes instead. You can define your own route attribute that is stripped by default at the end of the route's path, so it can't be exported. You can just set it in the import filter, then do some checks based on that route attribute, and it will be silently dropped when the route is announced by BGP or another protocol to your peers.
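As a rough sketch of what this looks like in Bird 2 configuration (the attribute and filter names here are invented for illustration; check the Bird manual for the exact syntax of custom attributes):

```
# Declare a custom route attribute. It lives only inside this Bird
# instance and is stripped by default when the route is exported,
# so peers never see it.
attribute int my_import_flag;

filter peer_in {
  # Tag routes on import based on some property of the route
  if bgp_path.len > 10 then my_import_flag = 1;
  accept;
}

filter peer_out {
  # Use the tag for checks on export; the attribute itself
  # is silently dropped from the announced route.
  if defined(my_import_flag) then reject;
  accept;
}
```

Compared to doing the same with large communities, nothing has to be added to and stripped from the community list in every filter.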
The other thing: if you are concerned about the performance of your filters, you can measure how long a filter takes. There is a special protocol named perf; don't use this protocol in a production environment. At most, run it for a while when measuring and then disable it. It will run a bunch of random routes through the given filter and put the data in the log, where you can collect them. In the log there is the date, the time, the current version of Bird, the name of the protocol instance, the exponent of two saying how many routes are being filtered in one pass, and how long it took. The number you want is the number after 'update'. It's in nanoseconds; so here, we pushed the 65,000 routes through and it took 2.9 milliseconds. You can try it for several different filters.
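If I recall the configuration correctly, attaching this measurement protocol to a filter looks roughly like this (a sketch only; the protocol appears as "perf" in recent Bird releases, and the filter name here is made up, so check the Bird manual for the exact options):

```
# Temporary benchmarking protocol: it generates batches of random
# routes, pushes them through the channel's import filter, and logs
# the timing. Remove or disable it again after measuring; it is not
# meant for production.
protocol perf {
  ipv4 {
    import filter my_heavy_filter;   # the filter being timed
  };
}
```

The per-batch timing then shows up in the log together with the batch size, as described above.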
Another thing: you can define an import table, which stores the routes that came from BGP before they are filtered. It's not keeping the filtered routes; it's keeping the routes as they came, before any filter is executed. It enables reloading your filters without doing a route refresh. It needs some memory, indeed.
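A minimal sketch of enabling this on a BGP channel (the ASNs and addresses are documentation examples, and I believe the per-channel switch is spelled as shown, but verify against the Bird manual):

```
protocol bgp upstream1 {
  local as 64500;
  neighbor 192.0.2.1 as 64501;
  ipv4 {
    import filter upstream_in;
    import table on;   # keep received routes before filtering (Adj-RIB-In)
  };
}
```

With the import table enabled, after changing the filter and reconfiguring, "birdc reload in upstream1" re-runs the import filter over the stored routes instead of asking the peer for a route refresh.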
Route auto-reload on RPKI changes is not merged yet. There is a dirty workaround: you set up logging of RPKI updates, and you have a script that takes the updates from the log and runs a 'reload in' if some change happens. It's a dirty trick, but if you need it, it's better to do that than to do nothing. Route auto-reload is going to be released in several months; we hope there is no problem, it's almost done. And to do auto-reload, the import table will have to be set up.
The filters are being totally rewritten; there is a new filter interpreter. It will allow safe execution of filters, which is needed for multithreaded execution, and it will also allow type checking of configs, so no more runtime errors saying that you can't add an integer to a string; that will be caught at configure time.
You can upgrade now to Bird 2, it's stable. Thank you.
(Applause)
ONDREJ FILIP: Is there any other business or anything we need to address before we get to the last thing? I don't see anyone. Also, a reminder for the next RIPE meeting: there is a chance again, if you want to become a Working Group Chair; we have re-elections of the Working Group Chairs basically every year at the meeting. We will send out an announcement with reminders on the rules about two months before the RIPE meeting.
Okay, with that I welcome Charles and Mirjam up there. They will talk a little bit about the hackathons.
CHARLES ECKEL: Thank you for the introduction. We are going to co-present here. The idea is to talk with you a little bit about these hackathons. They are related to Open Source and primarily use Open Source; that's why we thought this would be a good audience. We are not going to go into a lot of detail about exactly how they run, we have done that before at past RIPE meetings, but just to give a quick overview of some important characteristics about them, and then open it up for discussion and really hear from you. Hopefully you have already seen some positive impacts of these, but we want to see what we can do to make them even better, because we plan to continue them and we want to get the maximum benefit for this community out of them.
Okay. So, the main thing with these hackathons is that we're trying to combine Open Source and standards. Standards have played a really critical role and continue to play a critical role in our industry. But Open Source has also become more and more prevalent and, as you have just seen with these other presentations, brings some really great, valuable projects into our community. One of the challenges we have seen is that we want to bring more speed to the standards process. Standards are still very important to us, but sometimes it's a real challenge getting the standard done and completed in time, right; you always kind of want it yesterday. So the idea here is: let's bring some of the speed and collaborative spirit that we're seeing in Open Source projects and integrate that into what we are doing in the standards process. One of the ideas being that as we're defining the standards and hashing things out, let's actually be implementing them. Let's validate that they are correct and complete and, more importantly perhaps, implementable, and take what we learn and feed that back into the standards process. Another thing is we want the by-products that come out of this to be software that developers can actually use. Having the standard is great, and we certainly need that, but it's even better if you have some code that you can pick up. Developers today tend to mash things up: they look out on the Internet, they find libraries and components, and if you have something that makes it easy for you to add support for a standard into your product or into your solution, that's all the better. That's a good way to make sure that our standards not only get completely defined but get deployed. So, the IETF hackathons are really where we first started testing out this concept.
With the idea being that we wanted to advance the pace and relevance of IETF standards. One of the ideas, and I kind of mentioned this already: while we're working on the draft, we want to be able to flesh out the details. We want to be able to test theories; instead of arguing about it in the Working Group session, or on paper, or on a mailing list, let's write some code. Let's test it out, let's see if it works the way we're thinking it would work. It is also very important to us to reach out to a new community of people. You don't traditionally see a lot of developers attending standards meetings. Oftentimes, if you look at even a company like Cisco, and I work at Cisco, you'd have one set of people who go to the standards meetings, and then you have all your developers who stay in the office writing code, and those two don't necessarily interact a lot. So the idea here is: let's get them interacting, let's get more of the developers participating actively in the standards process in a way that is comfortable for them. If you bring a developer to an IETF meeting, they may not be able to jump right into a working group discussion really well, but they certainly could sit down with an IETF veteran, talk with them about the protocol, and implement some aspects of that protocol, probably better than the person who wrote the spec in many cases. This way they can start contributing to the IETF community in a very meaningful way very quickly. It is also very important to point out that these are collaborative events. Other hackathons you may have seen are very competitive, with a lot of prize money; with these hackathons, the idea is to have them be very collaborative.
Oftentimes people working on one project will actually help people working on another project, because just like in this room, you have people with a wealth of different knowledge sets, and so it's often very helpful for someone to lend their expertise to another team to help them overcome some challenge that they have. Another thing I wanted to point out is the growth of the hackathon, if you see the slide at the bottom. There were about 40 people at the first one, and that seemed good at the time; I was excited that we had that many. The next one was about twice as big, and it has continued to grow since then, about tenfold, to where at the last one we had 400 people. When you consider the whole IETF meeting is around 1,000 to 1,200 people, that's a significant number of people showing up to code related to what's going on in the IETF.
So, how they work, just very quickly: they are free, they are open to everyone, much like the IETF is in general; some hackathons are a more closed thing, but this is open to everyone. Not only that, but everyone can bring a project. The project does need to be related to IETF technology, but anyone, you don't have to be an Area Director or Working Group Chair or the author of a draft, can bring a project in, champion it, and lead that project within the context of the hackathon. So we end up with a lot of small teams; at the last one we had about 40 teams. If you can see this picture here, there are about 40 tables in the room, and the teams sort of self-organise; they don't have to all sit at one table, but that's typical. You may have some teams a little bigger or smaller, but there are basically a lot of relatively small teams working on different parts of IETF technologies. It happens over the weekend, which is very important, because there are a lot of people who are there of their own free will; they are not being paid to attend, and they can go over the weekend without taking time off. It's free, so they can just show up and contribute. Now, of the people who do participate in the hackathon, the majority do stay for the IETF, but there is a significant percentage, probably 20% or so, who are there just for the hackathon, and we welcome both types of participation. Just to make it logistically as easy as possible for people, we provide coffee, lunch, and dinner, but we don't stay up all night. This isn't one of those coding marathons where you work for 40 hours; we realise that the IETF meeting continues throughout the week afterwards, so we don't want to destroy anyone during the hackathon.
Another important aspect is to share results. This has become a bit of a challenge. Just as an example of one of the challenges we have: as it gets larger and larger and you have more and more projects, imagine 40 projects, and what if each project came up here and gave a 10 or 20 minute discussion of what they did. That would take more than a day. So, we limit the presentations to three minutes. We still feel it's very important to share results, but we try to keep it very concise: just highlighting what it is that you tried to do and how things worked out. Very importantly, we're trying to build community, grow people's networks, and bring new people into the IETF here. So, I show a GitHub page; all the project presentations do go up on GitHub. Some of the code ends up going into this GitHub organisation we created too, but oftentimes the code ends up living elsewhere, because the Open Source project they are working on may have existed for many years, or it exists outside the context of the hackathon; it's not like the code needs to be there.
One other event that I wanted to mention: the hackathon at AIS, the African Internet Summit. The idea here was to try to take some of the success, or some of the spirit, of the IETF hackathon and bring that to Africa. ISOC works to try to get some folks from Africa to attend the IETF meetings, and sometimes they'll participate in the hackathon too, and we have had remote teams from Africa participating in the hackathon. But the idea here was, since travel is a challenge for a lot of people, let's bring a version of the hackathon to them. It was modelled after the IETF hackathon: all the projects are related to IETF technology, and it happens during the course of the African Internet Summit, near the end of it. I didn't put the exact dates here, but the hackathon itself would be on the 19th and 20th of June, I believe.
.... lower barriers to deployment, and ideally to contribute back into the IETF standards, whether that be teaching us about things that are wrong with the standards, or co-authoring a draft, or whatever it may be. You see the list of projects here; they are all related to IETF technologies. I thought it was interesting that there is one here measuring DNS using RIPE Atlas; I have heard a lot of talk about DNS here, so I thought that might be of interest to folks. For the IPv6 one, the idea is to help get more deployment of IPv6 by adding support for IPv6 into Open Source applications and projects that don't support it yet. So, another great project. With that, I think I'll turn it over to Mirjam.
MIRJAM KUHNE: Thanks for the introduction and for setting the stage. I am Mirjam Kuehne, I work for the RIPE NCC in the team that organises the RIPE NCC hackathons, which are modelled very similarly after the IETF hackathon. I don't want to go over exactly how we do this, because there are a lot of similarities to the IETF hackathons, but there are also some differences. Ours are much smaller; we are limiting the number of people that can come, and we have almost like a vetting process in place beforehand: you have to apply, there is usually a certain topic for each of the hackathons, people apply to participate, and there is a jury that vets the applications or submissions, depending on the skills and the background of the people and what we need at the hackathon, and also to ensure teams that are as diverse as possible. So, it's mostly like 30 to 35 people, each team three to five, so it's more like eight to ten teams rather than the 40 I think you mentioned you have.
But the other goals are exactly the same: bring together good people with different skill sets to work on certain projects. As I said, the topics are usually set beforehand; I have a list of them in a second, I think. There is also much communication before the hackathons; there is a mailing list where participants and past participants can discuss potential projects beforehand. And obviously there is a lot of fun involved in all that as well.
Here is a list of topics that we have covered so far. In the beginning it was mostly about RIPE Atlas, when we started developing RIPE Atlas and there was a lot of interest in helping out with tools, visualisations, and data analysis of RIPE Atlas and other datasets that we had available. We also had a hackathon on Internet exchange point tools, and then a follow-up code sprint on that. We had network operator tools, DNS, IXPs, and then recently we had one that was presented at the previous RIPE meeting in ‑‑
And lastly, we recently had one on RPKI; not a hackathon but more like a hands-on development workshop. We called it a 'deployathon', so we had another new word in that.
And other important things: we always find that good coffee is the main ingredient of a hackathon, after good people, of course. And sometimes the venues don't understand; they say, we have a coffee machine, and we say, no, you don't understand, good coffee. Sometimes we bring our own espresso machine and leave it behind at the universities or wherever we organise these things. What's also important is that we celebrate the results, and I agree with Charles that the presentations and the demos at the end are very important. When we were talking about the presentations yesterday, we thought we could save some time at the end of these hackathons by not having these presentations and demos, but it's important for the teams to present their work, and also to encourage and force the people to come to a conclusion, and not just let it fizzle out at the end. And that's one of the challenges we can discuss later on: how to make better use of the results after everybody goes back home. There are some success stories; some code has been incorporated in libraries and other tools, so that works pretty well. T-shirts are very important. We also talked about this: what parts of the costs can we cut down? T-shirts are a no-go; don't cut down on T-shirts. Here are some examples of what we have done in the past. This is a picture of my colleague who organises this; she can't be here this time unfortunately, that's why I'm presenting this for her. People collect these, and obviously they are not the same people at every hackathon, but these T-shirts are quite popular.
Some challenges maybe. I don't know, do you want to start with that? Let me see if I have mentioned everything; I wanted to say one more thing that is kind of a challenge for us, I don't know if I have that there. You said the IETF hackathons are always on the weekend before the IETF; that makes sense because the IETF has that space available and people come to that meeting anyway. We had some of the hackathons attached to or in conjunction with the RIPE meetings at the beginning; the ones over the last two years, I think, we organised more stand-alone, and that has pros and cons. The advantage of having them at the RIPE meetings, or in front of the RIPE meetings, is of course less travel for people, they are already at the meeting, and results can maybe be fed better into the meeting week, to a certain Working Group already, and you can talk to other people about continuing the work. Disadvantages: it is a very long week and you are already kind of tired at the end of the hackathon, and I know some of the participants at the IETF hackathons don't stay for the IETF; they just come for the hackathon and leave. We kind of have to balance the pros and cons there; it could also potentially be quite an expensive event to do it in front of a meeting on a weekend.
But yeah, do you want to go through your parts of the challenges?
CHARLES ECKEL: So, some things like IPR: in the IETF there is the Note Well, and all the meetings happen under that policy. Most of the code we work on in the hackathon ends up being Open Source, or already is Open Source, so that helps out a lot, but we do allow people to work on proprietary code, so we need to think about what the rules are there. For example, we have said that the rules are as stipulated in the licence of the code you're working on, but that's something you need to think about in the course of these hackathons and see if it's going to be a problem. Then scaling; I hinted at this earlier. When the IETF hackathon was 40 people, our format looked a lot more like what Mirjam was describing for the RIPE hackathon, but as you get larger and larger, which is a sign of success, and we want more and more people participating, it makes it harder not only to share results; there is not only the increased cost, but also it gets harder for newcomers. It's a little bit harder to go into a room of 400 people and try to figure out where you should work, as opposed to a well organised set of maybe five projects where you just need to understand what they're doing and then pick one. Also, we talked about this: the connotations of the word hackathon itself.
MIRJAM KUHNE: I put that in. 'Hackathon' has a connotation, and we want to be as inclusive and diverse as possible; on the other hand, that word is established in the community and everybody knows what it is. But we have been experimenting with other terms; for instance, at the NCC internally we had what we called a 'collabathon', we had the 'deployathon', and some people called it a creative workshop. It's just a word, but some people feel less attracted to it than others, and then you have to explain what it actually means, that it's not only coding. So that's just a point that we started discussing: whether it actually puts off people that we would like to have there.
CHARLES ECKEL: So, part of what we're hoping to achieve here is, even if we don't find solutions right here and now, we at least wanted to open up the discussion; that's why we called it a round table discussion. I'd like to hear about ‑‑ actually, how many of you have participated in a RIPE or an IETF hackathon? That's actually a decent number. That's great to see; some of you have had firsthand experience that way. I would hope that even more of you have somehow been impacted, hopefully in a positive way, by some of the results coming out of these. Maybe you have seen the results. Maybe you have seen some of the code. Maybe you have used some of the code that was worked on in those hackathons. Hopefully you have been touched in some way by them. So we'd like to get your input on any of these, or on things that we didn't even put on this slide that you think are potential areas for improvement; that would be fantastic to hear.
MIRJAM KUHNE: How do you find the results? That's why I put that up here; people went, 'I didn't even look at what other people have worked on before, I'd just like to start from scratch.' So how do we make better use of the results? And potential topics, anything. Yes, please; I think Spencer was first.
AUDIENCE SPEAKER: Spencer. I was transport Area Director at the IETF for six years, including all the years that the IETF has done hackathons, and was responsible for at least three Working Groups that were participating in hackathons there. This made a huge difference in our ability to produce protocols more quickly, especially in large Working Groups; the QUIC protocol, especially. The thing I wanted to call attention to was what you said about anybody being able to champion a table, basically. I know some of the Working Groups I had were participating in the hackathons, and I don't know that I was aware of all of them, even though I was the Area Director for all of those Working Groups. So the idea that people can self-organise and try something new, something different, or maybe something that we'd like to bring into a Working Group that the Working Group is not ready for yet, just to see what progress we can make: this was a really valuable contribution to the way the IETF produces protocol specifications.
MIRJAM KUHNE: So, if I heard you correctly, you are saying a bit more documentation, like who is working on what project? I mean, you have that more or less, right?
CHARLES ECKEL: Yeah, we use the wiki; anyone can add content to the wiki, and so it varies how well described each of the projects is and how much information they have in terms of a new person getting started ‑‑
MIRJAM KUHNE: Is there like an alumni list? Of course our number of participants is much smaller, but we have a list of previous participants.
CHARLES ECKEL: We have a hackathon@ietf.org list.
AUDIENCE SPEAKER: Tim Wattenberg. I have been to two IETF hackathons so far, and to anyone in the room: if you want to get started, or are interested in what the IETF is doing, I cannot recommend it more. Just go to the hackathon, because you have two days with all the people working on the stuff you might be interested in; you have the authors of drafts sitting next to you, who can answer your questions and help you get started. It was a terrific experience for me and it really helped me get started in the IETF process. So, a huge thanks.
CHARLES ECKEL: Glad to hear it.
AUDIENCE SPEAKER: Randy Bush. I have been to hackathons of both groups and others. I found RIPE's not as diabetic-friendly as the IETF's, but with better coffee. I'm interested in the upper right-hand corner; is it really just the wording? One thing I have noticed is very little cross-fertilisation. I don't know what's happening at the next table. And really, when I think about why I go and what I get: I get to sit with three other people that I normally only communicate with over the net, and we can actually talk and interact more closely. I don't need to sit with 30 other tables to accomplish that, but it gives us the context, and it's really a shared facility; it's like going to summer camp together. I would also note that the RIPE hackathons to which I have been were much more academic-friendly, with more researchers, not so narrowly focussed, and more operator-friendly; in general, RIPE is more operator- and academic-friendly than the IETF, so no surprise there. But I think that is the most interesting part of all these slide decks.
MIRJAM KUHNE: We stole that from somebody else's presentation.
CHARLES ECKEL: Before you sit down, did you have some thoughts about what might work well, say, in the context of the IETF hackathon, where it's a little challenging due to the sheer numbers? At the beginning we had all the projects, before they started working, say what it was they were doing; when you only had five projects that worked well, but when you have 40, no one wants to sit and wait a few hours to hear about all that. So, I think ‑‑
RANDY BUSH: I find the IETF hackathon space uncomfortable. Not only the crowdedness of it, but the mass of it. Why do I need all that happening around me? I don't do well with that level of visual and auditory stimulation.
MIRJAM KUHNE: I guess we have been talking about this, right; we made a conscious decision to cap it. And you thought about this too, right; you want to have it open. But yeah, I guess there are pros and cons for both.
RANDY BUSH: And the RIPE hackathons to which I have been were also in spaces that were big enough.
MIRJAM KUHNE: So that you can find other corners ‑‑
RANDY BUSH: There was space, right. I wasn't ‑‑
CHARLES ECKEL: Yeah, that is a challenge, and it's probably going to get worse if we continue to grow the event. So we're looking ‑‑
RANDY BUSH: If it is the event, and if it is the same format.
MIRJAM KUHNE: We have five more minutes left; shall we cut the lines after Alissa for now?
AUDIENCE SPEAKER: Spencer again. I just want to follow on from what Randy was saying and mention that at the last IETF meeting, the IETF started doing the code lounge, which was space like the tables in the hackathon but a smaller space, and it was available all week during the IETF meeting week. I think that pushes some of the buttons that Randy was talking about, which I think are very real. So, I just wanted to mention that as well.
CHARLES ECKEL: And we are getting more and more people using that space as it becomes known that we have it. So during the whole course of the week we have a large room that's not nearly as crowded; it's basically the IETF code lounge, and we just encourage people to use that space.
AUDIENCE SPEAKER: Benno: I agree with Randy's observation, but that is the goal of the IETF. So I think the operators and researchers are very welcome, or feel very welcome, at the RIPE hackathons. A colleague of mine did some hackathon work on DNS, and the results were incorporated in the Atlas probe measurements, so it was very, very useful for measurements and research. Speaking for myself, the DNS table at the IETF hackathon was very, very useful for making progress on the drafts and running inter‑op tests. I think both settings serve their own goal.
MIRJAM KUHNE: Thanks.
AUDIENCE SPEAKER: Alissa Cooper: I find this interesting actually, both what Benno and Randy said, because the IETF hackathon doesn't necessarily limit what you can work on or the kind of project that you have there. So, if, other than just generally making the IETF more attractive to operators, there were things about the event in particular that we could change to make it more attractive to operators, I would be very interested in learning what those are, other than just, you know, the perception of what kinds of things people are working on there.
MIRJAM KUHNE: Thanks. We have one more slide actually that I wanted to point out. There is an article that Charles wrote, a report from the last IETF hackathon, which we have also published on RIPE Labs, and there is a page on RIPE Labs with all of the previous hackathons and the results and the links to GitHub ‑‑ a bit like what you are doing, I guess ‑‑ and you can reach us there, at the e‑mail address for the RIPE NCC side of things. And we have just published a poll, I just wanted to point you to this, which asks for an indication of potential future topics for hackathons. It's maybe not so relevant for the IETF, because you kind of have a more defined scope on topics related to IETF protocols, whereas here there is a whole list of what we could do in the future, and it would be great to get some feedback from you.
And ‑‑
MARTIN WINTER: First of all, thank you. I had one last question, because I think your two, the IETF hackathon and the RIPE hackathon, for me were some of the good ones, the ones I really liked. There were some other hackathons where I had the feeling that they were not that well organised or sometimes a bit too commercially focussed. Are you actually in contact with people who organise hackathons, to exchange ideas with each other and try to get the level up or help each other out? Is there something like that?
MIRJAM KUHNE: We should start a consultancy to organise hackathons, that's a good business model.
CHARLES ECKEL: What we have tried to do varies, because really each hackathon depends on the goals of the person putting it on, right. And so, you have to look at the organisation: if you look at the IETF and you look at RIPE, if you like those organisations, it makes sense that you would like their hackathons. With a particular vendor, if you are not a big fan of them anyway, you may not like exactly what they do with their hackathon.
MIRJAM KUHNE: We have a lot of documentation. We are getting contacted, and especially Vesna, who has a lot of experience organising these, has been contacted a lot for help and advice, and she has a lot of documentation that we can provide to others who are trying to organise hackathons and don't quite know where to start. So yeah, we'd be happy to share that of course.
MARTIN WINTER: Thank you. So anyone else who organises hackathons, I encourage you to contact them for that.
CHARLES ECKEL: Please find us at the coffee break or whatever, happy to talk to you more if you have ideas while it's still fresh in your mind. Thanks a lot.
(Applause)
MARTIN WINTER: Okay. That concludes this Working Group session. Thank you everyone for coming. See you again next time in Rotterdam in October.
LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC
DUBLIN, IRELAND.