Archives

Plenary session

20 May 2019
Reykjavik
4 p.m.


CHAIR: Everybody, please take your seats. We're about to begin our next Plenary session. Just quickly, because we have lots of empty seats in the middle of the house, please don't mind taking those, even if you have to step over somebody's toes; just don't step on them too hard.

That would help us to have some room for latecomers. Please keep the door closed when you enter so the noise doesn't propagate into the room.

So, here on stage is our speaker from Cisco Systems, Mikhail Korshunov. His talk is about streaming telemetry: considerations and challenges.

MIKHAIL KORSHUNOV: Hello. My name is Mikhail Korshunov and I am a technical marketing engineer at Cisco Systems, where I work in the service provider business unit. On our agenda for today, we'll start with a brief introduction to telemetry, why it's important and what advantages it brings to us; then we will dive deeply into the components of telemetry, what we should choose and what considerations there are when we start to use telemetry; then current progress, how we can leverage telemetry on our devices and what the most popular use cases are; and the final note will be the closure.

Streaming telemetry has a few key aspects of its own. It's an automated communication process that gathers data from remote or inaccessible devices for further monitoring or analytics. Streaming telemetry was designed with a few key aspects in mind. The first one is fast: we should push as many counters as possible from our devices, and we should do it really fast. It should also be easy to configure, and this is a key aspect for any technology.

Also, the data which we receive should be reliable; that's why we can rely on TCP for transport. It should be deterministic, so we should know how to interpret the data we receive, and we use timestamps for that. Of course it should be useful: we should have full parity with SNMP and as much coverage with models as possible. So the device should push as many counters as it can, without heavy resource consumption.

When we first developed the technology itself, the focus with telemetry was to replace SNMP, to provide nice coverage with YANG models, and to get users to rely less on the CLI. This was the initial focus, and we have already covered both aspects in streaming telemetry.

Step 2 can be considered as a carrot as well, because a lot of vendors support on-change telemetry, or even event-based telemetry, and we can also do MDT streaming from devices. As for future ideas, from our point of view telemetry may take different directions: it can be extended to OpenBMP, we can do in-band telemetry, and we can decide where to go and in what direction.

Once we have the idea of telemetry and the key components behind it, let's proceed with telemetry considerations.

First, you go with models, and you need to decide which exact models you want to use. There are tonnes of native models for your device, and you can think of them as a one-to-one tool chain. That is the advantage of the first approach, when you use native models. But native models are vendor-specific: once you start your device with them, you need to enable different models for each vendor or each operating system. So which concerns should you have in mind when you start to export telemetry?

A very easy solution is OpenConfig models: if we go with OpenConfig, it unifies how the data is sent, so the models are equal across vendors. But there are still some nuances, and we need to pay attention to those details.

For example, here we have an example of the YANG model for network instance, and as you can see, there is a revision for this model. So, once you upgrade your device to a newer operating system version, you may get support for a later revision. Revisions may differ in terms of implementation and in the model itself, so once you update the box, you may receive slightly different data. You need to track the revisions precisely and validate them once you start to use telemetry.
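That revision check can be sketched in a few lines of Python. The module text and revision dates below are hypothetical stand-ins; in practice you would read the .yang files that each software release actually ships.

```python
import re

# Hypothetical excerpts of the same OpenConfig module on two software releases.
old_release = 'module openconfig-network-instance { revision "2017-01-13" { } }'
new_release = 'module openconfig-network-instance { revision "2018-02-19" { } }'

def latest_revision(yang_text):
    """Return the newest revision date declared in a YANG module body."""
    return max(re.findall(r'revision\s+"(\d{4}-\d{2}-\d{2})"', yang_text))

# ISO dates sort correctly as strings, so a plain comparison flags the change.
if latest_revision(old_release) != latest_revision(new_release):
    print("revision changed: re-validate your telemetry paths")
```

The same idea scales to checking every subscribed model before and after an upgrade.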

Revision is one aspect; another one is deviations. With deviations, you need to check for the specific deviation definitions in the model, for each OpenConfig model. Here we have two examples with different deviations, and this is the second concern which we need to be aware of.

We briefly talked about models and let's proceed with protocols.

Right now, in terms of protocols, there are three major ones supported, and of course our first choice is gRPC, because gRPC is a modern tool: it brings a lot of benefits, it has bindings in various languages, and there is a huge community around gRPC. But we are not limited to gRPC; TCP can also be used. TCP is well known in the network engineering industry and it's reliable compared to UDP, but you have a smaller set of advantages compared to gRPC. And UDP is very easy to configure, with no handshake overhead, but you do not have any reliability when you use it, and you can see missing data or errors once you run your device on UDP.

As I mentioned, gRPC and TCP are our first choices, and it's good to know, once you develop your own home-grown collector, that there is a header inside. If you do not use the tools provided by your vendor, you should consult the documentation for the internal telemetry header: how to decode it and which fields are available in it.

When you start to use gRPC, it comes with overhead, and various details are involved. For example, you have magic numbers, and you have multiple exchanges between your device and the collector. You have HTTP/2 details for your session and window size adjustment. All of this happens when you go with gRPC.

But when you start to use it, you can also control the speed from the collector side, and that's a major benefit, because the collector can adjust the speed.

A very popular concern and question is about security. With gRPC you can enable TLS, and the recommendation is to do so once you go into deployment. For a sandbox or a trial of your telemetry testing, you can go without TLS. But, for example, in gNMI, TLS is enabled by default, and if you want to switch off the TLS encryption you need to explicitly mention it in the config of your device.

Many vendors claim that gRPC is supported by their boxes, but again, everything is hidden in the nuances. Once you start to go through the implementation of the gRPC methods, you can see significant differences in implementation. On the left side we have the gRPC implementation from Cisco, on the right side from Juniper, and you can see that all the RPCs are completely different: different naming, different signatures and formats. So if you think that just because gRPC is supported you have a unified implementation, that's a wrong assumption. The gRPC proto files are always available on GitHub, so you can check them manually.

Once we have discussed the transport and the protocols, we need to define the encoding, because encoding is crucial: it will affect our utilisation and the tool chain which we use.

The first and most efficient one is GPB-Compact; everything is binary when you use this format. You have the highest efficiency, but for each model which you transmit you need to have the full proto file, and this is what decodes your message: you need it on both sides, on the router and on the collector. Once you have multiple versions of the operating system, your model definitions could differ, so you need to maintain multiple proto files, and here comes the complexity of using Compact GPB; it could be daunting for beginners. In Key-Value GPB, the keys are presented as strings and your data is in binary. It has medium efficiency, but you need only a single proto file for it, and that reduces your operational complexity significantly.
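As a toy illustration of why the encodings differ in size (the counter name and value below are made up, and real GPB framing has its own overhead, but the proportions are similar): JSON carries keys and values as text, Key-Value GPB carries the key as a string with a binary value, and Compact GPB relies on the shared proto schema and sends only the binary value.

```python
import json
import struct

sample = {"in-octets": 9182736455}  # one hypothetical interface counter

as_json    = json.dumps(sample).encode()                            # key and value as text
as_kv      = b"in-octets" + struct.pack(">Q", sample["in-octets"])  # string key, 8-byte value
as_compact = struct.pack(">Q", sample["in-octets"])                 # schema known: value only

print(len(as_json), len(as_kv), len(as_compact))  # 25 17 8
```

Multiply that ratio by millions of samples per day and the transport savings of Compact GPB become significant.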

And last but not least, on IOS XR you can use JSON as well. It was added by request from our customers, it's very popular in the network coding community and elsewhere, and with JSON your telemetry messages can be simply digested by message buses or other small agents which you want to use or leverage. Most network engineers are familiar with the JSON format, so it's very human-friendly and valuable here as well.

Once we compare the numbers, it's pretty obvious that GPB is the most efficient one. In second place is Key-Value GPB, and third place on the podium is taken by JSON, with the biggest message size.

Encoding, transport and the nuances of telemetry are already covered in this section, but when we start to use it, we can see how much data is generated. We conducted synthetic tests in the lab and generated 300 gig counters every five seconds, and these are the details of how much traffic was generated by telemetry. You can see that GPB is the most efficient one; Key-Value and JSON take second and third places respectively. As I mentioned previously, Key-Value GPB is a good first start for newcomers to telemetry. And in terms of transport, it's not about size, it's more about the tooling which you utilise.

How will telemetry use your links? If you have smaller collections, you will see just a small spike in your utilisation when the data comes in. When we talk about large collections, because telemetry is multi-threaded, you will see a smaller, smoother increase in utilisation instead of one big spike. Also, if you consider using on-change telemetry or event-driven telemetry, you may see just small spikes over time; we do not have any specific guidance there, so this applies mostly to model-driven telemetry.

A lot of open source tools are available for you to start consuming telemetry today, and these are our recommendations for open source right now. Let's review the main components.

On the left side you have your routers, the network devices from which you start to stream your data. Then we go to pipeline; this is an open-source collector, and you can digest all your data through it. Once pipeline is running, you can configure the settings in it and push your data to multiple receivers: it could be InfluxDB, it could be Prometheus, or Kafka directly, whatever you prefer. In some cases you may want to develop your own collector or use some commercial distribution, so the choice of stack is up to you. But there are a few things which you need to consider when you start to deploy it. A few other utilities are also available for your collection stack. For visualisation you can use Grafana or Kibana, for example, and you can just import dashboards right away. So there is no need to construct completely new scenarios or use cases for telemetry, because the basic sets are already covered.

Also, one other thing on this schema: Grafana has a built-in alerting mechanism, but you can use different components, like Kapacitor, for alerting as well.

You start to use some kind of collector, but you need to make sure that your collector is fast enough. There is a very nice utility, nixer, which allows you to show how much data you have buffered. If the buffer exceeds some threshold, the router will throttle back and the amount of data will be reduced. So you need to make sure that your collector has enough horsepower to digest all your messages, because typically one collector, depending on utilisation, can consume data from 30 to 50 routers.
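The buffering behaviour can be sketched with a toy collector (the class and threshold below are made up for illustration, not any real collector's API): when the consumer falls behind and the buffer hits its limit, new samples are lost.

```python
from collections import deque

class ToyCollector:
    """Illustrative only: buffer incoming telemetry messages, drop when full."""
    def __init__(self, max_buffered):
        self.buf = deque()
        self.max_buffered = max_buffered
        self.dropped = 0

    def receive(self, msg):
        if len(self.buf) >= self.max_buffered:
            self.dropped += 1   # collector too slow: this sample is lost
        else:
            self.buf.append(msg)

c = ToyCollector(max_buffered=3)
for sample in range(5):          # five samples arrive before any are processed
    c.receive(sample)
print(len(c.buf), c.dropped)     # 3 2
```

Watching that dropped counter (or the real buffer gauge your tooling exposes) tells you when it's time to add collector capacity.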

Also, on your collector, it's good to use SSDs rather than HDDs, because they are much faster and not much more expensive these days. With HDDs, you can see that sometimes your collector may not be able to write all the data because of the lack of performance.

Some other interesting issues can happen when you configure telemetry for the first time. You need to make sure that all your clocks are in sync, and you need to check the time settings on every part of your stack: on your device and on your collector machine. Otherwise you may see something like this: you are receiving the data, everything looks good from the router side, but you still haven't seen data for many hours.
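A minimal sketch of that sanity check, assuming (hypothetically) that each telemetry message carries the router's timestamp in milliseconds since the epoch: if the router's clock and the collector's clock drift apart, points land far in the past or future and dashboards look empty even though data is flowing.

```python
import time

def check_skew(msg_timestamp_ms, max_skew_s=30):
    """Compare a message's router timestamp against the collector's clock."""
    skew = abs(time.time() - msg_timestamp_ms / 1000.0)
    return skew <= max_skew_s

now_ms = time.time() * 1000
print(check_skew(now_ms))                     # clocks agree
print(check_skew(now_ms - 6 * 3600 * 1000))   # router six hours behind: flag it
```

Running NTP on both the routers and the collector machines avoids the problem in the first place.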

Then you choose a time series database. This is an important question, because it's the place where all your data will be stored, and there are a lot of aspects to consider when you select your database. First, how much data can you save, and how good is the performance of your database: what about query performance, writes, reads? Also, what about the community size? You want to be sure the community will not go away. What about support, and the pricing model? The reliability and maturity of the product is another concern. You can configure multiple retention policies for your database, and here the question is for how long you should keep your most sensitive data versus the data you don't care about or which you only use to build larger datasets.

What are the use cases for telemetry? With telemetry, you can see a lot. On the first graph, you can see BGP in Grafana; that's an additional plug-in which you can install from the Grafana directory, and you can see how many entries we have for the BGP prefix counts and real-time changes. If a new prefix is added, you will see that part of the graph in green, and if something is withdrawn, you'll see a red sign.

Then, real-time traffic load. Of course you want to monitor utilisation on your device, and we publish a lot of documentation on how you can start with telemetry today. So those are the basic use cases you go for. Another use case is to check RIB and FIB consistency on your box, and there are many more use cases, like monitoring your pluggable optics, where you can predict when a pluggable will fail based on the degradation of its output power data.

A lot of commercial distributions have already started to support streaming telemetry today, and we expect to have more players in this picture. So it's a good time to start investing in telemetry.

The new thing in streaming telemetry right now is gNMI, and it could address some of the existing issues of telemetry itself. It's a network management interface developed by OpenConfig and mostly led by Google. It's data model independent, it's built on gRPC and takes advantage of HTTP/2, and it has a useful toolset around it. The goal of gNMI is to provide a standard approach to encoding and transport protocol across different vendors.

gNMI is supported from IOS XR 6.5.1. The only configuration required on the box is to enable the gRPC port and start listening on it, and if you want to switch off TLS you need to provide that command as well. gNMI is supported by many vendors, with specification 0.4.

And let's take a look at the final messages. Telemetry is easy to start consuming, and a lot of use cases are already built for it.
Plus, you can go to the documentation and import the Grafana dashboards, or have a collector already configured for you, plus your time series database; the scripts are also available, and you can start to use them now.
You can go to xrdocs.io for more documentation, to read more about where we are heading, what the current state is, and more tips and tricks on how to use telemetry: how to get more information from the device, what is streamed right now, and how to use telemetry more efficiently.

And we shouldn't forget that telemetry is just a piece of a bigger picture, or a puzzle: you also have the whole programmability story. Another aspect is to provide an API for your RIB, or gNOI for your operational commands.

That's it for today, and thank you for your attention.

(Applause)

CHAIR: Any questions from the audience? We have some minutes for that. Wow! I don't know if that's a success, I did enjoy your diagrams. Thank you.

MIKHAIL KORSHUNOV: If you have any questions, just drop me a note.

CHAIR: Right. Any remote questions? Okay. So, then I'll hand over to Kemal Sanjta from ThousandEyes, another San Francisco company.

KEMAL SANJTA: Hi. My name is Kemal Sanjta and I work for ThousandEyes. We strive to provide network intelligence solutions for our customers. This is something that I am quite passionate about, and network monitoring is a big topic in our industry, right. More specifically, the topic for today is going to be: do we need to rethink that very name, network monitoring?

So, when it comes to network monitoring, we need to start thinking from the perspective of the troubleshooting lifecycle. Any troubleshooting scenario involves an issue, regardless of how we find out about that issue, right. The issue can be reported by customers, alerting systems and stuff like that, with the small digression that in 2019, if your issue is reported by a customer or end user, it's borderline tragic. Then we engage in the process of troubleshooting, and in the end we conclude, we find the root cause, and we potentially publish it, depending on the type of company that we run.

Now, historically, two tools, traceroute and ping, were the starting point for our troubleshooting, and for the most part they did their job correctly. However, due to the fact that they heavily rely on ICMP, there are problems: they fail to discover nodes, fail to discover links, and report false links. Ping for the most part does its job correctly, but the problem is that there is a very strong reliance on the control plane, right.
So over time we realised they are a good starting point, but we need to improve on this traditional troubleshooting toolset. Therefore, we started using MTR, which stands for Matt's Traceroute, which shows end-to-end packet loss, latency, standard deviation, stuff like that, per hop. Paris traceroute solves a problem of traditional traceroute related to one of the crucial values in the five-tuple that is used for hashing, which is the source port. As part of the five-tuple, we all know that we have source IP, source port, destination IP, destination port and the protocol, right, and while traditional traceroute changes the source port per probe, Paris traceroute fixes it, which means that we are getting consistent paths, and that's how we solve that problem.
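The five-tuple hashing idea can be sketched with a toy ECMP function (the hash, the addresses and the path count below are all made up for illustration; real routers use their own hardware hash):

```python
import hashlib

def ecmp_path(src_ip, src_port, dst_ip, dst_port, proto, n_paths=4):
    """Toy ECMP: hash the five-tuple to pick one of n equal-cost paths."""
    key = f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % n_paths

# Classic traceroute varies the port per probe, so probes can hash onto
# different paths; Paris traceroute keeps the five-tuple fixed.
classic = {ecmp_path("10.0.0.1", 33434 + i, "192.0.2.1", 33434, 17) for i in range(16)}
paris   = {ecmp_path("10.0.0.1", 33434,     "192.0.2.1", 33434, 17) for _ in range(16)}
print(len(paris))   # always 1: every Paris probe follows the same path
```

With the tuple fixed, every probe takes the same path through ECMP, so the hop list you see is a real path rather than a mix of several.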

Dublin traceroute is one that gives us the opportunity to peek beyond the NAT boundary. And our friends from the Netherlands, or more specifically the NLNOG Ring, developed a toolset; there are certain requirements to be a part of it, but once you get to be part of it, you get multiple vantage points all around the world with this amazing ring toolset, which gives you the opportunity to run traceroute and the like from the outside, which is pretty cool. It's quite important to mention the RIPE Atlas project here as well, as part of which you can get ten thousand probes deployed all around the world, which gives you multiple vantage points with some basic troubleshooting tools, such as ping, traceroute and stuff like that. So, we have improved on that historically.

Now, I briefly mentioned on the opening slide, where I spoke about the troubleshooting lifecycle, the various sources for alerts. As part of that, we have syslog, as part of which we send syslog messages to the syslog server, which basically categorises them based on severity. SNMP is quite popular these days as well. There is the rise of streaming telemetry, as our Cisco colleague just spoke about. And we have various collectors, which are quite an interesting one: they arose from the fact that operators realised that not everything you are operationally interested in is exposed via counters on your networking devices. So people started writing software which uses XML or NETCONF or other ways to pull the data from the router. Now, the real question is: can your control plane handle it? And that's a big one. For example, I'll give you a practical example of a company that had a problem with multiple different collectors trying to collect data about various things at the same time from the same device, as part of which the CPU was spiking to 100%. And if you think about it, routers have much more important things to do, such as making routing decisions, right. So, basically, there is a risk from the perspective of the control plane.

So what's the real problem in all of this? The real problem is time, and time is a problem because of the reactive nature of troubleshooting. We find out about the issue, and depending on what we use to solve that problem, it takes some time to troubleshoot it and come up with the root cause analysis, right. So we have a problem of slow response, which is closely connected with service degradation that can go on and on, which ultimately results in unhappy customers. Unhappy customers means that in the majority of cases you are going to lose your business, and that's not something that you want to deal with.

So, there is an open question here: is there any way to be proactive when it comes to identifying various networking events? Based on certain research that I did on this topic, it seems that there are no publicly available research papers on it. However, we know that certain large-scale companies that are deep into deploying white boxes have an advantage when it comes to this, purely because they control everything from the manufacturing of the devices to deploying those devices and controlling their complete lifecycle. So, just based on analysis of the frequency of failures of white box devices, they can basically figure out at least what they need to feed back to procurement, so that they can order those parts for the locations which may be remote, which are often remote in fact, so they don't struggle when it comes to the capacity they need. That's a very simple one, but it's a good start, right. But, generally, there are no published papers about this.

So, how to improve this? We figured out that time is the problem, right, and we realised that the real way of sorting this problem is automation: if we cannot do anything proactively here, the best thing that we can do is basically to automate operations out of our regular job. As part of that journey, we discovered Python and multiple libraries, we discovered the Go programming language with its concurrency, and some people actually went ahead and wrote quite useful frameworks such as Ansible, Salt and other ones that are basically helping us do our job in a much smaller amount of time. So, for example, Ansible can be used for data centre deployments and stuff like that.

What we realised after deploying the automation that sorted out our operational tasks is that we started doubting that vendors tell the full operational truth about the performance of our networks, right, and that's a big one. How many times have you heard that line cards are being reloaded as a result of solar flares? It happens, right. It may not be happening in a smaller data centre or smaller deployment, but if you are working for a large-scale company, this is something that you are going to get as a root cause analysis very often from the very popular vendors that are deployed.

Counters for exactly that issue that you are trying to troubleshoot or identify do not get exposed, and you find that out when you engage TAC. So you go from the first level of TAC to the second one to the third one, and now you are a couple of layers into the troubleshooting cycle, and then all of a sudden they can figure it out, but it's not exposed to the user, and you are like: how is that helping me? Right.

Or, the counters exist, but you need to be a very skilled magician to attach yourself to the line card and to understand the deep details of the architecture of that line card, such as the ASIC structure. And remember, ASICs change very often, and every time the ASICs change, the underlying architecture of the line card changes and you are back to square one.

I was working at a company where the backplane was struck by a single malformed packet that took a fully redundant N+2 backplane down, as a result of which we lost full chassis, right. Losing one chassis is usually not the problem, but when that same packet hits the complete layer of those devices, you lose a point of presence. Not a problem, except for the fact that it was a big one: it happened on two different continents. Now it gets to be a big problem, right. And in the end, with the rise of software development and the effort that we are putting into operations when it comes to networking, the real question is: can your control plane handle it? I mentioned briefly the case with the various collectors, right.

So, once we got confident about the automation, that it actually works (first we deployed it and we were like, does this even work?), we ended up in a situation in which we got to doubt vendors, so we got a product called vendor distrust. How do we work around that? We work around that by utilising something we call active network monitoring. To describe some of the challenges with active network monitoring and to define it, I'm going to use this example of common design challenges in data centres, or large-scale data centres, these days. Large-scale enterprise networks started moving towards CLOS fabrics, the fabrics that are built with spine and leaf layers, right, and that was all done purely from the perspective of limiting the blast radius. The idea there is that it's potentially better to lose smaller-scale devices than, for example (I'm just making up the example here), a fully loaded Cisco ASR 9922 which can take 160 teras of traffic, right. Now, the problem is that smaller-scale devices in turn suffer from a smaller RIB or FIB, or in general weaker control planes, so if those big chassis had problems with the control plane, this problem is going to be elevated on those smaller chassis.

Now, I did some very small research into devices that are currently used for building CLOS fabrics. The Juniper PTX1000: 24 by 100, 2.88 terabits of capacity, right. The Cisco NCS5000: 32 by 100, 3.2 teras. And the Arista 7170. Of all of those devices, I think the Juniper PTX1000 is the biggest one. Now, from the data centre perspective, you get some really good benefits: for example, you get a smaller electricity footprint, which is something you want, and it also doesn't consume as much space and doesn't produce a lot of heat. If you are into data centre design, you pretty much understand that those are quite important things to take care of.

But on the other side, with those smaller-scale devices, regardless of who you are or what network you operate, losing between 2.88 teras and 6.4 teras is something that your customers are going to feel, right. Then again, I guess it depends on the angle: it's better to lose 2.88 teras in a PTX1000 rather than losing a fully loaded ASR with 136 teras, right.

So, basically, what is active network monitoring? Active network monitoring is nothing new. The basic idea is that you utilise the data plane to measure experience. And how do you do that? By using synthetic traffic: you are simulating end-user traffic from source to target, and then you are measuring packet loss, latency, or whatever your service metric is. So that's the basic idea.
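The measurement side of that idea can be sketched as a small summariser (the function name and the jitter definition, mean absolute difference between consecutive RTTs, are my own choices for illustration): given one RTT per synthetic probe, with None marking a probe that never came back, it derives loss, latency and jitter.

```python
def summarize_probes(rtts_ms):
    """Summarise synthetic-probe results: one RTT in ms per probe, None = lost."""
    received = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
    avg = sum(received) / len(received) if received else None
    # jitter as mean absolute difference between consecutive received RTTs
    jitter = (sum(abs(a - b) for a, b in zip(received, received[1:]))
              / (len(received) - 1)) if len(received) > 1 else 0.0
    return {"loss_pct": loss_pct, "avg_ms": avg, "jitter_ms": jitter}

print(summarize_probes([10.0, 12.0, None, 11.0]))  # 25% loss, 11 ms average
```

Alert on thresholds over these numbers and you have the core of an active monitoring system: the probing transport (UDP, TCP, MPLS tunnels) is a separate choice.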

Now, for practical applications, there are commercially available solutions that do this: basically they simulate your user traffic and provide you with alerting and stuff like that. And there are open source solutions. One of the most recent developments when it comes to active network monitoring came in the format of the Matroschka prober, which was developed by a German developer. The idea there is that you build unidirectional MPLS tunnels that map all the possible paths by using all LSPs, and if you experience loss, for example, as one of the metrics, and you have three tunnels with one of the three experiencing that loss, you can quite easily narrow down which one it is.

Unfortunately, as far as I'm aware, it still doesn't have the capability to automatically report which path is experiencing the loss.

The second development is NetNORAD, which is Facebook's open source solution. It's UDP-based, and it has the capability to tell you that there is loss between a certain source and destination. The last time I checked, it didn't have the capability to pinpoint the exact L3 interface that was experiencing the loss, but the fact that you know where the loss is, or that you are actually dealing with loss, is a very good start, right. So, both of these solutions are quite a good step forward.

Now, some more challenges when it comes to active network monitoring come in the format of backbone networks; so far we have focussed on data centre networks, right. We know that backbone networks are usually built with either segment routing, which is a kind of newer approach, or, still predominantly, with MPLS. And in MPLS-based networks which are utilising auto-bandwidth, the problem is that you are dealing with a potentially moving target, right. Unfortunately this is a very well-known problem; Google and Facebook have talked about it, but nobody has actually said anything about potential solutions. So, here is one of the probably earliest proposals on how to solve this problem.

We know that MPLS uses the IGP for the underlying paths: in order to construct the constrained SPF, you need to have the SPF. So, basically, if you have the SPF, the same rules apply as for the data centre, and in the majority of cases the best IGP path, if there is no networking event, is the best MPLS path as well. The idea here is that if you have multiple POPs that are connected by your MPLS-enabled network, you create a full mesh of probing between them. Then, even if auto-bandwidth moves the traffic from one LSP to another LSP, your underlying check is going to figure the problem out, and you are going to know which L3 path is experiencing that loss, and then you can actually act on that: drain the traffic from that path and move it to other paths. So that's the idea here.
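Generating that full mesh of unidirectional probes is a one-liner (the POP names below are hypothetical): every ordered pair of POPs gets its own source/target probe, so each direction of each path is measured independently of where auto-bandwidth moves the LSPs.

```python
from itertools import permutations

pops = ["AMS", "FRA", "LON", "NYC"]

# Ordered pairs: probe each direction separately, since LSPs are unidirectional.
probes = [(src, dst) for src, dst in permutations(pops, 2)]
print(len(probes))  # n * (n - 1) = 12 source/target pairs for 4 POPs
```

The mesh grows quadratically with the number of POPs, which is the main cost to plan for in a large backbone.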

And remember, this may not give you 100% visibility, right. But, it's always better to have any visibility than no visibility. So...

Then, did we forget about something? We forgot about the Internet, right. So, what's the problem with the Internet? The problem with the Internet comes in multiple formats. We have this publicly shared infrastructure that nobody has clear control over, right. In this room we have many of the people that are operating it, but nobody has full control. As part of that, we are experiencing packet loss, latency, jitter, BGP advertisements and withdrawals, prefix hijacks, stuff like that. Those are the potential problems of the Internet. How do you monitor that experience as well? The other thing is, if you think about it from your own ASN's perspective, you do not have a lot of opportunities for control. Using BGP traffic engineering, for example RFC 1998, as part of which you can use communities to influence the traffic with your adjacent or transit provider, only means that you are controlling up to three hops away. So, you are basically out of control, and monitoring of Internet performance is crucial.

So, what are the solutions? There are some commercially available solutions, but you also have the possibility to use the traditional troubleshooting set of tools. Unfortunately, in both of these cases, you are still reactive. Here I need to give credit to the RIPE team, which has the RIS project, feeding data that gives you data points within about two seconds, which is the most granular data out there ‑‑ so huge credit to the RIPE team.

So in the end I just wanted to speak quickly about the conclusions.

Learn how to code. I honestly believe that the network engineering industry is rapidly changing ‑‑ for this reason, I switched jobs a few times ‑‑ and it's going to be a required skill, not only to deploy solutions, which it obviously does help with, but to give you the capability to build solutions to actively monitor your networks.

The next one is to utilise research papers on data centre and backbone design from the companies. And this is a big one. You shouldn't be learning from your own mistakes. Learn from someone else's mistakes.

Utilise both active and passive network monitoring, and monitor the performance of your Internet paths as if the life of your packets and the patience of your customers depend on it. And don't stop there. Lately there is this trend where SREs tend to blame network engineering: oh, it's a network problem, right. And then the network engineering people come back and say, no, no, no, I'm going to prove you wrong. Don't do that. Just adopt a holistic approach where you have both network visibility and application‑level monitoring on the same side, which is going to help you make deterministic decisions about where the problem actually might be.

Thank you very much.

(Applause)

CHAIR: Thank you, Kemal. Any questions? It's your chance to ask him now. Okay. Then thank you very much. I think we can find you around anyway, if people have some questions.

The next slot is the lightning talks slot. So please come up. Our first speaker is Melanie, and she will talk to us about how to inspire customers with women in tech. Just a quick reminder on how lightning talks work: in the next 30 minutes we have three speakers who will each give a short presentation and pitch an idea that will make you think about something, and you are very, very encouraged to ask questions afterwards.

MELANIE BUCK: Let's talk about something different now, and that's about women in tech. I am Melanie, I work for GoDaddy, where I am chief of staff and head of [Pierre], and today I will tell you a bit about how this topic, women in tech, came into my life and how we inspire our customers with it today.

Just quickly about GoDaddy. GoDaddy is an international company. We have 8,000 employees, headquartered in Arizona in the US, and in EMEA we have about 1,200 employees. Our goal is to make small and medium businesses successful online. So we have websites, domains, everything you need to be successful online.

And here in EMEA we have around 16 locations and, yeah, 1,200 employees. Just quickly.

Women in tech ‑‑ I think every big company today has such an initiative. I remember many years ago I didn't know about that, but today everybody has it. And I really think it's important, because we need to make a difference and we need to have more diversity in the tech industry. The goal for GoDaddy is really to raise awareness and discussion around this in the industry, and to make GoDaddy a very good workplace for women as well, not only for men. That's our goal: to really change something and change the world.

And now to my personal story about how this came to my life.

As I just said, many years ago I didn't know so much about this topic, and about one‑and‑a‑half years ago I suddenly received an e‑mail from our former head of EMEA, whom I have known for six years now. He sent an e‑mail to somebody from the US in our team, copied me, and said: yes, sure, Melanie is happy to launch the initiative in EMEA. I thought, interesting ‑‑ he didn't tell me before. So I thought, yeah, let's have a look at what it is about, because I didn't know much about it, and I started my first research. But to be honest, my very first thoughts were a bit sceptical, and I thought: do I want to be a key driver for such a topic, and do we really need this? Hey, I made my way, and other women in the company also made their way. And if I do this, am I a feminist, and is that something bad or something good? So there were a lot of thoughts in my head and a lot of doubts, to be honest. Then I talked to some people, including the woman in the e‑mail, which was Gail. She was the president of GoDaddy Women in Tech in the US, and she told me what they are doing, what they stand for and why we need this. And then I thought: okay, she is right, we don't have so many female technicians and we don't have so many women in leadership positions. So I thought: okay, let's just do it, fine for me. And then there were three other ladies who joined me in launching this ‑‑ somebody in Romania, somebody in the UK and two in Germany ‑‑ and we just started it.

At the beginning, of course, it was about defining our goals: what do we want to do? Why is this important? We had a big launch event and invited all the employees, everybody, to come and talk about it. And then the four of us realised: hey, 16 locations, so many people ‑‑ four of us is not enough. So we brought in five leads, and here you can see many of them; that was at the conference last year in Amsterdam. During the year we learned a lot. First, we learned: yeah, we need this. The second learning was: okay, four of us is not enough, we need to expand it. And then we also found out ‑‑ you can see these people here, and what is very important from my point of view is that this shows it's not only about women, it's about women and men. Because we can only make a difference together. It doesn't make sense to only have women in tech in the end. We need both to have a diverse team, to have different ideas, different approaches to discuss topics and to tackle topics.

And what inspired me a lot this year was that we even inspire our customers. At the beginning of this year I came in touch with Susie. Susie is a GoDaddy customer; she comes from the US and moved to Europe, to Germany, last year. She was a GoDaddy customer already, and in Germany she wanted to start a new business, which is about self‑defence for women. So it's really about strengthening women, making them stronger and making them believe in themselves, and she thought: okay, I need the right partner to do this. I'm not sure if you know GoDaddy from many years ago, and if you ever saw the old advertisements ‑‑ check them out on YouTube if you would like to ‑‑ that was not the best way to show women. It was a bit sexist, the advertising did not come across well, and that's what she had in mind. So she thought: hey, this can't be the right partner for my business. But luckily she did more research and found out that we have changed completely. Today we have this initiative, we have a new CEO, we feel totally different about it, we care about it, we want to have more women and we want respect for each other. So she decided: hey, I like GoDaddy, I like the services and I also like the values, and we share the same values. That really inspired me. She was even at the first anniversary of GoDaddy Women in Tech and celebrated together with us.

And I think that's pretty cool. So here you can see that it's not only about price or performance or quality when a customer chooses a company; it's about much more today, also about things like women in technology. And I'm so happy that I launched it, that I started it, and that I'm now part of something bigger.

And here are just some key learnings from the first year that I have been part of it. The first very big thing is that it's about diversity. We need both, as I already said: we need women, we need men, we need everyone, just to have a diverse team with many colourful ideas and to be successful.

Women in tech has also improved our reputation a lot. If you compare it to the past ‑‑ as I just said, check it out ‑‑ it's a big difference.

It also attracts great customers and great employees, which is important because today it's not easy to find good people in IT, as you all might know, and with this you really attract great people.

And yes I'm doing this for a year now besides my normal job and it's a lot of effort, it takes a lot of time but I'm still sure it's worth it and it's important. And so, yeah, I am happy to be here today as well to talk about it, to really raise the awareness for the topic, and to talk about it.

And that's it already from my side. So I'm around the whole week here. I'm also, tomorrow, at the women in tech lunch as well and to discuss with many people about this topic, what we are doing and what we want to achieve, and if you have any questions, just reach out to me and I'm happy to talk.

(Applause)

CHAIR: Questions? We have plenty of time. Okay, I think you covered it. Welcome our next speaker, Christopher from RIPE, who will tell us how to measure DNS without breaking anything or everything.

CHRISTOPHER AMIN: Thank you. I'll tell you something like that. So, it's about measuring DNS, specifically with RIPE Atlas. I'm a software developer for the NCC; I work on RIPE Atlas. And we try to be safe when we develop RIPE Atlas; safety is very important for us at the NCC ‑‑ we don't just serve, we also care. With RIPE Atlas, measurements can be scheduled from any probe in the system to potentially any target anywhere on the Internet, and this raises safety concerns. The biggest concern is: if someone is so kind as to volunteer to be a probe host for us, then the probe is going to be doing measurements to all sorts of different things around the world which the host doesn't know about. We don't want the probe to be doing something which is going to get the probe host in trouble with their government, with their boss, with their wife, I don't know, their life. We want to make sure that they can basically leave this thing plugged in, it's going to do stuff which is going to help other people ‑‑ researchers, engineers, the Internet in general ‑‑ but they are not going to get in trouble.

The other side of safety is that we want to be good citizens. We want RIPE Atlas not to cause damage to the targeted systems. So we don't want the probe hosts to be doing a denial of service attack on a random service. We have lots of protections in place for that.

So we have various proactive protections in place: limits, rate limits, we restrict what can be measured, and there is a restricted set of measurement types which can be performed. And we are also very reactive, so if there is an issue, we react quite quickly ‑‑ we can shut it down, block users, things like that.

So we have measurements of different types. One type is DNS measurements; RIPE Atlas has had DNS measurements from close to the beginning of the project. It allows you to do a DNS lookup using either the locally configured recursive resolvers or a specific name server somewhere on the Internet. There is no restriction on which server can be targeted or queried, but there are rate limits, so you can't hammer one particular name server with all of the probes in RIPE Atlas. And we do support DNS over TLS. We also have HTTP measurements, including HTTPS measurements, which allow you to instruct a set of probes to fetch a single HTTP resource. You don't actually get the body in your measurement result, but you do get statistics: you get the status code, you get the size of the response, things like that.
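
As a sketch of what scheduling such a DNS measurement looks like through the RIPE Atlas v2 API ‑‑ the field names below are as I recall them, so check the current API documentation before relying on them:

```python
import json

# One-off DNS measurement using each probe's locally configured resolvers.
measurement_request = {
    "definitions": [{
        "type": "dns",
        "af": 4,
        "query_class": "IN",
        "query_type": "A",
        "query_argument": "example.com",
        "use_probe_resolver": True,  # resolve via the probe's local resolvers
        "description": "DNS lookup from 10 worldwide probes",
    }],
    "probes": [{"requested": 10, "type": "area", "value": "WW"}],
    "is_oneoff": True,
}

# This JSON body would be POSTed (with an API key) to
# https://atlas.ripe.net/api/v2/measurements/
body = json.dumps(measurement_request)
```

Setting a `target` name server instead of `use_probe_resolver` would direct the query to one specific server, which is where the rate limits mentioned above come in.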

There's been a concern from the beginning that because this basically allows the user who schedules the measurement to ask for any resource anywhere on the Internet, it could get a probe host in trouble, because you could ask for something which is illegal in their jurisdiction, and then they are in trouble and, hey, they didn't do anything.

So, we have quite a restrictive white list on the targets which can be requested with HTTP. If you schedule a ping measurement or a traceroute measurement, you can target anywhere in the world, but for HTTP measurements you can only request a single resource from one of the RIPE Atlas anchors, and it's a predetermined response. We kind of know it's safe, because if it's illegal to query a RIPE Atlas anchor in your jurisdiction, you probably shouldn't be hosting a RIPE Atlas probe.

Trusted users can bypass this white list. So if someone gets in touch with us and says: we have got a research project and we would like to target various HTTP servers, and that's well defined and we understand that it's safe, we can work with them and allow that.

So we have got DNS and HTTPS, but what we don't support is bringing the two together: DNS over HTTPS, which clearly is being used more and more. We could support it with RIPE Atlas, of course, but it has a lot of the same problems as both of the other measurement types which I discussed, especially the HTTPS measurement type, because it's not readily distinguishable from an ordinary HTTPS query. So essentially we have the same concerns. It would be basically pointless to have a white list of targets that was, say, the RIPE Atlas anchors, because then you would only be testing DNS over HTTPS from RIPE Atlas probes to RIPE Atlas anchor servers, so it's not really testing anything. Other kinds of white list could be possible, but that would be something to discuss.

I'll just add that it's already possible with RIPE Atlas to fetch a TLS certificate on a given port, including port 443, but there's no HTTP request involved, so we already go part of the way towards allowing arbitrary requests. But, I mean, it should be detectable that it's not a real request, it's purely just pulling down a certificate.

So, on the second bullet point, preventing harm to the targeted systems: RIPE Atlas supports DNS, and it supports EDNS options. They are on the right there ‑‑ these are the API parameters of the options that we currently support. Not all of the documented codes are supported, so not everything that you can find in the RFCs is supported. We rely on people to tell us what we should add, basically. And this was kind of inconvenient for the people who came to us for the flag day and said: hey, it would be really great if we could test the flag day servers with RIPE Atlas. And we said: well, that's cool, but we don't currently have support for a couple of the EDNS options ‑‑ which we have now added, but it wasn't in time. More broadly, we don't have any support at all for undefined EDNS values, which really limits testing new and experimental features with RIPE Atlas. So if you have a new feature which isn't supported by Atlas, and maybe is not even documented yet, you can't test it.
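
To illustrate what an "undefined" EDNS option means on the wire, here is a minimal sketch that builds a DNS query carrying an EDNS0 OPT record with an option code from the local/experimental range (65001‑65534); per the EDNS spec, a receiving server is supposed to ignore option codes it does not understand:

```python
import struct

def encode_name(name: str) -> bytes:
    """Encode a domain name into DNS label wire format."""
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def query_with_edns_option(qname: str, opt_code: int, opt_data: bytes) -> bytes:
    """Build an A query with an EDNS0 OPT record carrying one option."""
    # Header: ID, flags (RD set), QDCOUNT=1, ARCOUNT=1 (the OPT record).
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 1)
    question = encode_name(qname) + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    rdata = struct.pack("!HH", opt_code, len(opt_data)) + opt_data
    # OPT RR: root name, TYPE=41, CLASS=UDP payload size, TTL=0, RDLEN.
    opt_rr = b"\x00" + struct.pack("!HHIH", 41, 4096, 0, len(rdata)) + rdata
    return header + question + opt_rr

# Option code 65001 is in the range reserved for local/experimental use.
wire = query_with_edns_option("example.com", 65001, b"\x2a")
```

As the talk notes, "should ignore" and "does ignore" are different things, which is exactly why exposing such queries from thousands of probes needs care.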

So, the reason that I'm here is that I want to start a discussion, get some feedback and ask the community widely about these things. Should RIPE Atlas support DNS over HTTPS, full stop? If so, how can we make it safe for probe hosts? If we work out some amazing system which makes DNS over HTTPS safe, could we then backport that to HTTPS, to allow people to do HTTP measurements to arbitrary targets in a way that's not going to compromise people? Should RIPE Atlas support currently undefined EDNS options, and could that be abused? In theory, if you send an EDNS option in a DNS query and the DNS server does not understand it, it should ignore it ‑‑ that's what should happen. So you could say that allowing these queries should be safe, but that doesn't mean that it is. And if we did allow supporting undefined options, is there anything extra that we could do to make it safer for servers? There is the RIPE Atlas mailing list where I think we can discuss this and get some more feedback. I have an e‑mail address and I'm also a human, so you can talk to me here. That's me done.

Questions?

CHAIR: Thank you.

CHAIR: We have enough time for all of your questions, but still, please be brief. We start here.

AUDIENCE SPEAKER: Fascinating. I figured it would be good to be close to the mike, but I didn't envisage this. Lars Liman from Netnod. I understand the problems. I love what you do; it's helpful. Is it possible to approach this from the other side: when you enable your probe as a probe host, you have check boxes to say that this probe can be used for these and these purposes? I understand that puts a burden on the probe host, which might be ‑‑ I wouldn't say too complicated, but at least it's a step of complexity that you want to avoid. But maybe that can help you in some situations. Maybe you have thought of it already ‑‑ you probably did. Thanks.

CHRISTOPHER AMIN: Yeah, that's something that we considered, but we have this kind of dynamic where you are probably going to have a white list or a black list on one side and a white list and a black list on the other side, so the complexity starts to spiral. But that could well be part of the answer.

AUDIENCE SPEAKER: Matthijs Mekking. After RIPE Amsterdam I initiated a request for the EDNS options, and I realise we may have been short on time before the flag day. The DNS community is maybe working on a new flag day and new ideas, so I think we should be involved more on time; maybe RIPE Atlas can help on the next one. Thank you.

CHRISTOPHER AMIN: Yes, thanks. The more time we have, the more time we can implement and support.

AUDIENCE SPEAKER: Dmitry. Well, we have a flag day presentation also, so maybe you should attend that. If you are going to be doing that, maybe you should have a GitHub spec on what the probe is doing, something that describes things that people can just do. So it's more incremental than the things you mentioned here, but I appreciate you asking these questions.

AUDIENCE SPEAKER: Roland van Rijswijk, NLnet Labs. On the second one of the EDNS options, as a researcher I found it useful to be able to experiment with things that are maybe not supported out there. I also used to work for an operator and as an operator those things scare me because they can trigger crashes in stuff that I'm running, that is being launched from the Atlas probe so it's not just the Atlas person you are protecting but also the people receiving those requests. I think you should be very careful before you start enabling measurements that may trigger bugs in upstream things. Even though as a researcher I'd love to be able to do that.

As for DNS over HTTPS, that's a tricky one, and I think you summarised it really well. What you could possibly do with Atlas probes is some sort of latency measurements to see the performance of the known DOH‑supporting servers at the moment. Once this TRR thing ‑‑ for the people that don't know, that's DOH from your browser and how it is configured; look up the discussion, it's too much for the mike ‑‑ once that starts kicking off, it might be useful to do more measurements of DOH, and from that point on I think it would be very interesting to be able to measure that. Also if there is some discovery mechanism that allows you to discover a DOH server that is in your network, because one of the benefits of the Atlas probes right now is that you can go to a DNS server that is in the network of that probe, and for DOH that would be equally interesting.

CHRISTOPHER AMIN: Yeah. Thank you.

AUDIENCE SPEAKER: Petr Spacek, CZ.NIC, and also one of the DNS flag day organisers from the previous year. On the first point: DNS over HTTPS was specifically designed to look the same as ordinary HTTPS, so the answer has to be the same ‑‑ either we solve it for HTTPS or we just scratch it. And speaking of EDNS options: for the previous flag day we were doing Internet‑wide measurements and we bombarded basically everything which spoke on port 53 with all sorts of weird EDNS options, and we had exactly zero complaints. So it seems to be safe.

AUDIENCE SPEAKER: You already have a process for allowing trusted users to conduct experiments that are outside the bounds of what normal users have access to. I think that's probably the best mechanism to use for things like DOH and the undefined‑values testing, at least initially. It provides you a known control point: if you get an abuse complaint, you can contact them easily and shut the experiment down as needed. It provides adequate safety for the community, I think, and you know that you are dealing with a responsible researcher. So I think that's probably the best mechanism.

CHRISTOPHER AMIN: Thank you.

AUDIENCE SPEAKER: Hi. I am Eleanor from the RIPE NCC and I have a comment from a remote participant. Nick says: it would be great to have DOH support for RIPE Atlas. I suggest you use the public DOH server list as a white list.

CHRISTOPHER AMIN: Thank you.

CHAIR: Okay. Thank you. So you will get some insights during the meeting I would think.

(Applause)

Next up is Enno, who is asking two, I think, very interesting questions: whether 2019 is finally the year of Linux on the desktop ‑‑ I think we have said that for a couple of years now ‑‑ and what about v6‑only networks?

ENNO REY: Actually, I'm not going to tackle the first one. This was meant as a kind of joke. As you might know, at the beginning of every year there is something going on on Twitter like: oh, is 2017, '18, '19, whatever, finally the year of Linux on the desktop? And the same happens for DNSSEC and the same happens for IPv6, and I am going to focus on IPv6. As most of you probably know from previous meetings or from other occasions, at the company where I work we do a lot of stuff with IPv6, and the background of this talk is twofold.

First, many conferences nowadays have SSIDs for v6 only plus NAT 64, and they might even do so as their default SSID. For example, we do: we run a security conference in Germany called TROOPERS, and there, since 2016, the default SSID is v6 only with NAT 64, and we don't tell people. As far as I know there are others with a similar approach, like Cisco Live or the RIPE meetings, where there is a dedicated SSID. And the background of these different approaches ‑‑ do it as the default or do it as a dedicated thing ‑‑ is a bit of a question like: okay, should we tell people? Is anything going to break once your systems get connected to v6 only with NAT 64? That's the crucial question in such settings. And this crucial question has been around for many years; for many years there has been a lot of discussion like: well, if we do v6 only, I have heard something is not going to work, and there is this blog post, or in some forum there is a remark from people complaining about certain stuff not working.

So, in the middle of the debate is this question, as we say in Germany: do things break, and, if so, what breaks, actually? What type of applications or what type of stuff doesn't work in a v6‑only setting? Which might induce questions, say from the service provider perspective: well, do we care? Should we care? It might depend on the type of offering. If it's a free offering, one might have a stance like: well, I don't really care if things break as long as my operational expenses can be cut by 10, 20, whatever percent. And obviously there might be the question: okay, what could we do? In this, say, ten‑minute talk, I'm going to focus on the first one: what breaks? And in v6‑only Wi‑Fi networks ‑‑ it's only about Wi‑Fi here ‑‑ we can focus on the perspective of Wi‑Fi for consumers, as opposed to guest Wi‑Fi for corporate users. There is a case study project behind this; I'm going to provide more details on it in the v6 Working Group on Thursday.

And to answer the question, we built a very simple lab. We put up, say, a NAT 64 capable Layer 3 device, with DNS 64 in this case provided by Unbound, and some Wi‑Fi infrastructure ‑‑ in our case it was Cisco‑based, but I think this is not of too much importance for the results.
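
For reference, DNS 64 (here provided by Unbound) synthesizes AAAA records by embedding the IPv4 address from an A record into a /96 prefix, usually the well‑known prefix 64:ff9b::/96; a minimal sketch of that mapping:

```python
import ipaddress

def synthesize_aaaa(ipv4: str, prefix: str = "64:ff9b::/96") -> str:
    """Embed an IPv4 address into the low 32 bits of a NAT64 /96 prefix."""
    net = ipaddress.IPv6Network(prefix)
    v4 = int(ipaddress.IPv4Address(ipv4))
    return str(ipaddress.IPv6Address(int(net.network_address) | v4))

# An A record for 192.0.2.1 becomes the synthesized AAAA 64:ff9b::c000:201,
# which the NAT 64 device then translates back to IPv4 on the wire.
```

In Unbound this corresponds roughly to `module-config: "dns64 iterator"` and `dns64-prefix: 64:ff9b::/96` (directives from memory; check the Unbound documentation for the exact syntax).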

We looked at several operating systems, both desktop‑based and mobile devices. We looked at certain groups of applications. I'll give some details in a second. And we not only looked at like, say, groups of applications but we tried to define test cases, perform an initial connect, perform certain operations, join a group communication, share pictures, send messages, receive messages, stuff like this, obviously perform tests, if the stuff works, and if not, try to find out why it doesn't work. What's actually happening in the background.

To give you an idea of what was tested: these are two out of six categories, social media and streaming. As you can probably see, these are very common types of applications which might be used in the type of setting I described ‑‑ Wi‑Fi for consumers, say, at public hot spots. We looked at certain stuff in the communications area, and we looked at games. Again, there is a project behind this, and in that project it might be, say, students or pupils spending their waiting time, and they might be tempted to play games, and the experience of those games might be important.

And we tried to group this by operating systems and by test cases.

And what I can already say ‑‑ I'm going to cover the not‑so‑good things in a second ‑‑ is that overall, in the mobile operating system space, pretty much everything worked. So, if you can assume that the user population that is going to use a v6‑only Wi‑Fi is working with mobile devices, with smartphones, it's a fairly safe bet that pretty much nothing will break, even in a v6 only plus NAT 64 setting.

In the case of Apple ‑‑ as some of you might know, in 2016 they pushed out a mandate: once an application wants to be listed in the App Store, it has to be tested in a v6 only with NAT 64 setting. And in the case of Android, this might be related to 464XLAT; we didn't have a close look at this, it was just important whether it works or not. Most categories worked out nicely; there are still two where some small issues were identified, which are games and streaming. Looking at those more closely, one can differentiate between two main scenarios. Either the app doesn't work at all ‑‑ where "the app" usually means the app running on a desktop system. This is important for the things we discuss right now: all the problems we observed actually happened on desktop operating systems. Again, pretty much nothing on mobile, on the smartphones.

So, two cases: doesn't work at all, versus some things work and others don't. And just to give you a very quick idea of what could be observed: Spotify on desktop operating systems had issues. Looking at network traffic, it turned out one couldn't observe anything, so it seems to be a local problem ‑‑ a problem of, say, opening a socket, something in that space; you couldn't see anything in the traffic. And then there was Fortnite. Fortnite was an interesting example. What one has to know is that Fortnite, like many other games, uses a specific engine, the Unreal Engine, for many functions, and when one wants to play Fortnite, one first has to install it using a certain launcher. In some settings this launcher had problems. Later on, once the application is installed and one wants to play ‑‑ I mean, I don't play Fortnite, so hopefully what I'm telling you here is roughly correct ‑‑ there is a kind of meeting space, and from there one decides: okay, let's play a game. As I mentioned, the launcher had some issues. An interesting one comes later, and ‑‑ I wouldn't say this is a typical case, but it is an interesting case, as you could really see on the network why the stuff fails ‑‑ at some point an XMPP client is involved, and that one only performs DNS queries for A records, which obviously in a v6 only with NAT 64 setting don't work. So here we could identify the actual cause, which could be helpful for later vendor communication.

In November, so six months ago, they specifically stated: we are going to support v6 in a much better way than we have done so far. So I would expect even those problems that we observed to go away very soon.

So the interim conclusion is: in 120 test cases on five operating systems ‑‑ not all operating systems supported all applications or all test cases ‑‑ we could only identify three which didn't work, and those three were in the desktop space. Two applications, again in the desktop space, had some small issues, but, well, this means pretty much everything worked, and pretty much everything worked on mobile devices. So this is the thing to keep in mind when I ask what breaks: one always has to ask the question, okay, what breaks for a specific user population with specific expectations? Say, once you identify that Spotify on a desktop doesn't work, this might not be of interest when it's about providing hot spots for bus stops ‑‑ at bus stops not many people will sit with a laptop listening to Spotify ‑‑ as opposed to providing wireless connectivity for a co‑working space, where it might actually happen that people sit there coding and listening to Spotify. And the same, say, for Fortnite. I know that my oldest son plays Fortnite on his Xbox. If you provide wireless connectivity at, say, a bus stop or a football stadium, nobody will pull an Xbox out of a backpack to play Fortnite, but once you provide wireless connectivity for a hospital, that's a very different picture. So this has to be kept in mind.

What will be the next steps of this? There weren't many failure cases, and we tried to narrow down what happened in those failure cases. Vendor communication is on our list; we haven't done this yet. But what I can say from our lab results is that the defensive position many IPv6 people had in the last years ‑‑ like, well, we don't really know; say, reaching out to a vendor with the position: v6 support, we can't really expect this ‑‑ nowadays, with this type of results, it's much easier to say: all your competitors work, it's just you who has a problem, and this problem is related, say, to DNS resolution.

We plan to publish the full results very soon. And we will continue this. The next steps will be to look not just at typical consumer applications but also, say, at VPN clients. For example, some of you might know that at Microsoft there was an effort enabling v6 only for the guest Wi‑Fi, and at some point they had pushback; it turned out that some VPN applications, VPN clients, don't really work in that setting. And depending again on your user population: do you want to support this, or do you need to support this?

So, conclusions:

Thinking about v6 only with NAT 64 makes sense and we have a number of conversations with customers about this.

Testing is always good as testing provides transparency on what actually works or not.

Overall, as for the question that we were asking, I would say yes, 2019, v6 only in Wi‑Fi, you can do this and you shouldn't expect too many problems. So, I would say we are there now. And that's it from my side. Thank you.

(Applause)

AUDIENCE SPEAKER: Jen Linkova. Thank you very much. I love to see positive presentations about IPv6 only. Now I'm not feeling alone. I am just curious, because I have seen some numbers of application tested and so on, what's the number of bugs you reported to vendors?

ENNO REY: To be honest, we are not through that phase yet. But what I can say is, as I mentioned, at the event we had a v6‑only network and a certain system incentivising people to spot when something doesn't work ‑‑ they got extra points in a specific system when they provided the feedback to the vendors or, in the case of Open Source stuff, to the community ‑‑ and I think in that week a lot of things happened. So, there has been some feedback already, but not from our test cases.

AUDIENCE SPEAKER: Because from my experience, it does help to report it and publish the bug, link to the bug so people can add their comments because, as far as I know, nobody fixes anything if users are not complaining.

And a quick second question. What was the reason to even test Windows 7, when there is no RDNSS support, as far as I know?

ENNO REY: That's somewhat related to the programme I mentioned. They expect they will have a user population with all types of systems and they expect some non‑Windows 10 clients there.

CHAIR: Just a quick reminder to be brief, we want to release all people on time.

AUDIENCE SPEAKER: Ondrej Caletka, CESNET. Thank you for your work. I would just like to stress here publicly that there is a NAT64 network at this conference and people should use it, because on Friday, in the technical report, we always see the percentage of usage, and it's like a one‑digit percentage. I think that if every mobile device works on NAT64, people should now disconnect from the regular RIPE network, connect to the NAT64 one, and use it.
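Whether a client is actually on a NAT64 network can be checked with the RFC 7050 heuristic: ask the resolver for AAAA records of ipv4only.arpa, which only has well‑known A records (192.0.0.170/171), so any AAAA answer must have been synthesized by a DNS64. A minimal Python sketch ‑‑ function names are illustrative, and it assumes the common /96 prefix layout of RFC 6052:

```python
import ipaddress
import socket

# The only legitimate A records of ipv4only.arpa (RFC 7050)
IPV4ONLY = {ipaddress.IPv4Address("192.0.0.170"),
            ipaddress.IPv4Address("192.0.0.171")}

def prefix_from_synthesized(v6_str):
    """Given an AAAA answer for ipv4only.arpa, recover the NAT64 /96 prefix.

    Assumes the usual /96 mapping with the IPv4 address embedded in the
    low 32 bits (RFC 6052). Returns None if the address does not embed
    one of the ipv4only.arpa well-known addresses."""
    v6 = ipaddress.IPv6Address(v6_str)
    embedded = ipaddress.IPv4Address(int(v6) & 0xFFFFFFFF)
    if embedded not in IPV4ONLY:
        return None
    # Zero the low 32 bits to obtain the /96 translation prefix.
    return ipaddress.IPv6Network((int(v6) >> 32 << 32, 96))

def detect_nat64():
    """Query the local resolver for AAAA records of ipv4only.arpa.

    On a NAT64/DNS64 network the answers are synthesized and reveal
    the prefix; elsewhere this returns None."""
    try:
        answers = socket.getaddrinfo("ipv4only.arpa", None, socket.AF_INET6)
    except socket.gaierror:
        return None
    for info in answers:
        pfx = prefix_from_synthesized(info[4][0])
        if pfx:
            return pfx
    return None
```

On a NAT64 SSID like the one described here, `detect_nat64()` would return the network's translation prefix; on a plain dual‑stack network it returns None.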

ENNO REY: You should. Thank you.

AUDIENCE SPEAKER: I have a comment regarding the presentation. I am for IPv6 only, but, like it or not, despite what you tested, there are a lot of applications and other devices, not just Apple, that are not IPv6 capable ‑‑ or the game that you mentioned. I think the right strategy is to have IPv6 only in the WAN link, but we should keep dual stack in the LAN, and this is for both residential and enterprise customers. There is a very simple way to do that, which is still using NAT64 plus a CLAT in the CPE ‑‑ there is a recent RFC from two weeks ago, RFC 8585, which is actually telling vendors: please include this support in the CPEs. I think that's the right way to go, because we cannot ask people to throw away their old IPv4 devices like cameras, or applications like accounting applications that, unfortunately, will not get IPv6 any time soon. So, I think that's the way to go, and maybe you want to try that situation in parallel ‑‑ not just IPv6 only, but also providing a CLAT in the network where you are testing ‑‑ to check the differences.

ENNO REY: Yes. I can confirm that we can expect ‑‑

CHAIR: I think we can do one more question ‑‑

ENNO REY: The non‑mobile space will be different.

AUDIENCE SPEAKER: Nicolai Leymann, Deutsche Telekom. The first question is about the mobile devices: were they on the mobile network or on wireless?

ENNO REY: On wireless.

AUDIENCE SPEAKER: Okay, second question: the mobile devices, were they using DNS64 only, or did you have any, like, local mechanism to directly translate IPv4 addresses into v6 addresses?

ENNO REY: I mean, I can ‑‑ okay, on one side there is what we tested exactly. To the best of my knowledge, Apple doesn't ‑‑ okay, they have something they call CLAT; not many people, or at least I, don't know exactly how that works. The Android version which we tested definitely has 464XLAT, so probably yes, there was something. To be fully honest, I'm not sure which role they played for the types of applications that we tested. My assumption, but this is speculation, would be that pretty much all of this stuff works even without a local mechanism. But this is speculation.
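The mechanism both answers revolve around ‑‑ DNS64 synthesizing AAAA records, and a CLAT (the client side of 464XLAT) translating locally generated IPv4 traffic ‑‑ boils down to the same RFC 6052 address mapping: embedding an IPv4 address in the low 32 bits of a /96 IPv6 prefix. A small illustrative sketch, using the well‑known prefix 64:ff9b::/96 (assumed here; operators may deploy a network‑specific prefix instead):

```python
import ipaddress

# RFC 6052 well-known NAT64 prefix; networks may use their own /96.
WKP = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize(ipv4_str, prefix=WKP):
    """Map an IPv4 address into the /96 NAT64 prefix, as a DNS64 does
    when it fabricates an AAAA record for an IPv4-only destination."""
    v4 = ipaddress.IPv4Address(ipv4_str)
    return ipaddress.IPv6Address(int(prefix.network_address) | int(v4))

def extract(ipv6_str):
    """Recover the embedded IPv4 address from a synthesized IPv6 one,
    as the translator does on the return path."""
    v6 = ipaddress.IPv6Address(ipv6_str)
    return ipaddress.IPv4Address(int(v6) & 0xFFFFFFFF)
```

For example, an IPv4‑only destination 192.0.2.1 becomes 64:ff9b::c000:201; the NAT64 strips the prefix back off when forwarding to the v4 Internet.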

CHAIR: Maybe take the rest off line. Thank you.

(Applause)

So, stay tuned for some housekeeping information. Now we have a break for half an hour. Then, in this room, there will be the BCOP Task Force; in the side room, there will be the BoF on RIPE's future ‑‑ our future as a community ‑‑ and, for all the newcomers, you are invited to join us downstairs at the newcomers' welcome drinks, where you can interact with PC members ‑‑ the Programme Committee ‑‑ and members of the board and ask questions. I also remind you to rate the talks, and that until tomorrow at 3:30 you are able to nominate yourself, or a friend you ideally asked before, for the two open slots on the Programme Committee. Thank you.


LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC
DUBLIN, IRELAND.