Archives

Connect Working Group
RIPE 79
16 October 2019
11 a.m.

REMCO VAN MOOK: All right, if you can find your seats please. Good morning everyone. Welcome to your second most favourite Working Group session of the week, the Connect Working Group. My name is Remco; I am going to be chairing this event together with my fantastic co‑chairs, Will and Florence. Let's see what else we have. Alex in front here is going to be our scribe, so be merciful to him, and Gergana is going to be monitoring the chat.

Housekeeping: Rate the talks. I'm told I need to mention that. Also, this room has six emergency exits, two at the front and four in the back. And this may not be your most favourite Working Group, but we have the best slides, as you can see.

So, on the agenda ‑‑ we have the best slides, as I was saying. I am marvelling at what my dear colleagues have done to me here. There is an opening. That's this. There is a scribe appointment. We have done that. There is an agenda, that's this, I guess. Let's see what else did we have to do in terms of housekeeping?

Yes, we actually published the minutes from last time this time, and I trust you have all read them, so any comments on the minutes from the last session? Going once, twice... no, so I would like to thank the RIPE NCC scribes for giving us these lovely carefully drafted minutes.

CHAIR: My part is going to be simple. I will just ask you to stretch a bit and we will do a show of hands, so everyone knows who is part of what in the community.

So I would like all the content providers to raise their hands so we see them. Okay. So now you can spot them.

Any IXPs? Any ISPs? And so any RACI, academia people? Okay. Thank you.
And the newcomers, just for the small ‑‑ okay, still a good amount of people. And who are the old farts? Okay, thank you.

If you want to participate anonymously you can also use the chat on the web page of the meeting. And after that, I think we can introduce our first speaker, which is Marcin, and he is going to talk to us about dismantling operational practices of BGP blackholing at IXPs.

MARCIN NAWROCKI: Thank you for the introduction. Welcome to my very first RIPE talk, about BGP blackholing at IXPs; this is joint work with these people. DDoS attacks remain a major threat to the Internet infrastructure. A lot of people in this room also suffer from them, so we should speak about solutions, and BGP blackholing is one of those solutions. The basic idea is quite simple: discard all traffic early in the network. So, blackholing is believed to be an effective measure to mitigate DDoS, and our research project actually questions both: is it really an effective measure, and is it only used to mitigate DDoS attacks? My talk is structured as follows: we have three sections. We will have a quick recap of blackholing. Then we will talk about the current deployment status and how effective blackholing is at IXPs, and in the end we will have a quick outlook on future enhancements.

So, how does BGP blackholing work?

Some IXPs offer a route server to multiplex BGP signals. Now think of a web server serving some important web content, and what happens during an attack? Well, the victim receives large volumes of DDoS traffic, and still a little bit of legitimate traffic. If the victim AS realises that it is under attack, it blackholes the IP address of the server by announcing the prefix covering the web server. Then the route server propagates the signal to all the peers at the IXP, and after acceptance the traffic is dropped, or forwarded to the blackhole. Not only the attack traffic is dropped but also the legitimate traffic, which ends up as collateral damage.
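As a concrete illustration (my sketch, not from the talk): a blackholing announcement is typically a very specific prefix tagged with the well-known BLACKHOLE community 65535:666 from RFC 7999, often pointing at an IXP-specific blackhole next hop. The ExaBGP-style command format and the next-hop address below are assumptions.

```python
# Sketch, not from the talk: a blackholing announcement as an
# ExaBGP-style command. RFC 7999 defines the well-known BLACKHOLE
# community 65535:666; the blackhole next-hop address below is an
# assumption (each IXP publishes its own).

def blackhole_announcement(victim_ip: str, next_hop: str = "192.0.2.66") -> str:
    """Build an announcement for the /32 covering the victim host."""
    prefix = f"{victim_ip}/32"
    return f"announce route {prefix} next-hop {next_hop} community [65535:666]"

print(blackhole_announcement("198.51.100.7"))
# announce route 198.51.100.7/32 next-hop 192.0.2.66 community [65535:666]
```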

But that's the simple case, that's the theory. In the real world, BGP policies apply. What I mean by that is: peer 2 might have a policy which leads to not accepting the blackholing announcement. So, this ends up in the following situation: peer 2 still forwards all the attack traffic to the web server, while the other peers at the IXP really drop the traffic. That's the situation we wanted to measure, to see how much traffic is really dropped.

So, how well deployed is BGP blackholing in the real world? A quick overview of our measurement approach. We did not want to study single cases; we wanted to present a holistic view of what happens during one hundred days at our IXP. We took all the related data, no exceptions. We looked at BGP data, so all the BGP signals from the route servers, and our second data source is the flow data, so sampled packets from the public switch fabric: all packets that are related to prefixes that have been blackholed at least once.

As we have two data sources, we verified that the time stamps in each of the data sources are actually in sync, so we have no artifacts due to time differences.

So, that was our first research question: do all IXP members accept blackholing announcements? Again, this is necessary for an effective DDoS mitigation.

To find out, we grouped the blackholes by prefix length, and for each blackhole in a group we then calculated how much of the traffic was really dropped and plotted the CDF. We started with the /24, and we actually observed quite a good drop rate. Perfect mitigation in this plot is a vertical line at 100%. So, /24 is quite good.

However, this is a really large prefix to be blackholed, which might blackhole hosts that are not under attack. So, we also did this for the /32 prefix length, which drops packets precisely for one host. And for this, the drop rates are highly variable; we observed a mean drop rate around 50%. So, the result here is that successful blackholing highly depends on the prefix length that is actually announced.

More interestingly, 99 percent of the traffic that should have been blackholed is actually already covered by the /32 prefixes. So although this is the most important prefix length, it is in conflict with common acceptance policies and not really effective.
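The drop-rate analysis described above might be sketched like this (my reconstruction, not the authors' code; the input format is made up): for each blackhole we compare the traffic seen before and during the blackhole, and group the resulting drop rates by prefix length, which is what gets plotted as one CDF per group.

```python
# Sketch of the per-prefix-length drop-rate analysis (my
# reconstruction, not the authors' code; input format is made up).

from collections import defaultdict

def drop_rate(bytes_before: float, bytes_during: float) -> float:
    """Fraction of traffic that stopped arriving once the blackhole was active."""
    if bytes_before == 0:
        return 0.0
    return max(0.0, 1.0 - bytes_during / bytes_before)

def rates_by_prefix_length(blackholes):
    """Group drop rates by prefix length, e.g. to plot one CDF per /24, /32, ..."""
    groups = defaultdict(list)
    for prefix_len, before, during in blackholes:
        groups[prefix_len].append(drop_rate(before, during))
    return dict(groups)

sample = [(32, 100.0, 50.0), (32, 100.0, 10.0), (24, 200.0, 2.0)]
print({k: [round(r, 2) for r in v] for k, v in rates_by_prefix_length(sample).items()})
# {32: [0.5, 0.9], 24: [0.99]}
```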

Our second research question: how fast do IXP members react to DDoS events? If they react very slowly, blackholing loses a lot of its effect, as a lot of damage has already been done.

So, to answer this question, we had to overcome one measurement challenge. Imagine an attack event, which starts when the attack traffic increases and stops when it decreases. If we check what happens on the control plane (the blackholing announcement here is marked with a black circle and the withdrawal with a white one), instead of a single blackhole being announced, we actually observed multiple blackholes. That's because blackholes are withdrawn to check if the attack is still ongoing, and if it is, the blackhole is announced again.

So, all those blackholes are not independent. They are all part of the same attack event and that's why we must not treat them independently. We have to cluster them somehow.

And that's why we clustered the blackholes into blackhole events, where a blackhole event is basically a sequence of blackholes for the same prefix during a single attack event.

It starts with the announcement of the first blackhole and it ends with the last withdrawal of the last blackhole.
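The clustering of individual blackholes into one blackhole event can be sketched as follows (my reconstruction; the 300-second merge gap is an assumed parameter, not a value from the study): announce/withdraw intervals for the same prefix that lie close together are merged into one event.

```python
# Sketch of clustering individual blackhole announcements into
# "blackhole events" (my reconstruction; the merge gap is assumed).

def cluster_events(intervals, gap=300):
    """intervals: (announce_ts, withdraw_ts) pairs for ONE prefix.
    Merge intervals separated by less than `gap` seconds into one event."""
    events = []
    for start, end in sorted(intervals):
        if events and start - events[-1][1] < gap:
            # Close enough to the previous blackhole: same attack event.
            events[-1] = (events[-1][0], max(events[-1][1], end))
        else:
            events.append((start, end))
    return events

# Three re-announcements of the same prefix within one attack -> one event.
print(cluster_events([(0, 100), (150, 400), (420, 600), (5000, 5100)]))
# [(0, 600), (5000, 5100)]
```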

And if we know what the blackhole event is, we can ask what actually triggered the blackhole event.

To answer this question, we looked at the 72 hours before each blackhole event. We used a simple sliding window mechanism which monitors multiple features for sudden bursts and peaks, and for each of these features we calculated an exponentially weighted moving average and checked for extreme deviations.
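A minimal version of such an EWMA-based burst detector might look like this (my sketch; the smoothing factor and deviation threshold are made-up parameters, not the values used in the study):

```python
# Sketch of EWMA-based burst detection (my reconstruction; alpha and
# threshold are made-up parameters).

def ewma_anomalies(values, alpha=0.3, threshold=3.0):
    """Return indices where a value exceeds `threshold` times the EWMA so far."""
    anomalies = []
    avg = values[0]
    for i, v in enumerate(values[1:], start=1):
        if avg > 0 and v > threshold * avg:
            anomalies.append(i)
        avg = alpha * v + (1 - alpha) * avg   # update the moving average
    return anomalies

# Flat traffic with one sudden burst at index 5:
print(ewma_anomalies([10, 11, 10, 12, 11, 200, 11]))
# [5]
```

In the study one such detector would run per feature (packet rate, flow counts and so on), and the heat map then counts how many features flag an anomaly at each point in time.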

In total we monitored five features which we expect to change during various types of DDoS attacks, such as amplification attacks, SYN flood attacks and so on.

This heat map shows all anomalies that we found before a corresponding blackhole event. The X axis shows the time before the blackholing event. The Y axis shows how many features were actually affected, and the colour map basically just counts the anomalies. Overall, nothing happens up to an hour before blackholing events. However, the number of anomalies increases rapidly in the last ten minutes before the blackholing event.

So, what do we learn from that? Well, mitigation is triggered automatically, and since it's automatic, it's probably effective with respect to time.

Let's talk about fine‑grained filtering. It turns out that a lot of DDoS victims are actually clients, such as professional gamers, Twitch streamers and so on. This means that white‑listing of regular patterns, such as HTTP traffic to web servers, is usually not an option. However, fine‑grained blacklisting of traffic is an option, because most of the attacks are quite simple: they only utilise one, two or three attack vectors, such as NTP or DNS.
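Fine-grained blacklisting of a few attack vectors could be sketched like this (my illustration; the port-to-vector mapping uses the standard amplification-protocol ports, but the packet representation is made up): instead of dropping ALL traffic to the victim, drop only UDP traffic from the reflector ports in use.

```python
# Sketch of fine-grained blacklisting (my illustration; the packet
# representation is made up). Only the attack vectors in use are
# dropped, so legitimate client traffic survives.

AMPLIFICATION_PORTS = {123: "NTP", 53: "DNS", 19: "CHARGEN"}

def should_drop(victim_ip, pkt):
    """pkt: dict with dst_ip, proto, src_port. Drop only matching vectors."""
    return (pkt["dst_ip"] == victim_ip
            and pkt["proto"] == "udp"
            and pkt["src_port"] in AMPLIFICATION_PORTS)

legit = {"dst_ip": "198.51.100.7", "proto": "tcp", "src_port": 443}
attack = {"dst_ip": "198.51.100.7", "proto": "udp", "src_port": 123}
print(should_drop("198.51.100.7", legit), should_drop("198.51.100.7", attack))
# False True
```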

How we got all those results we will present next week at IMC, but you can also read it in our paper.

I would like to conclude my talk with operational advice.

So, first of all, check your BGP policies. Accept more specific prefixes in the case of blackholing.

Second, check your routing tables for blackholing zombies. We call blackholes for which the data plane activities do not justify the blackhole "blackholing zombies". And contact your peers to understand what their use case for blackholing actually is, because our data suggests that there are more use cases for blackholing than only DDoS mitigation.
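A simple zombie check along these lines (my sketch; the traffic-rate threshold is an arbitrary assumption) could flag candidates for review: a blackhole whose prefix attracts almost no data-plane traffic no longer looks justified by an attack.

```python
# Sketch of a "blackholing zombie" check (my illustration; the
# rate threshold is an arbitrary assumption).

def is_zombie(active_hours: float, bytes_seen: int, min_rate_bps: float = 1000.0) -> bool:
    """Flag blackholes whose observed traffic rate is negligible."""
    if active_hours <= 0:
        return False
    rate_bps = bytes_seen * 8 / (active_hours * 3600)
    return rate_bps < min_rate_bps

# A /32 blackholed for a week that attracted ~1 MB total is a candidate:
print(is_zombie(168, 1_000_000))
# True
```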

And the last thing: consider fine‑grained filtering. Most of the attacks that we observed are still not very complex, and the blacklisting of a few attack vectors can be very effective.

Thank you.

REMCO VAN MOOK: All right. Any questions for Marcin?

AUDIENCE SPEAKER: Wolfgang Tremmel. It's rather a question to the audience to the second point. What we noticed is quite a large number of these zombies, so I would like to have a show of hands, is anybody using blackholing to block traffic to unused IP space?

REMCO VAN MOOK: That's a very good question. Is anyone using realtime blackholing features in BGP to filter unused address space? It's very quiet, that might not mean anything by the way.

AUDIENCE SPEAKER: Also, can you put back the slide with the operational advice please? On accepting these /32s, I'd really like to emphasise to the audience: you do not need to accept just any /32; the blackholing prefixes have a specific next hop at our exchanges. At other exchanges the criteria might be different, so please talk to your exchange operator about that.

MARCIN NAWROCKI: There is also a specific community for that.

REMCO VAN MOOK: It has a very good number. All right. Thank you very much Marcin.

(Applause)

Next up, if I can decipher my slides, is Vladislav.

VLADISLAV BIDIKOV: Hello everyone. I am here to present an interesting mix of how you can combine an IXP and a commercial CDN in order to do something complicated like elections in the country. Macedonia is very famous for several things, and one of those things is having an election for some government or municipality roughly every two years. And usually that means that we have this body which has to use special software to do the whole process, and in the end, we have our results live every five minutes.

Usually, as you would have expected, we are not very well connected between operators. Also, we are not very well connected to the world, so modern DDoS attacks are a serious problem, and usually after 7 p.m., when the polling booths close, this site which should be available is generally wiped out. This year, because of a mix of things, the faculty of computer science was asked to help this institution, first to virtualise their servers in order to get them onto the new hardware, and then to try to find a way to help them survive the presidential elections.

Luckily, as probably some of you know, for years I have been trying to establish an Internet Exchange. That was finally established with the help of some great people last year, and we had some peers there, so we had this idea that we could finally use this momentum and show the providers how important an Internet Exchange can be.

The idea was that we would use the IXP, or the exchange, to allow unlimited bandwidth to the small operators feeding this election system in the back end; seeing as we have good connectivity there, we should have no problem. And while the polling results are coming in through this isolated environment, we would use a CDN, in this case CloudFlare, to show the public site to everyone and in that way try to survive the attacks.

In the process, as I mentioned, we also managed to virtualise the whole infrastructure, so it was quite an easy task to spin up new versions of this specific system for the elections.

Of course, we had to contact CloudFlare. It was interesting, since they already have a project which supports initiatives like this, but it was only for the States. Luckily, with some Twitter pings and some wonderful communication late at night because of US time, we managed to get them on board within 24 hours, let's say three days before the elections. So that was on Thursday, and the elections were on Sunday.

We started to experiment right away and we saw a terrible problem, and actually the problem was not with the IXP idea or with CloudFlare, but with the application itself. We saw that we had to work with the vendor producing this application, which is quite famous in the region, to modernise it. Why? Because the application was monolithic: the results, which were generated every five minutes, were mostly changing all the time, and it was based on a concept of one application server for everything and one database server for everything. So, our idea of feeding it via the IXP on one side and showing the results via a CDN was not going to work, and also, the way the results were built, as I showed, was uncacheable, because everything changed every five minutes.

We took into consideration how CloudFlare caches things. We approached the guys behind this application and showed them what we expect and how we plan to do it. They did a new version, and they decided to split all the data, which before that was around 30 megabytes of HTML files, into very small JSON files. At the end I can give you the URL; it allows you to see each municipality, and that allowed splitting the whole application into 4,000 JSON files. JSON is very well cached; secondly, we can rebuild only the JSON files for which we have changes, so the update process is easier. So even five‑minute updates were not a problem, and we could go even with one‑minute updates if needed, and at the end it allows us to cache this thing wonderfully.
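The rebuild-only-what-changed idea can be sketched like this (my reconstruction; the file layout and hashing scheme are assumptions, not the vendor's actual implementation): hash each municipality's data and rewrite only the JSON files whose content actually changed, so the CDN cache stays mostly warm between updates.

```python
# Sketch of per-municipality JSON regeneration (my reconstruction;
# file names and hashing scheme are assumptions).

import json, hashlib

def changed_files(old_digests, new_results):
    """Return only the municipality files that need regenerating."""
    to_write = {}
    for municipality, data in new_results.items():
        digest = hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()
        if old_digests.get(municipality) != digest:
            to_write[f"{municipality}.json"] = data
    return to_write

old = {"skopje": hashlib.sha256(json.dumps({"votes": 100}, sort_keys=True).encode()).hexdigest()}
new = {"skopje": {"votes": 100}, "bitola": {"votes": 55}}
print(sorted(changed_files(old, new)))
# ['bitola.json']
```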

We did some tests on Friday. We did a dry run on Saturday, and at 7 a.m. on Sunday we waited to see what would happen.

These are the summary results from the CloudFlare dashboards. The result, as you can see, is very impressive, especially since 83 percent was cached content, the JSON files. And you can see on the pie chart that the big blue thing is actually most of the JSON files.

Of course, when you have a good thing, you see that most of the things were JSONs and 304 responses. The 304 responses are actually a trick we use to answer for files which have not changed. In that way we don't make the browser go over and fetch the cached JSONs again; you just use the local cache in your browser, which is a new trick the team learned.

As you can see from this, the interest was mainly in‑country, as expected.

This is what happened at 7 p.m. So this is the real traffic. At the same time we started to get constant e‑mails from the CloudFlare system, and logs were filling up, like 10,000 a minute, with DDoS HTTP attacks all the time, but the impressive part is that the system was running perfectly. And you can see there was high interest, data was coming in, and the system was stable.

This is the total response of how things were running, and this is a zoom‑in on how the interest spiked right after we closed the election booths at 7 p.m. In the middle of the night, around let's say 2 a.m., we went to sleep; everything was fine, 90% of the results were in, the site was online, and everything worked perfectly. The interesting part was that this was the only year when the official results page was used by all the media, all the portals etc., so there was no need for any other information systems to be built, because the system was perfectly stable.

In the end, the results were remarkable, and they were noted by the election committee. For the first time in twelve election cycles, the results system was online 100%. Data was going over the IXP in an instant without any problem, because most of the operators that were used for this were already at the exchange. All the DDoS attacks, which we know there were because we got alarms, were successfully stopped. The remarkable thing was that, because of the JSON files, the amount of pipe we needed to the Internet for the results to be fed into CloudFlare was very small, around 100 megabits, instead of the 2 or 3 gigabits we needed when the application was monolithic. And this really helped build trust in the election system and explain that the system works and that you can combine technologies to really run it, and the distrust of the state election committee was irrelevant at the end.

This is our future work and then we can have some questions.

So, definitely, after this, we need to get more ISPs to come to the IXP, because this could be useful for other systems, especially since we host the national health and educational systems. We need to get more CDNs into the IXP; some of you are here today, so maybe we should talk about it. And of course, we found out that root DNS servers are a problem in the country, so we are also working with some people to get root server instances ready. I also want to thank the few people and organisations willing to support us in the past; that goes to ISOC and Flexoptix, and of course the wonderful IXP Manager software, which this weekend I managed to upgrade to the latest 5.3.0 version.

So, any questions or comments? I am here to answer.

AUDIENCE SPEAKER: Friso, Rabobank. How was this received within Macedonia?

SPEAKER: Well, this was very well received, and now we are the guys who know how to fight DDoS attacks, so the faculty was approached about several other systems to apply this knowledge and improve the connectivity.

AUDIENCE SPEAKER: Michael Richardson. So, if a Macedonian went to their web browser and pulled up the results page, and went out through their ISP, did that go into your IXP and then to CloudFlare?

VLADISLAV BIDIKOV: It goes directly to CloudFlare.

AUDIENCE SPEAKER: So they left the country, went to CloudFlare, got the results.

VLADISLAV BIDIKOV: It had to go to Sofia or Belgrade; that is the closest CloudFlare POP we have.

AUDIENCE SPEAKER: So you would have benefited from a CloudFlare POP locally as well. I was just trying to understand that. Thank you.

CHAIR: All right. Any other question? Then I guess that's it. Thank you.

(Applause)

So, please rate the presentation. It's important for our RACI students to have feedback, so please provide feedback and rate the presentations.

Now let's jump to the next presentation, which is a presentation from Thomas King, or rather Sebastian, who is replacing him right now, to talk about interconnection.

SPEAKER: I am actually the head of software, so whatever we come up with, we are the guys who have to implement it.

So, especially when you have this kind of an API, it of course changes a lot for the companies that are implementing it, right.

So Thomas says hi to all of you. But I'm not speaking here simply on behalf of DE‑CIX; there were some other people involved: three IXPs, AMS‑IX, DE‑CIX and LINX, who contributed to this IX‑API.

So, the first thing is, of course: what's our motivation for actually doing this? Others have created an API for an Internet Exchange before. However, the participants see value in actually contributing to a standard. Also, IXPs have become more complex, or interconnection as a whole has become more complex, and of course everything should become faster, and an API can help us with that.

And of course, we're not the same any more; the IXPs have changed quite a lot. IXPs have been going into different areas, into different regions of the world. They have started to offer different services, like Cloud connectivity and remote peering; they are also connecting IXPs with each other, and connecting VLANs between participants, and of course paid peering is something that's being addressed by IXPs. And we want to make all of this a little bit less error prone and faster, and of course that should happen 24/7 and not only at the times when we open our doors.

Especially compared with the manual provisioning we have been doing before: when we deliver interconnection services immediately, that will change the whole thing completely. When you see people using an API, it's a different thing than just putting behind an API what you have been doing manually before. So using an API does not necessarily mean everything is faster, but it's a new thing, because you can change the capacity over time, you can make configurations over time, and the user experience is completely in the implementers' hands. And by doing that, we also increase the quality of the whole ecosystem, because a standard means that we build more similar IXPs, right? And that of course is for the benefit of everybody.

I don't want to go into more details at this point. I think more important is the demo that we're having.

First, I'd like to explain to you again what an IXP is. This is an overly simplistic view of an IXP. Basically you have this Layer 2 network. It's reachable from multiple data centres. You have a route server or two. And if you are in one of those data centres, you have your router and you are connected to the IXP network; you can also come in via a reseller. That's a simplistic view of the world, and that's what we put into an API.

Of course, we also have these remote Layer 2 networks, and we can connect to them as well. And here you can see those resellers connecting inside the data centre and delivering the service to you, wherever you are.

So, let's see what happens when we order such a service. First of all, we need to produce some operational contacts. Who are we calling if anything goes wrong, or who are we calling to set up this stuff? We need things like an implementation contact, a billing contact, legal information; all this stuff needs to be created first.

And then we're creating, or registering, MAC addresses. We register the MAC addresses of the customer routers because the IXPs use them to create Layer 2 filters, and then, in the case of a reseller, we need to create a VLAN, because that's how the customers are multiplexed, right.

Then, as a next step, we need to create the route server configuration; for that purpose we need an AS number, an IP address, AS‑SETs and so on.

And this is exactly what we put into the API calls. As you can see, it looks like a REST API, and that's exactly what it is. We are posting to contacts, posting to MAC addresses, posting to network service configs and posting to network feature configs, where the VLAN configuration is actually the post to the network service config.

So, could you please switch over to the video? In order to work with such an API, we really like using this Postman tool, which is like a front end for us to directly interact with the API without having to actually program anything on the client side; we can just use the API right away. It's a bit more convenient than using curl, right.

What you see here are our environment variables that we can use for the calls we're making. You can see we have an endpoint defined as host, and an API key and a secret. The first thing we have to do is of course authenticate and create a JWT token that we use for subsequent requests. So this has already happened; you can see we have posted to the auth token endpoint, and this token has been applied to the environment. All subsequent requests will carry this JWT token.
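The authentication step might look like this from a client's perspective (my sketch using the Python standard library; the endpoint path and JSON field names are assumptions, not verified against the IX-API specification):

```python
# Sketch of the auth step (my reconstruction; endpoint path and
# field names are assumptions, not the verified IX-API schema).

import json
import urllib.request

def bearer_headers(token: str) -> dict:
    """Headers to attach to every subsequent API request."""
    return {"Authorization": f"Bearer {token}"}

def authenticate(host: str, api_key: str, api_secret: str) -> dict:
    """POST key+secret to the auth endpoint and return auth headers."""
    body = json.dumps({"api_key": api_key, "api_secret": api_secret}).encode()
    req = urllib.request.Request(f"{host}/api/v1/auth/token", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        token = json.load(resp)["access_token"]
    return bearer_headers(token)

print(bearer_headers("example-jwt"))
# {'Authorization': 'Bearer example-jwt'}
```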

The next thing is we are listing the customers. The reseller we have pulled up can only see themselves. They do not have a parent; you can see here it's null, and they have an ID. Having no parent means there is nobody reselling services to them.

And this reseller has the customer ID 404, and as you see, it was also added to the environment. Now this reseller is creating a new customer object, so they need to provide the parent relation to themselves, and here as an example we have this "Any Name Corporation". They got a new ID, and the parent of course refers to the reseller, and then we get this tree structure. You see the ID is applied to the environment. Now we take a look at which operational contacts are there yet: none, empty. So next we're creating a NOC contact; we need information like e‑mail address and phone number. We're creating an implementation contact; that's also a name, phone number, e‑mail. A peering contact; that's information that is shared between all participants, so users of IXPs probably already know this. A legal contact; that's the information that we need in order to do business at all. And of course we need to have a billing contact, because we want to send the invoice somewhere.

And now when we take a look at which operational contacts are there, we can see all of them have been created. All of them have IDs, and this address book shows us everything is here.

Now we're taking a look at the demarcation points. Currently the demarcs that we're seeing in there are pre‑provisioned, so the physical part of connecting with the exchange is something that is being done out of band. This is why this endpoint is read only, but yeah, as we go forward, we will also add creation of physical infrastructure.

Here we're already listing all the facilities, and the next step is creating the MAC addresses. You can see we have a MAC address in there; when we click on send, we see that it is registered with an ID, this MAC ID. And the network services that we're now listing show us the offers that the IXPs actually have. So here we see the standard peering services in the different locations at DE‑CIX, and this one means it's the Frankfurt peering LAN. So we have that here in the environment variable called network service.

And here is the important part, because now we're creating the network service config, which means we're creating the VLAN config, and we need to provide all the things that we have created before: the contacts, ASN, MACs, the capacity; we can choose a VLAN, and the managing and the consuming customers. Here the managing customer is the reseller; the consuming customer is the one we're creating this connection on behalf of.

So I click on send, and we see all of this being created. This is real provisioning happening in the background, so now the switches are configured. You see we already have an ID which refers to an IP address, and if we query the list of IP addresses, we can find this ID in that list. I could also have filtered by that ID, but I did it this way. You see here the IP address that we just created by sending the post to the API.
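Assembling the network service config POST body could be sketched like this (my reconstruction; the field names follow the narration but are assumptions, not the actual IX-API schema). Everything created earlier, contacts, MACs, ASN, is referenced by ID in a single POST:

```python
# Sketch of the network-service-config payload (my reconstruction;
# field names are assumptions, not the verified IX-API schema).

def network_service_config_payload(network_service, managing_customer,
                                   consuming_customer, asn, mac_ids,
                                   contact_ids, capacity_mbps, vlan=None):
    """Assemble the POST body for a VLAN / peering LAN access config."""
    payload = {
        "network_service": network_service,        # e.g. the Frankfurt peering LAN
        "managing_customer": managing_customer,    # the reseller
        "consuming_customer": consuming_customer,  # ordered on behalf of
        "asn": asn,
        "macs": mac_ids,
        "contacts": contact_ids,
        "capacity": capacity_mbps,
    }
    if vlan is not None:
        payload["outer_vlan"] = vlan
    return payload

p = network_service_config_payload("NS1", "404", "405", 64500,
                                   ["mac1"], ["noc1"], 1000)
print(p["managing_customer"], p["consuming_customer"], "outer_vlan" in p)
# 404 405 False
```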

And the network features are there for further configuration beyond the VLAN, for example creating route server configs. Listing the network features lists, as a first step, all the route servers, but you can also think of other features which an IXP has. And then of course we need to create a config for accessing that route server; that's a network feature config.

So, again, here we need to provide an implementation contact, an ASN, an AS‑SET, a session mode, a session type, and whether it's a transparent AS when the routes are being distributed.

Clicking on send creates this configuration, and now we're ready to set up our router to peer with the route server. After the confirmation, we click again on listing all the network feature configs, and here you can see the configuration again, so I could use this to directly generate the router config from it.

Now we're looking at the back end of DE‑CIX, and we are proving to you that it actually happened there. We're searching right now; it could be a bit quicker, but the "Any Name Corporation" you have just seen being posted to the API has been created, so we find that "Any Name Corporation". We can see that there is a service, so we click here on FRA. The internal name for a network service config here at DE‑CIX is "peering access service", and you can see that this is the information that we already transmitted over the API.

And here we have the route server configs.

So, we could switch back to the slides.

Of course, we just started with the easiest thing: creating peering services on top of the IXP. But going forward, we need to add Cloud connectivity and private VLANs. We are thinking of doing that over the next six months, most probably in parallel.

And of course we will take a look at these things: statistics, monitoring and the physical world.

And here is your homework. I think many of you already raised your hands earlier, and your homework is this: implement it, please!

And let me quickly also explain to you how this board interfaces with everybody else.

So, either via the members, who of course can contribute by giving proposals, or via the pilots: there are already resellers in the market who have implemented it in a beta programme, and they are giving us feedback, so we're using that feedback to create a better specification.

Thank you very much.

CHAIR: So do we have any question?

AUDIENCE SPEAKER: Yes, from an anonymous question from the chat. Where did you get your definition of an IXP from?

SPEAKER: Oh, so we are quite big IXPs and, let's say, we're quite confident that what we're doing is an IXP. We're sitting together, and of course it's not just one IXP saying this is how you make IXPs; it's actually the three biggest IXPs that are doing this. And yes, we're confident that what we're doing makes some sense. However, we're open to any proposal that comes in. We have a website, ix‑api.net, so you can go and download all the examples. You can get in contact with the board and with the group; we're happy to do so.

That's what you were asking.

AUDIENCE SPEAKER: Michael Richardson again. I see how that saves the IXPs a lot of effort and time and energy. I'm not quite clear how it helps the ISPs or other entities that are connecting; it seems like there is usually a lot of foot work to actually get the cabling and other stuff happening, and that's where most of the hassle seems to be. So I'm not actually sure that there is much incentive for them to do this at this point.

SPEAKER: Yeah, I understand what you're saying, and as you see, we rely heavily on operational contacts because we know that foot work happens. Having said that, we have the physical world on the road map as well. Of course we cannot automate the people that are actually doing the foot work; as long as we cannot have everything fully automatically plugged in by a robot inside the data centre, we will have this foot work. But I think for us it's a good first step to have this automated, and we have it on the road map to automate further. And I'm really, really happy to get the requirements you have as an ISP to incorporate into that.

AUDIENCE SPEAKER: Hi. Niels, Akamai. If you add information about your maintenance windows to your API, then I will never need to ask about my circuit ID again.

SPEAKER: That's cool. Yeah, please give us feedback; that's really important. You're right. And by the way, what's really important when you are creating an API is that you eat your own dog food, right, so you are also the implementer of your own API. That means we're creating our new portal based on the API, so whatever user experience we already have in there should also be covered by the API.

AUDIENCE SPEAKER: Another chat question. First a comment. It feels like you have created a mechanism to turn an IXP into an ISP through automation. At what point does remote participation dilute the value of local traffic peering locally?

SPEAKER: I would not agree with that observation. It's still an exchange LAN we're providing access to, so the function of the IXP exchange LAN will still be the same. We're not creating ISPs, we're not automating ISP services; it's still an IXP, although of course we're adding cloud connectivity, private VLANs and so on.

AUDIENCE SPEAKER: Michael again. So, if you can, by this work, get standardised APIs into ISPs generally, and also drive these for ordering stuff from data centres, so that there is one API instead of a load of different Excel sheets for ordering things, this is a great step. It's not just about driving automation. If you start by creating a web frontend, then there is a transition path where some people are going to be fully automated and fully use this API, while other people still have manual processes; and as you said, you want to use the same API internally, so you have a way of getting from where we are today to the brilliant future of everything standardised. I think this is great, and if you can work with the people who are involved in the other parts of delivering and setting up these connections and get them to standardise something, that would be super.

SPEAKER: Maybe as an example, cloud connectivity. We're talking with a cloud connectivity provider, and they already have an API for cloud connectivity, and we're open to learning from the experience they have had. We're probably not just taking their specification, but of course they have a lot of experience there, and we need to integrate that here as well.

AUDIENCE SPEAKER: So cross-connects are one thing where there is a lot going on, and it's done in a very different way depending on your organisation.

AUDIENCE SPEAKER: So, I have a question after all. If the goal of this exercise was to create an open API that everyone should implement, why did you choose to develop it in isolation between the three largest IXPs, instead of collaborating with the people who manage the most widely deployed IXP platform on the planet, which is currently IXP Manager, or with the Internet Exchanges that have had APIs from the start, which I happen to know one or two things about?

SPEAKER: Yeah, so, I understand this is clearly a trade-off that you have to make in order to get started, right. In order to get started, you start with a small set; you start small and want to get going. To be honest, I cannot answer this question in full, because I was not part of creating the governance model that exists, but I think I would be able to follow up with you on that.

REMCO VAN MOOK: I would suggest that trying to develop an open standard behind closed doors is a false start at best.

SPEAKER: It's not closed doors. If I may go back to one of the slides we had: we're asking you to contribute. If you go to the website, ix-api.net ‑‑

REMCO VAN MOOK: That's not what I meant. You started this in isolation. You started this without asking anyone else. And now you come and say help us along. That's what I meant by the false start. This is not a good start to get an open standard.

SPEAKER: I am sorry that you have this experience, but maybe you can help us make it better in the future.

REMCO VAN MOOK: You have my API documentation.

SPEAKER: Okay.

CHAIR: Okay. I think we have to leave it there. Thank you very much.

(Applause)

And so next up is Susan who is going to talk to us about IPv6 adoption for IXPs.

SUSAN FORNEY: I am from Hurricane Electric, and I am going to talk about IPv6 adoption over Internet Exchanges. I wanted to start out with a pro-IPv6 disclaimer, because Hurricane Electric has worked to advance IPv6 deployment generally. We got our first IPv6 allocation in 2001, which is three years after they started giving them out. Our network completed a full native IPv6 backbone conversion in 2007, which means we have been running a native backbone for twelve years, and we peer with more ASNs over IPv6 than any other network on the planet: as of today, 5,028 different ASNs.

So just to state my politics upfront.

I am going to look at how the top European exchanges have progressed in the adoption of IPv6. I started out with the assumptions that Internet Exchanges historically are where we grow the Internet, and that increasing IPv6 traffic across exchanges starts with increasing IPv6 peering. Those seem not terribly controversial.

So, behaviour at the European exchanges shapes the industry, and I want to throw out a few pieces of IPv6 and IX trivia to keep in mind. First, 27.3% of all existing networks globally advertise IPv6 prefixes. There are 622 Internet Exchanges worldwide by Hurricane Electric's count, and 241 of them are in Europe, which includes the six new Internet Exchanges that formed just this year. I would also like to point out that the first Internet Exchanges with documented beginnings, FICIX in Finland and NIX in Norway, are European exchanges; exchange history in Europe is fundamental.

Europe also hosts 11 of the top 20 exchanges in the world. No other region has more than three.

Peers are more likely to have both an IPv4 and IPv6 IP address on the European exchange than on other exchanges in other regions.

So, how I started thinking about this talk: I was setting up peering with a couple of people in Europe, and it happened to be on the same exchange. When both of them said they wanted to peer, they said just look up my information on PeeringDB. So I did, and they had IPv4 and IPv6 addresses, so I configured the peers. One of the peers came up right away, IPv4, IPv6, everything is great. For the other one, I noticed that the v4 peer came up but the v6 didn't. I e-mailed back and said we seem to be having a problem establishing an adjacency here, can you help me troubleshoot? He replies: we don't actually peer on IPv6 yet. I am thinking, he has an address, it's in PeeringDB; why would that be? I wonder, is that common? Do a lot of people do that?

So, I decided that for the sake of this talk, I would have to define some scope. So I picked the top ten exchanges in Europe by membership: AMS-IX, DE-CIX Frankfurt, LINX (the Juniper LAN), DATA IX in Frankfurt, EPIX in both Warsaw and Krakow, Moscow IX, NL-ix, and France-IX in Paris. Those were the top ten I chose when I wrote this presentation.

So, those are the ones that I wanted to look at. I thought, okay, I'm going to look at how many members have been assigned IPv6 addresses on the IXs. Then I started looking and thought, that doesn't tell me anything. What seemed more useful was to figure out how many members had IPv6 addresses compared to the number of v4 addresses assigned to them. That gives you this graph here, where we see that it's close to a hundred percent on most of the exchanges. Some of them I think are a little lower just because they are being a bit more honest about what's actually being used versus what's assigned; maybe people are only asking for addresses on those exchanges if they are actually going to use them.

So I started with that. And then I thought, okay, how many of these addresses are actually peerable? To work that out, I looked at the ones that were in our tables and were actually pingable on the exchange, my assumption being that if you had a working address on the exchange, chances are you were available for peering. When I did that, I got this data here. Now, this is a great presentation to be doing in the Netherlands right now, because your peering exchanges are awesome. AMS-IX and NL-ix are right up there, with the majority of the IP addresses listed on the exchange reachable and peerable; that looks really good.
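The measurement described here, counting how many of the IPv6 addresses assigned on an exchange LAN actually respond, can be sketched roughly as follows. This is a minimal illustration, not Susan's actual methodology; the ping-based checker and the address list are assumptions.

```python
import subprocess

def is_pingable(addr: str, timeout_s: int = 2) -> bool:
    """Send a single ICMPv6 echo request and report whether it succeeded."""
    result = subprocess.run(
        ["ping", "-6", "-c", "1", "-W", str(timeout_s), addr],
        capture_output=True,
    )
    return result.returncode == 0

def reachability_ratio(assigned_addrs, check=is_pingable) -> float:
    """Percentage of assigned addresses that answer the reachability check."""
    if not assigned_addrs:
        return 0.0
    reachable = sum(1 for a in assigned_addrs if check(a))
    return 100.0 * reachable / len(assigned_addrs)
```

With a stubbed checker, a LAN where 11 of 100 assigned addresses respond yields the 11% figure mentioned later for EPIX Warsaw.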

Then we start to go down, and we see it trending down towards EPIX in Warsaw, which actually only has 11% reachable in this data. I thought, well, okay, that was interesting; maybe we have some room for improvement, but just to be on the safe side, let's get some more data. So I looked at some more European exchanges, maybe not the biggest ones; I picked some popular exchanges, a bit more mid-range in terms of membership, where the assigned IPv6 address ratio ranges between 80 and a hundred percent: FICIX, INEX in Dublin, UA-IX in Kiev, Speed-IX, SwissIX, BCIX, NIX.CZ, DE-CIX Madrid and THINX in Poland. All of those exchanges, to see if there was a different trend.

Then I started looking at that data, and it really wasn't all that different. There are a lot of exchanges where around 80% of the IP addresses are actually reachable and ready to peer, but then we trail off towards the end, and we have a bunch of exchanges where the majority of the IP addresses that are assigned are not available for peering.

So, I wondered, is there a way to correlate that data? It did seem that as I got a little further east in Europe, the tendency was less availability for IPv6 peering. So I thought, I could take a look at that. I remembered doing some research for another talk where I noticed there was a correlation between the percentage of the population that are Internet users and the success of an Internet Exchange, and it seems to correlate in this case with IPv6 availability.

So, if we look: the Netherlands is very saturated, and so is the UK actually; I should have put them on the slide, they are also at 95%. Germany, Switzerland, Finland, Spain and Ireland are all high. As you get to places where the Internet isn't as highly saturated among the population, that correlates with IPv6 peering not being as highly available.

And that made some sense to me, because when you start the Internet in a country, of course you start with v4. Even today, in places like Africa and South America where there is very low saturation and people are just building networks, they start out with v4, and it's not uncommon to find v4 only. So that didn't seem that surprising, but it does kind of pinpoint where the work needs to be done.

So, in other words, these figures show that you can assign a network an IPv6 address, but you can't make it peer. Based on what we just saw, a few conclusions seem reasonable. IPv6 is alive and well on European exchanges, and it's actively encouraged. A large percentage of western European networks are routing IPv6, and on most exchanges the percentage is higher than the global average of networks doing IPv6, which is 27.3%. As a matter of fact, I thought, what if I took those 20 exchanges and averaged the IPv6 reachable addresses? I came up with 54%, which is double the global average. So, all in all, the European exchanges aren't doing badly.

If a network isn't peering over IPv6, I'm assuming it's probably because they haven't deployed it.

Okay, so, if you have been sleeping through most of this meeting, this might be the first time you have heard that the RIPE NCC has distributed its final /22, which it did on the 2nd of this month, and they are down to a million or fewer addresses that they'll be distributing from the remaining space they have.

So, if you are thinking, well, I can NAT my way out of this: that's true, but eventually you are going to need more IP addresses. One IP address doesn't NAT the whole Internet; you have got to have more. And it's not just Europe, which is rather saturated, with a huge amount of Internet adoption already and, you might think, not a lot of growth. The reality is we're global. There are lots of places in the world where more and more people are coming onto the Internet every day. More IP addresses are going to be needed, those users are going to want to reach you, and you are going to need IP addresses to handle that.

Of course you can still buy IP addresses. Current pricing that I saw on some sites was around $21 average per IP; if you were getting a /24, /23 or /22, it ranged from about $19 to $22 depending on what type of allocation you were looking for.

If you believe some people, they say that IP address prices are going to double over the next couple of years. All I know is what's happened in the past: back in 2011, Microsoft bought 666,624 addresses from Nortel for $7.5 million, which established the price of an IP at roughly $11.25. The demand then wasn't as high as it is now, and the price has already doubled since. So I would say it's not unreasonable to expect that to happen again. I don't know when; if I did, I'd buy some IP space.
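The per-address price from the Nortel sale follows from simple division, using the figures as quoted in the talk:

```python
total_paid_usd = 7_500_000   # Microsoft's payment to Nortel, 2011
addresses = 666_624          # IPv4 addresses in the deal

price_per_ip = total_paid_usd / addresses
print(round(price_per_ip, 2))  # prints 11.25
```

At today's ~$21 quotes, that benchmark has indeed roughly doubled.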

IPv4 transactions are increasing year over year as the RIRs exhaust their allocations. And I took a look at that because it does kind of demonstrate where we're going.

You see, this data is from our friend Geoff Huston; I took it off his site, and I credit him at the end of the slides. We see the transfers from 2012 to 2018, the last full year of data, ramping up as the RIRs exhaust their supplies, and I expect that to continue. It is a barometer of what the market is going to be like.

So, it's obvious that the increasing number of transactions is a reflection of demand, and demand is what will increase the price of IPv4 space.

IPv4 addresses will soon be available only through address brokers, and as more users are added to the Internet, demand is going to rise. So, the marketing and transfer of legacy IPv4 blocks means most networks will get IP space, but demand and speculation are definitely going to put pressure on prices.

It's obvious that more peering on the Internet Exchanges is going to drive more IPv6 deployment. What might not be as obvious is that the growth of IPv6 networks favours those who aren't actually interested in IPv6 and want to stay on IPv4. When more traffic moves to v6, the demand for IPv4 resources lessens. So if you want to be one of those people standing in the corner as the last ones on their IPv4 space, you should be encouraging all of your neighbours to go to IPv6, so the IPv4 addresses are left all to yourself.

So, when networks don't deploy IPv6, they put pressure on the IPv4 supply, and that increases prices and the cost of operating networks. So, that is your motivation, no matter what your politics: if you are pro-IPv6, obviously you want peering; if you're not that excited about IPv6, you should still be encouraging it. One way or the other, we should get more peering.

So if you do think that more networks need to be routing traffic over IPv6, there is something you can do about it. First, whenever you peer with someone on an exchange, ask them to turn up an IPv6 session when you ask them for the IPv4 session. You'd be surprised how many people do not do this. It turns up more IPv6 peers, but it also helps create the expectation that people peer over IPv6.

The other thing I would tell you to do is advertise your IPv6 prefixes and tell other networks to advertise theirs. You might think, well, wouldn't they do that? I can tell you I turn up peers with IPv6 all the time, and it isn't uncommon when you turn a peer up for them not to advertise any prefixes, because a lot of the time people will just filter everything, so that no prefixes are being advertised, and then go back later during a maintenance window to change the routing. That's not uncommon, but what I found is that sometimes when I go back later and look at these peers, they still aren't advertising any prefixes; they just kind of turned up the peer. So call people on it: say, hey, I want your prefixes, and ask for them. Create the expectation that they should actually be routing this stuff.
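The follow-up check suggested above, spotting v6 sessions that are established but carry no prefixes, could be automated along these lines. The `PeerSession` structure is a made-up stand-in for whatever your router or monitoring system exports, so treat this as a sketch, not a real integration:

```python
from dataclasses import dataclass

@dataclass
class PeerSession:
    asn: int                 # peer's autonomous system number
    family: str              # "ipv4" or "ipv6"
    established: bool        # BGP session is up
    accepted_prefixes: int   # prefixes received and accepted

def silent_v6_peers(sessions):
    """ASNs whose IPv6 session is up but advertises zero prefixes."""
    return sorted(
        s.asn for s in sessions
        if s.family == "ipv6" and s.established and s.accepted_prefixes == 0
    )
```

The resulting list is exactly the set of peers to e-mail and ask for their prefixes.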

And then check back with your IPv4 only neighbours from time to time, to see if they have added peering. If they have, go ahead and you know, peer with them and get that done. But just continue to ask, because you never know.

So, in summary, I just want to say that IPv6 participation on Europe's Internet Exchanges is better than the global rate, but obviously there is some room for improvement. And while adoption continues to increase, IPv4 is here to stay for the foreseeable future; and no matter what your protocol politics, peering with IPv6 will help you achieve your objectives.

Does anyone have any questions?

(Applause)

CHAIR: So questions.

AUDIENCE SPEAKER: Jen Linkova, IPv6 Working Group Chair. Thank you very much. I have a few comments; actually very good timing, because I have been participating in some discussions where, on the one hand, people who deployed v6 in their networks see up to 70% of traffic moving to v6, while at the same time, if you look at the Amsterdam exchange, they report 3% of the traffic being v6. So I suspect the percentage of people peering has nothing to do with traffic levels, because it looks like most of the v6 traffic is not going through exchanges anyway.

SUSAN FORNEY: I think there is some truth in that, but I would also argue that if you do not have a network connection, the traffic is never going to go. I would go back to the old "if you build it, they will come"; and they certainly aren't going to come if you don't build it.

AUDIENCE SPEAKER: I just wanted to comment on your point that networks with no v6 on an exchange probably do not have v6 rolled out. I think it might be some legacy thing, where people turned up their port and did not enable v6. Have you looked at the percentage of networks that do not have v6 on the exchange but still advertise a v6 prefix through some other peerings?

SUSAN FORNEY: No, I didn't look at that. But that's, you know, honestly that might be good to follow up on.

CHAIR: Okay. Then we got the chat question.

AUDIENCE SPEAKER: On NLix, are there more version 6 addresses reachable than assigned?

SUSAN FORNEY: Yeah, and I think the reason for that is that I'm looking at data from my BGP at HE.net, and it's possible that the IP addresses I can see live on the network, Hurricane Electric being present on all these exchanges so I can look at the live data, differ from what they report. In those cases it's a couple of addresses, not hundreds.

AUDIENCE SPEAKER: And a comment: I didn't see Netnod or Sweden on the slides. Let me just quickly read through all the comments... I think that is all I need to read out loud. So that was just a comment; there is no question attached to it.

AUDIENCE SPEAKER: Blake, multiple hats. I have a comment, not a question, so I will see: does anyone have another question first?

AUDIENCE SPEAKER: It's more an answer to the previous question about more addresses being reachable than assigned. You obviously have some monitoring system that needs to be in the peering LAN, so you can find things that are not publicly assigned but are there; that is one of the points. And concerning the percentage of IPv6 traffic on IXPs: I was previously working at an ISP, and what we saw is that one of the two best performers in IPv6 is pushing PNIs pretty hard, let's not say aggressively, but pretty hard.

SUSAN FORNEY: I think that's true.

AUDIENCE SPEAKER: Blake, first with my Zayo hat on. As a tier 1 carrier, I am more open in my peering policy on v6 than v4; also, my peering policy says I will not turn up any new IPv4-only peers. You will be dual stack or v6 only, but not v4 only. If you want v4, you will deploy v6, or at least turn up the session and come back later.

So with my other hat on, as a consultant for a network: we have not deployed IPv6 yet internally, but our standard procedure is to at least turn up the v6 sessions as we turn up v4, even if we don't announce anything; I tell my peers that. The sessions are there, so when I start announcing, all I have to do is start announcing, and I don't have to go back around to all my peers and say, hey, let's turn up some more sessions.

CHAIR: I think that's it. I don't see any more questions. Thank you very much.

(Applause)

And so next up we have got Marco with the BEREC consultation on the network termination point.

MARCO HOGEWONING: Good morning. I work for the RIPE NCC; I am one of the five Marcos. While we are polling: who has seen my e-mail to the list? Who has read it?

Okay, a few. For those of you who haven't: there is an e-mail on the list with a bit more detail. I don't have a lot of time to talk about it, but this is about the BEREC consultation on the identification of the network termination point. Why am I here? We saw this coming; this has been bubbling away in Brussels for a bit, and talking to the chairs, we thought this was the most appropriate venue. This is still about the tubes that make up the Internet, although it is about the end that lands in the user's home, so it's slightly off from the normal topic here.

So, what's this about? BEREC. If you are unfamiliar with them, BEREC is the body of European telecom regulators, the regulators that make up the union. They assist the European Commission in drafting legislation, but more importantly, when the directives come out, BEREC works on implementation guidelines, helping the national regulators implement and enforce the rules. That's a very important role, because it ensures that the single market really is a single market, that there is a level playing field, etc.

Also, as mandated, it's told to do that in cooperation with the industry, and that means they hold regular public consultations about these guidelines. This is one of them; there are currently six consultations, and I picked this one because I thought it was relevant.

So this is about the network termination point. The new European Electronic Communications Code defines the NTP; the text is here, and there is a bit more surrounding it. This is not entirely new; the previous text also covered the NTP. What makes this different is that it's broader and focused on all-IP networks; with a bit of imagination, you can envision even a set-top box for digital television coming into play here.

As the code is now out there, BEREC is working on implementation guidelines, so they have drafted a set that's now open for consultation, and in it, what they focus on is whether the equipment is part of the network termination point or not.

This is quite important, because remember net neutrality: under the net neutrality guidelines, end users shall have the right to use terminal equipment of their choice. We are talking about the same thing. Bottom line, what we have here is your modem.

So, BEREC, building on net neutrality, added that the NRAs should monitor this and really look into whether the user has this choice; but it says there might be objective technical reasons why a provider would mandate the use of a particular type of terminal equipment, a particular type of modem.

Further down the line, implementation varies in national legislation. In Germany it's there, and it's also highly valued; I think the Netherlands has some provisions there. There are also a lot of countries that didn't even implement the specific rule in national legislation. So here we are: we have now got different texts all kind of aiming at the same thing.

So, this is where the current consultation really focuses: is your modem part of the NTP or not? You can see where this is going, because when it's part of the NTP, it's part of your provider's network, so it's their problem. If, on the other hand, the NTP really is the wire, the user should be able to go down to the nearest shop and buy something; and yes, of course, if it breaks, he owns the pieces.

The draft guidelines in this case lay out the different options. They are very helpful; they have really nice little drawings that show the different options with the demarcation points, and there is also a big chapter that lays out some of the more objective technical reasons why this should or shouldn't be the case, etc.

So, yeah, it's public consultation, which means we, or you, as a stakeholder, can provide input.

We don't really have an opinion; basically we're neutral from an NCC perspective. We see benefits and downsides to them all, but further down the line, whichever side the coin drops, we have to adjust. For instance, when we talk about IPv6 adoption, it's a different conversation with an operator that owns and controls all the modems, versus: oh yeah, I can switch on IPv6, but who knows what's going to happen with all my consumer modems. Further down the line, you may have seen the presentation about SPIN, a project to do IoT security; there is a framework in the IETF, and all these kinds of new technologies need a chunk of market space to be viable. They need to work; you need sufficient momentum. Bottom line: whatever the chicken-and-egg problem, it is sometimes easier if you know that the single egg you are breeding actually contains a million chicks.

Further down, what we can also see is that if you leave it to the market, to the user to go down and buy the modem, probably the cheapest one will be the preferred option, and you will probably see more talk about regulation and mandatory certification of modems, making sure, to protect the consumer, that the modem does what it needs to do.

It can go either way and we'll just adjust our strategy accordingly.

So why am I here? Well, what do you think? We think this is a nice opportunity for RIPE, as a broad group of experts, to look into this and provide input. And we're happy to communicate that input on behalf of RIPE, based on rough consensus in this Working Group. And it's really that, because I don't see much value in us going back and saying: there is a group that says A, there is a group that says B. We can use your influence here; we can provide guidance based on technical arguments, but it has to be to the point.

So look at the draft guidelines and help form that opinion on the mailing list, and I'm sure we can help draft it. But when you look at it, think about it: is there any harm in what's proposed? It's like the PDP last call. What do you think is the best way forward? This is probably still a point where we can tip the scales a bit in either direction. I'm not saying it's going to be fully adhered to, but we can try and influence it.

And also importantly, as you are the technical experts here, have a look at those technical arguments. Are they really fair? Are they really objective? I'm sure BEREC will appreciate our feedback on those.

To be clear, if there isn't a consensus, we will not submit a response, but of course it is a public consultation, so you can always make your own contribution. And if you have a very divergent view from the group's consensus, still come in and say: wait a minute, I think it's different.

You have seen the mail, or you can look it up on the mailing list. If you look at the BEREC site, it says the deadline for this is 21 November, but of course we need a few days to process it; I need to coordinate with the chairs on the final response, making sure it's all aligned. So we kindly request you to try and form that consensus by close of business on the 15th of November. That gives us around three working days to sort it all out.

I really recommend looking at the full guidelines. Here is the URL, and the QR code, because the URL is really, really long; it's also in the e-mail. Please have a look. I am looking forward to your opinions. While I'm here, the main call is that somebody needs to get the ball rolling, somebody needs to start the discussion, and we can help shape it from there in our role as secretariat. Thank you for your time. Looking forward to your responses, and maybe the Chair allows me one or two clarifying questions; I know we're very much out of time here.

CHAIR: In the interests of time I suggest that we take this to the mailing list. So, thank you Marco.

(Applause)

Next up is Florence with the connect update.

FLORENCE LAVROFF: Hello everyone. So, for the connect update, I am jumping in: Bijal unfortunately can't make it today. So Bijal, if you are watching us right now, we are thinking of you. I'm just going to go through the couple of slides that she has provided us.

So let's get this started. The update provided by Bijal is about the seven IXPs that you can see here. The first one is MIX: MIX is having their salon in November, they are going to celebrate their 20th anniversary next year, and they have a new reselling programme.

Next one is DE-CIX. DE-CIX now has four locations; they have a fancy academy where you can learn a lot of stuff, feel free to have a look. They are also interconnecting Madrid, Frankfurt and Marseille to create more peering options for the connected networks.

Next one is GR-IX, our friends from Greece. An interesting fact here: the brand is new, and the exchange is doing fine.

The next one is Interlan, our friends from Romania. As you can see, they are starting to connect peers on 40 and 100 gig ports.

Next one is LONAP. An interesting callout here is support for BGP large communities. Well done, guys.

And Netnod, our friends from Sweden. As you can see here, they have a new product, the IX VLAN, which allows more remote peering options. And to finish with this presentation, the last slide.

This is about the IX-F. Here we have some links to where the data is available, and you can see that there has been an update of the export JSON. I think that is it for this presentation. If you have a question, I think the best is to go to the people working for the IXs mentioned; so if there are people here from MIX, etc., please raise your hands so that people can find you in the room. Do we have someone here... okay, thank you guys for raising your hands. Someone from MIX? I don't see anyone from MIX here. Ah, thank you guys. All right. Do we have another minute?

CHAIR: I think we have. You can do your updates.

All right. So, the next one is another quick update about cool publications and stuff that we could see all around our beautiful connect topic.

The first point is about the EPF presentations; a lot of cool stuff there. In particular, I'd like to call out the one from Job Snijders about the difference between an IXP and an ISP. And there have been some other cool presentations on RIPE Labs. Of course we have the one from Ignacio Castro, who presented at our last connect section back in Iceland. And the other one is routing to and from the Netherlands; the chairs of this Working Group really like all those updates. So thank you, Marco, for submitting this one to RIPE Labs.

And this is it. If you have like any other callout or any other comment, feel free to do this right now and come to the mic.

CHAIR: Thank you. Thank you, Florence. So, we are actually running one or two minutes late, so I suggest that we drop the offline discussion, if that's okay with everyone. Okay, no rotten tomatoes; I'm okay. So, no questions for Florence, or comments? Then that leads me to the closure, I think. Any other feedback, yeah?

Oh, I have one mention. We made a policy proposal at the last RIPE meeting to increase the size of the IXP pool from a /16 to a /15, and that has been accepted.

So that's a good one.

And so... okay. Then we thank you very much for your attention, and we will see you at the next RIPE meeting. Don't forget to rate the talks, and with that we close the session.


LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC
DUBLIN, IRELAND.