Breakthrough to the Other Gateways: A Deep Dive Into the Gateway Network

45-minute video / 33-minute read

Speakers

Brad Fischer

Sales Engineer III

Inductive Automation

Joe Rosenkrans

Enterprise Support Account Manager

Inductive Automation

Multi-gateway deployments are becoming more commonplace, and Ignition's gateway network provides the backbone for redundancy, enterprise management, and sharing data between gateways. Join us for this session and take a look at various Gateway Network parameters and settings that drive customer solutions.

Transcript:

00:01
Bradley Fischer: So, there are many ways to share data between Ignition gateways. Scripting, OPC, shared databases, MQTT, APIs, etc. But today, we're gonna focus on the Gateway Network. It can be leveraged to ease gateway loading by splitting the load amongst multiple servers. In addition, network and functional segmentation can be attained through its use. For example, the back-end gateway on the left could subscribe to 75 devices while handling data collection and alarming duties. By using the Gateway Network, it can provide the front-end gateway with real-time tag and alarm information. We'll look at some examples later on, but first, I wanna give Joe here the opportunity to describe how the Gateway Network operates.
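
As a rough illustration of the scripting option Brad mentions (a sketch, not from the session), Ignition's system.util messaging functions can target a remote server over the Gateway Network. The project name "MyProject", the handler "updateRecipe", and the server name "backend-gw" below are hypothetical.

# A minimal sketch (Jython, gateway scope) of messaging another gateway over
# the Gateway Network. Names here are hypothetical; the message handler must
# be configured in the target project on the remote gateway.
payload = {"line": 4, "recipe": "batch-42"}

# Fire-and-forget: deliver to the named Gateway Network server.
system.util.sendMessage(
    project="MyProject",
    messageHandler="updateRecipe",
    payload=payload,
    scope="G",                     # run in the remote gateway's scope
    remoteServers=["backend-gw"],  # Gateway Network server name
)

# Request/response: block until the remote handler returns a value.
result = system.util.sendRequest(
    project="MyProject",
    messageHandler="updateRecipe",
    payload=payload,
    remoteServer="backend-gw",
)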

00:56
Joe Rosenkrans: Alright, so first, before we discuss the architectures, I wanna talk about all the knobs and levers that you can adjust and establish some ground rules and key aspects of the Gateway Network. And I am going to try my best not to say Gateway Network a lot, but it comes up a little too organically. So, the Gateway Network is established with a WebSocket. You have an incoming and an outgoing side between our gateways. It utilizes message queues, and it allows secure gateway-to-gateway connections. The WebSocket itself is bidirectional, so you only need one. We have made improvements to the Gateway Network over the years, so you shouldn't be able to make duplicate connections, meaning both an incoming and an outgoing connection defined on both systems. The outgoing side is gonna negotiate the WebSocket by sending out an HTTP request and then requesting to upgrade to the WebSocket. If SSL is needed, the connection will start as non-secure and then move over to the secure port.
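
To make that handshake concrete, this is what a standard WebSocket upgrade (RFC 6455) looks like on the wire; a generic Python sketch, not Ignition code. The "/ws" path is a placeholder for the gateway's internal endpoint, and 8060 is assumed here as the default Gateway Network port.

# Generic RFC 6455 upgrade handshake, useful when confirming firewall rules
# with a packet capture. Host, port, and path are placeholder assumptions.
import base64
import os
import socket

HOST, PORT = "gateway-b.example.com", 8060

key = base64.b64encode(os.urandom(16)).decode()
request = (
    "GET /ws HTTP/1.1\r\n"
    "Host: %s:%d\r\n"
    "Upgrade: websocket\r\n"            # ask the server to switch protocols
    "Connection: Upgrade\r\n"
    "Sec-WebSocket-Key: %s\r\n"
    "Sec-WebSocket-Version: 13\r\n\r\n" % (HOST, PORT, key)
)

s = socket.create_connection((HOST, PORT), timeout=5)
s.sendall(request.encode())
print(s.recv(1024).decode())  # expect "HTTP/1.1 101 Switching Protocols"
s.close()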

02:01
Joe Rosenkrans: And, as I mentioned before, we have an outgoing and incoming side. So, the way that I always like to think about this is that we are talking about a highway. The outgoing side is going to be the lanes that your traffic utilizes, and depending on your implementation, you might need more lanes, you might need fewer; it's gonna depend on what you're doing on the network. Of course, more lanes are gonna correlate to more CPU usage. And then, on the incoming side, we have a shared web server pool. This is gonna be utilized by many subsystems. It's not dedicated to the Gateway Network. A couple of those include the Vision clients and the Ignition webpage. I just wanna emphasize that that's shared. So, if there are any saturation issues, you're gonna see them beyond just the Gateway Network. And if you imagine a hub-and-spoke architecture with 50 gateways connecting to one system, the load on that shared pool can compound quickly. So, if you're running into issues with Gateway Network stability, or you're unsure whether you need to make adjustments to the number of lanes you have on your highway, the Support Division can help you identify if there is actually a root problem, or if, based on your implementation, you simply need more lanes.

03:34
Joe Rosenkrans: Now, the incoming side in this metaphor is going to be the exit of your highway. So, if you have a dedicated set of threads on one side and a shared pool on the other, those could be areas of contention. So, what happens if those threads become oversaturated? What if your remote services start having issues? How do you know if you need more lanes due to high traffic? Or maybe there's a car accident and things can't get through. Alright, so let's address what happens with the traffic jam first. This is your bumper-to-bumper traffic; all the lanes are in use. Traffic might be moving slowly, or it might be at a dead stop. In this case, that's gonna be your send-and-receive threads, fully saturated by the services that are coming across them. The main thing that I wanted to point out is that you can monitor the send-and-receive threads from our status page, and you can see if they're becoming points of contention.

04:38
Joe Rosenkrans: What you're gonna do is see how many tasks or messages are on a receive thread or a send thread, and then compare that against the outgoing or incoming tasks listed below to see what those tasks are. So you can see if maybe there's a task that's just running really slow and holding that lane up, or maybe there's a really high frequency from a particular subsystem that you might need to take note of. So, what do you do if your traffic jam is caused by an overzealous tag history query taking up all of your lanes? Your system is saturated with tag history requests, and you're getting overlays and errors on your remote services, such as your remote tag providers. So, what if you could turn that traffic jam, where none of your lanes are moving, into just a car accident, so that only one lane is congested and the other lanes are allowing traffic through?

05:44
Joe Rosenkrans: Well, that's where managing your message queues comes into play. The message queue is gonna be your on-ramp onto the freeway. And what you can do is prioritize by flavor of data, kind of like using the carpool lane. You can say this guy matters more, so he's gonna get access to that send thread before another system. This allows an excess number of messages to be restricted. You will still run into an overlay, though.
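
As a mental model of that carpool lane (a toy Python sketch, not Ignition's internals): give each flavor of data its own bounded queue, drain higher-priority flavors first, and drop messages beyond a queue's cap, which is what surfaces as an overlay on the remote side.

# Toy prioritized message queues feeding a limited pool of send threads.
# Flavors, priorities, and the cap are illustrative values only.
PRIORITY = {"alarm": 0, "realtime-tag": 1, "tag-history": 2}  # lower = first
QUEUE_LIMIT = 100  # max queued messages per flavor

queues = {flavor: [] for flavor in PRIORITY}

def enqueue(flavor, message):
    """Queue a message; drop it if that flavor's queue is already full."""
    q = queues[flavor]
    if len(q) >= QUEUE_LIMIT:
        return False  # dropped: the remote service sees an error/overlay
    q.append(message)
    return True

def next_message():
    """A free send thread always takes the highest-priority waiting flavor."""
    for flavor in sorted(PRIORITY, key=PRIORITY.get):
        if queues[flavor]:
            return flavor, queues[flavor].pop(0)
    return None  # nothing waiting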

06:19
Joe Rosenkrans: So, this amounts to setting a limit on how much one system can utilize the send-and-receive threads, and now you're hitting a limit on tag history instead. Maybe your easy chart on the remote side is still getting an overlay, but all your alarm data, your remote tags, and all those other services are still making it through. Now, you might be wondering whether you're just trading one error for another. Before, your services weren't responding because all the lanes were in use; now you've got a specific service that's having issues because you've limited the number of messages coming across. And I agree with you. So, in those scenarios where you feel there is a need to limit the number of messages, I highly suggest talking to our Support team, and we will help you identify if maybe there's a tag history query that's taking 10 times longer than it's supposed to, or a runaway alarm query that's much more frequent than you need it to be. Because those are normally the root cause, and putting on a limit is just trading one issue for another while allowing other services to work.

07:28
Joe Rosenkrans: Now that we've discussed how to get on the freeway, which is your outgoing connection, we need to turn our attention to the incoming side. This is your off-ramp. The incoming side has a processing queue, which represents the total amount of traffic being received. It's not broken up by flavor, and it's not broken up by receive threads. The off-ramp through the processing queue basically allows you to get into your city, do whatever you need to do, and get back on the freeway. The processing queue is your buffer: it holds the raw set of tasks coming through while the incoming gateway processes them, and if tasks start to build up, this is the queue they build up in. And if you hit the max allowed messages, that car is gonna miss the off-ramp. There's nowhere for it to go; the queue is saturated.

08:26
Joe Rosenkrans: This is a configurable setting, and it is something that you can monitor through the status page, which is gonna show the processing queue, because there is no send and receive on this side. And again, you can monitor the tasks, so if there are any long-running tasks, they'll hopefully be easily visible. Alright, so, my poor highway metaphor is at an end. But I wanted to point out a few other key aspects that come into play for the incoming connection. You can define two-way authentication: it might not be SSL on the outgoing side, but the incoming side can require that both sides use SSL. The connection policies will allow you to dictate who is connecting between your gateways. You can have it set to unrestricted, so anyone can talk to anyone. You can have a whitelist, so only specific IPs or hostnames. You could also have it set to approve-only, so you have to go in there and actually approve the request before the connection becomes active. Now, Brad's gonna elaborate on this a lot further, but I did wanna mention that the incoming side is where you define how many proxy hops. This is gonna come into play with everything Brad is gonna talk about...

09:54
Joe Rosenkrans: ...in length, but I just wanted to say, the incoming side is where you configure how many proxy hops. It is 0 by default. We've made improvements here: if you remember all the way back to 7.9, this used to be a Boolean, and it was true. We've learned from that, and now it is an integer. You can define how many hops, and the default is off. So, Brad, it is up to you.

10:24
Bradley Fischer: Yeah, thank you very much for that explanation, Joe. It's important for us to understand those knobs and levers that you described behind the scenes in the Gateway Network that allow us to do some tuning to make sure that it operates the way that we want it to. It's also important to remember that the Gateway Network operates on point-to-point connections, but it has evolved to handle more complex architectures to meet user demand. This scale-out architecture includes Gateway Network connections between each of the three nodes. Real-time and historical tag data, as well as audit logging and alarm journaling, can be shared between each. As you can see, any time a gateway needs information from another, it has a direct connection. But if we unfold this architecture, it provides segmentation between our OT and IT layers, a very desirable security posture. You'll note, though, that gateway A has no direct connection to C. The solution to this problem is proxying messages through gateway B. The specific setting in the Gateway Network configuration that Joe just alluded to is called "Allowed Proxy Hops," and it's the number of gateways a message can pass through to get to a destination gateway. In this case, a value of 1 allows messages arriving at gateway B to hop once. Thus, messages originating at gateway A can hop through B, arriving at C, and vice versa. Scaling this up, we could imagine this architecture.

12:11
Bradley Fischer: How can we have gateway A, let's say it's an I/O gateway for a manufacturing line within a plant, connect all the way up to gateway E, which is providing company-wide dashboarding? We'll start by making an outgoing gateway connection from gateway A on the left, targeting gateway E on the right. Just like we saw in the smaller three-node architecture, this isn't going to work. We have no direct path between gateway A and gateway E; gateway A is only connected to gateway B. So we can then go into gateway B's configuration and set Allowed Proxy Hops to a value of 1. This will allow that incoming connection from gateway A to be proxied through B and target C. But notice that I just said it would target C. We actually are targeting E. So the question becomes, how many proxy hops are needed? Just like before, applying a value of 1 only allows those messages arriving at B to be passed through once. Could we just put in a large value like 999? We certainly could, but it isn't recommended, because that can lead to inadvertent routing and possibly violate our corporate security posture. Instead, we can be mindful and take the time to determine how many hops we need at each proxy gateway.

13:44
Bradley Fischer: We wanna be purposeful about our choices and understand their ramifications. We can see here that gateway B needs to forward our connection request across three hops: B to C, C to D, and D to E. Therefore, we set its proxy hops value to 3. For those of you that are familiar with networking concepts, this is very similar to TTL, or time to live. Each time the message is passed through a proxy gateway, the TTL value is decreased by one. Once it reaches 0, it will no longer pass through any additional gateways. We can repeat this process for gateway C and see that it needs two hops. Finally, gateway D needs to proxy once to deliver any incoming messages to gateway E. With those proxy hop settings in place on gateways B, C, and D, we finally have gateway A connected to gateway E. So let's ask another question of this architecture. How can gateway E consume tags from gateway A? We have our Gateway Network successfully connected, so let's configure a remote tag provider on gateway E targeting gateway A. And that'll look something like this. Similar to before, we're making a connection from one end of our architecture all the way to the other. But this doesn't work. Our request now originates from gateway E, not gateway A. Gateway D is already configured to use proxy hops, but it's configured with a value of 1. Thus, our request from E is proxied through gateway D and arrives at gateway C.

15:44
Bradley Fischer: If we go back to that TTL idea, we end up with a value of 0, and the message is not proxied any further. If we follow the same procedure as before, we'll find that we need three proxy hops at D, two at C, and one at B. You'll note that this is the same number of hops at gateway C, but gateways B and D don't match. To rectify these different numbers, we take the maximum number of hops per gateway; otherwise, one of our two connection paths won't be valid. Finally, we arrive at a configuration that allows gateway A to make an outgoing connection to gateway E while also supporting gateway E subscribing to tags from gateway A, all while proxying through the various gateways in between. And while it was easier for me to represent this architecture horizontally here, it's truly a vertical architecture, especially if we think about something like ISA-95, where we have an OT edge gateway connected down to some devices, proxying up through perhaps a building gateway and a site DMZ into the corporate infrastructure.
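
One consistent way to model the hop math from this example (my reading of the walkthrough above, not Ignition source code): every intermediate proxy on a path must allow at least as many hops as remain after it.

# Toy check of the A-B-C-D-E chain using the final, maxed settings.
ALLOWED_PROXY_HOPS = {"B": 3, "C": 2, "D": 3}

def path_allowed(path):
    """Each proxy (every gateway except the two endpoints) must permit
    the number of hops still needed once the message reaches it."""
    for i, gw in enumerate(path[1:-1], start=1):
        hops_remaining = len(path) - 1 - i
        if ALLOWED_PROXY_HOPS.get(gw, 0) < hops_remaining:
            return False
    return True

print(path_allowed(["A", "B", "C", "D", "E"]))  # True: A reaches E
print(path_allowed(["E", "D", "C", "B", "A"]))  # True: E reaches A

With gateway D still at its original value of 1, the second check fails at D, which is exactly the broken reverse path described above.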

17:07
Bradley Fischer: But what if we had an actual horizontal architecture, where we have 10, 20, or 50 gateways deployed throughout a single plant floor? This brings into play the idea of Gateway Network proxy rules. Let's assume that our facility has 50 lines, each with its own gateway. The plant manager wants information from these 50 lines to be aggregated together and displayed to upper management and engineering. Each of the OT gateways will therefore need to connect to the single IT enterprise gateway. Note, of course, that this diagram doesn't show every single OT gateway, but you get the idea. If we look at this architecture in the Gateway Network live diagram, it would look something like this. This gives you a better idea of the scale that we're talking about here. You can see all of those individual connections made to the enterprise gateway there at the center. And as you can see, this is a fairly large deployment. We could take the architecture we just talked about and improve our security posture by adding a DMZ and proxying those OT connections through the DMZ up to the IT layer. The idea here would be to segment the OT gateways away from any of the other devices on the IT network.

18:42
Bradley Fischer: This results in 52 gateways and 51 total connections. As we learned in our five-gateway stack example, we need to enable proxying on the DMZ and set proxy hops to 1. But how does this affect our Gateway Network traffic? Our previous example was a very linear architecture; this one is much more of a hub-and-spoke. We collected some benchmark data that I'll share next, but first I wanted to clarify that we didn't create any tags, device connections, or scripts. This is solely monitoring what we call "service enumeration calls," which are basically the gateways going out over the Gateway Network and saying, "Hi, I'm Gateway 1, what services do you provide?" The target will then come back and list the things it's set up to provide. That's controlled through the security levels, and it can include things like those remote tag providers, remote history, remote alarming, real-time alarms, real-time tags, and even EAM and redundancy, all right over the Gateway Network. In the end, when we turn on proxying, the effect is that those 50 OT-level gateways are now all connecting to each other, since they can proxy through the DMZ. As you can see, our network traffic has increased dramatically, with transfer rates increasing around 750% and a 4,700% increase in message counts.
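
A back-of-the-envelope way to see why the numbers jump (my own arithmetic, not the measured benchmark): one proxy hop at the DMZ lets every OT gateway reach not just the enterprise gateway but every other OT gateway, so the number of gateway pairs exchanging enumeration calls grows by more than an order of magnitude.

# Rough pair-counting for the 50-OT + DMZ + enterprise architecture. The
# measured 750%/4,700% figures also depend on message sizes and on the
# per-minute, bidirectional enumeration, so this only shows the shape.
from math import comb  # Python 3.8+

ot = 50
direct_pairs = ot + 1               # 50 OT<->DMZ links + DMZ<->enterprise
extra_via_proxy = ot + comb(ot, 2)  # OT<->enterprise plus all OT<->OT pairs
proxied_pairs = direct_pairs + extra_via_proxy

print(direct_pairs, proxied_pairs)  # 51 vs. 1326: roughly a 26x jump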

20:26
Bradley Fischer: While this supports our desired architecture, the side effect is that we have a lot of cross-OT traffic that we don't need. Enter Gateway Network proxy rules. Introduced in 8.1.34, these rules can be applied to a gateway acting as a proxy gateway, that is, one with Allowed Proxy Hops set to 1 or greater. Each rule can be configured to allow or deny specific connections. Similar to other areas of the gateway webpage, these rules are matched from the top down and also support wildcards. Each rule is composed of a source gateway or gateways, a destination gateway or gateways, an optional description so that we can keep track of what we're trying to accomplish, and the allow or deny action. Note that these rules affect all Gateway Network services, not only the service enumeration calls that we benchmarked here. For example, denying traffic between two specific gateways prevents the proxy gateway from forwarding any messages from the source to the destination, be they service enumeration, EAM, remote tag providers, etc. We can apply these rules to permit gateways at the OT layer to communicate with the IT-layer enterprise gateway. The top rule allows connections from gateways with names beginning with the word "gateway," which is what I used in my example, to connect to gateways with the name "enterprise," which is what we named the one on our IT side.

22:09
Bradley Fischer: The next rule supports connections originating from the enterprise gateway, targeting the gateways down in the OT layer, just as we saw with our five-node architecture before. The bottom rule defaults to denying all other connections, effectively acting as a catchall for any other proxy connections that we have coming in; for example, gateway 3 attempting to communicate with gateway 12, or gateway 44 attempting to connect to gateway 2. Our proxy gateway in the DMZ now has proxy hops set to 1, as well as these three new Gateway Network proxy rules applied. The result satisfies our original goal of allowing the enterprise gateway in the IT layer to proxy through the DMZ and consume tags from each of the OT-layer gateways at the bottom. The last Gateway Network proxy rule we added denies any other traffic, preventing it from moving through the proxy. The yellow in the chart here indicates the data that was collected with the gateway proxy rules in effect. And as you can see, the rules result in a significant reduction of message and transfer rates. It's obvious that we don't want to implement proxying without these Gateway Network proxy rules. So let's hide those from the table and compare the default values, where we didn't have any proxying, against implementing proxying with those proxy rules applied.
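
To model how those three rules evaluate (a sketch using shell-style wildcards; Ignition's exact matching syntax may differ), first match wins, top down:

# Top-down proxy-rule matching mirroring the three rules described above.
from fnmatch import fnmatch

# (source pattern, destination pattern, action); first match wins.
PROXY_RULES = [
    ("gateway*", "enterprise", "allow"),  # OT gateways up to the enterprise
    ("enterprise", "gateway*", "allow"),  # enterprise down to OT gateways
    ("*", "*", "deny"),                   # catchall: deny everything else
]

def proxy_decision(source, destination):
    for src_pat, dst_pat, action in PROXY_RULES:
        if fnmatch(source, src_pat) and fnmatch(destination, dst_pat):
            return action
    return "deny"  # implicit default if nothing matched

assert proxy_decision("gateway3", "enterprise") == "allow"
assert proxy_decision("gateway3", "gateway12") == "deny"  # cross-OT blocked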

23:50
Bradley Fischer: You'll notice that there's an increase. This, of course, is expected since we do have additional connections, but the increase is more in line with expectations. We know the Gateway Network is extremely useful, allowing multiple Ignition servers to work together. Its flexible nature supports a myriad of services, including real-time and historical tag data, auditing, redundancy, and EAM, but this flexibility can also lead to network instability. By tuning threads and message queues, we can prioritize traffic and improve throughput. And with the addition of proxy hops, it supports the increasingly complex multi-layer architectures users are deploying. The addition of Gateway Network proxy rules is yet another improvement, allowing you to break through into larger and more complex architectures. We've made some additional improvements to the Gateway Network in 8.1. We introduced the ability for EAM upgrade tasks to resume sending files between the EAM controller and the agents that are requesting those files. Previously, if that 1.5 gig download was interrupted at any time while being sent between those two gateways, we would have to restart sending the entire file. And we found that a lot of customers were struggling with maintaining those connections, especially over cellular and remote connections out to, say, edge gateways.

25:28
Bradley Fischer: We also introduced the ability to have that upgrade file, that zip archive, stored on a remote server, say a network share, maybe on a Windows or Linux machine. The EAM controller can then inform the agents that they need to go to this third-party location, download that zip archive on their own time, and notify the controller when the download has been completed. We could then go through and tell the agent to execute the upgrade. There have been other improvements throughout 8.1. As you can see, the Gateway Network is evolving as Ignition does, and we're committed to continuing to improve it as we go forward. The improvements won't stop here. Ignition 8.3 has even more in store. Joe, would you like to tell us a little bit about what's been changed in the upcoming release?

26:35
Joe Rosenkrans: Yeah, so you might have heard about a lot of these topics yesterday. Actually, maybe just one of these, but everyone's wondering about 8.3. So there's a few things I wanted to put on your mind, because when you're upgrading a single gateway versus upgrading an entire architecture, there are a lot more considerations. So, the talk of the town: my recommendation would be to get your entire architecture to 8.1 before utilizing 8.3. Now, there are some things that I wanted to point out. One is that we've added the ability for an EAM controller to manage leased licenses. That means your edge gateways no longer need a connection to our licensing server; as long as the controller does, we can maintain those licenses. Behind the scenes, the Gateway Network is gonna be moving to Protobuf instead of Java serialization. This reinforces security standards. And then...

27:33
Bradley Fischer: And to be clear there, we don't have any active security vulnerabilities that we've discovered with Java serialization in the Gateway Network. If you were unable to attend Carl's session yesterday afternoon, he alluded to the fact that Java serialization simply presents more of an attack surface, and we want to go ahead and proactively move away from that. In addition, moving to Protobufs also brings some performance and efficiency improvements that we think the Gateway Network will benefit from.

28:09
Joe Rosenkrans: Absolutely. So as I mentioned before, 7.9 cannot communicate with 8.3. That also includes proxying, so you're not gonna connect 7.9 to an 8.1 to an 8.3. It's 8.1 to 8.3, and that's the only communication that's allowed. To elaborate on that a little bit further, when you think about upgrading your architectures, I did wanna point out that because of the improvements that have been made to the store-and-forward system, 8.1 is not gonna be able to store information for an 8.3 gateway if those flavors of data transition through the store-and-forward system. So keep that in mind when you're deciding which front-end or back-end gateways to upgrade.

28:52
Bradley Fischer: Yeah, that's absolutely right, Joe. So as you can see, we're continuing to improve the Gateway Network going forward. We're really excited that we're moving to Protobufs; I think that's gonna enable us to do some pretty interesting things with the Gateway Network. And of course, that change does carry some limitations. We can only have 8.1 connected to 8.3 gateways, and when we do so, it will actually fall back to the Java serialization route. But down the road, once we have a lot of 8.3 deployments, we'll be able to really take advantage of that Protobuf serialization across the Gateway Network. So with that, I would like to open up the presentation to Q&A.

29:56
Joe Rosenkrans: We're waiting on the microphone.

30:01
Audience Member 1: So with the performance data that you shared, is it accurate to infer that the difference between when the proxy rules were enabled and when they weren't was due to traffic between peer gateways, like gateway 1 and gateway 12? Okay. And those statistics, were they for the interface on the enterprise gateway, or a particular interface?

30:29
Bradley Fischer: I did collect that information at the enterprise-level gateway. Yes.

30:34
Audience Member 1: Okay. So is there still a lot of traffic going to the proxy server in the DMZ? Is that something that we have to be cautious about?

30:43
Bradley Fischer: There is gonna be additional traffic going there. The way the service enumeration calls work, they actually are hard-coded at one minute. The idea is that any gateway will go and try to interrogate the other gateways that it can talk to and get information back about what that target gateway supports, right? Do you support a remote alarm journal store? Do you support remote alarm pipelines? There's a lot of different services that we offer over the Gateway Network. So those requests will still go out and hit the proxy server, but then it's gonna go immediately to the proxy rule list. It doesn't match the first one, it doesn't match the second one. It matches the third, which says to throw it away, so that connection gets dropped there and isn't passed along.

31:31
Audience Member 1: So that little chatter is not something that we need to worry about from a performance standpoint?

31:38
Bradley Fischer: I don't believe so, because it's not even gonna have the ability to send back the information that says, "I can do remote tag providers, here's the list. I have these remote alarm pipelines available for you if you're interested." All of that information kind of comes back there. And I think with 8.3 removing module hot loading, there may not be a need in the future for these Gateway Network service enumeration calls to be scheduled at this one-minute interval that we have right now. Yeah, great questions.

32:16
Audience Member 2: Okay. So my system consists of separate development, test, and production environments, and they're all on 8.1; life is good. So down the road, we're gonna go to 8.3 at some point, so the first thing I'm gonna do is upgrade. And we heavily use EAM. New tags get pushed from my development system to my test system, we test, then we push everything to production through the EAM tools; works great. So in the future, when I go to 8.3, the first thing I'm gonna do is update my development system to 8.3. So now I've got a development system that has my tag databases at 8.3, and test and production systems that are 8.1. And then at some point I'm gonna upgrade my test systems to 8.3, and my production will still be at 8.1. When you were talking about the movement from serialization to Proto... whatever the heck, buff or whatever.

33:13
Bradley Fischer: Protobufs.

33:14
Audience Member 2: Right. So was it bus or buff? There's a ProtoBus too, a different kind of protocol. Anyway, the thing I'm asking is: is my system still gonna work? Am I gonna be able to have a development system that's at 8.3 using EAM to push tags to an 8.3 test system, but an 8.1 production system? You said it falls back, so...

33:37
Bradley Fischer: Correct.

33:38
Audience Member 2: I'm gonna be able to have my cake and eat it too, hopefully?

33:43
Bradley Fischer: In short. Yes. Enjoy that cake.

33:45
Audience Member 2: Okay. Yeah.

33:47
Bradley Fischer: The idea there is that we're keeping Java serialization in 8.3, so that we have that library and we're able to make those connections back to 8.1. But if you hear some of what Carl talks about when we get ready to introduce new major releases of Ignition, 8.3 is the time for us to make this kind of change.

34:10
Audience Member 2: Sure.

34:10
Bradley Fischer: And so we wanna go ahead and lay the foundation. We wanna take that first step by adding Protobufs there. And if we have a brand new deployment where it's all 8.3 gateways, we can go ahead and take advantage of Protobufs instead of using Java serialization. But for the exact case that you brought up, we wanna have Java serialization there so that we can make those connections back to 8.1. We know there's a lot of people that are using 8.1; that's why the community is here, where we all are hopefully using Ignition. That's our current version, the one we've been working on for years. We wanna make sure that those connections can still be established and maintained.

34:50
Audience Member 2: Okay. 'Cause eventually when I get everything done, I'll move my production system to 8.3 and then I'm at 8.3 everywhere. And then I'll be getting ready for 8.5 'cause we're not gonna do even numbers. Alright, well thank you.

35:03
Bradley Fischer: Absolutely.

35:08
Audience Member 3: So I have two questions. One is about the communication: do we need any firewall permissions in place for the incoming and outgoing connections? And the other one is, if we have the DMZ gateway configured with redundancy and the main gateway goes down, will we have issues, like thundering herd issues, when a lot of connections try to move to the backup gateway?

35:35
Bradley Fischer: Do you wanna take the first one?

35:36
Joe Rosenkrans: Could you repeat the first one? I was focusing on the second one.

35:40
Audience Member 3: Just the firewall rules. Do we need any firewall rules in place for the communications to work, like the outgoing and incoming connections? For example, the enterprise will be sending some data to the plant systems, right? So do I need some firewall rules so that the data can be passed back to the gateway?

35:57
Joe Rosenkrans: Yeah, you're gonna need the firewall to allow the WebSocket to be established. That's actually one of the reasons that I mentioned the transition from HTTP to the WebSocket. If you look at it through Wireshark, you'll actually see the request go out on your non-secure port, saying, "I would like to become a WebSocket." If you have SSL enabled, you'll see it transition from the non-secure to the secure port. And those all need to be allowed in the firewall for that outgoing request, so that the traffic can make it to your next gateway.

36:31
Bradley Fischer: And that's also one of the reasons that we have those outgoing and incoming connections. So in your example, or let's talk about that last architecture that we had on screen, any of those OT gateways would have outgoing connections attempting to proxy through the DMZ up to the IT gateway. That establishes the WebSocket, which is then two-way communication. So we don't actually ever need to open up any incoming ports on those OT gateways. You might need to open up an outgoing rule to allow them to get to the DMZ.

37:10
Audience Member 3: Okay. Got it.

37:12
Bradley Fischer: And what was your second question there again?

37:15
Audience Member 3: The other question is related to the thundering herd issue. For example, if you have a lot of connections to a server, and the server goes down and comes back up, there's usually a lot of load on that system to get the connections reestablished, right? So will we face issues if we have redundancy in place, or... you're getting it, right? Or should I rephrase it?

37:42
Joe Rosenkrans: No, I understand the question. I'm gonna answer it in a very roundabout way, so I apologize. One of the recommendations that I've always given for, let's say, a hub-and-spoke architecture is to have the outgoing connections defined on all of your edge gateways, where your hub has only the incoming side. One of the reasons that I do that is a very extreme scenario, but since the outgoing side negotiates your WebSocket, you can get to the point where your network card or your TCP stack itself starts coming under strain if it's managing hundreds of WebSockets, which would be your entire network's traffic for your architecture within Ignition. So in the dynamic you're talking about, the load shouldn't be any different than when you start up your master: when the backup comes online, it's gonna have those connections, traffic's gonna come in, and it should be interchangeable. But if you were in the opposite scenario, where the hub has all the outgoing connections, I would expect an increase in load (I don't have any numbers to give you), because that computer is essentially building all of those WebSockets and negotiating the conversation before Ignition traffic can come across.

39:16
Audience Member 4: I want to ask about this line: "Due to the changes in store-and-forward, 8.1 gateways cannot speak to 8.3." How is that going to impact remote history providers between gateways running on 8.1 and 8.3?

39:28
Joe Rosenkrans: So for a remote history provider from, you said 8.1 to 8.3?

39:34
Audience Member 4: Mm-hmm. With that one.

39:36
Joe Rosenkrans: Yeah. So 8.3 can store all of that history there. The issue becomes if you're trying to get an 8.1 gateway to store history for an 8.3 gateway: because of the changes in store-and-forward, the classes basically just don't come across. So your remote history provider in that case could query, but it could not insert data.

39:56
Audience Member 4: So we need to upgrade both the gateways?

40:01
Joe Rosenkrans: Ideally, but you have to keep your architecture in mind. Obviously, if that gateway's not doing any storage, then that change shouldn't matter.

40:08
Bradley Fischer: Right. And this is similar to the Protobufs inclusion: 8.3 just gives us the chance to make these kinds of fundamental changes. And in a similar way, we're making sure that we have the 8.1-compatible code available. Another question up in the balcony.

40:34
Audience Member 5: Yeah. So if you were trying to connect some of your gateways at the OT layer, let's say gateways 1 through 10 to gateways 40 through 50, would you have to set up 10 rules individually, or would you be able to do some sort of expression for that? And how would that scale if you had, say, 500 gateways rather than 50?

40:58
Bradley Fischer: That's a great question. Today you would need to set that up basically as individual rules. Now, you might, of course, be able to change some of the gateway names. That's one of the other important things to think about when you go through and name your gateways: how the names are gonna play into the architecture, and specifically into these kinds of rules. So if you were able to name them for a certain plant or a certain production line or an area of your facility, it might be easier to use a wildcard to match those 10 and target another 10 somewhere else, as in the sketch below.
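
Following that naming advice, a short sketch (hypothetical names) of collapsing those ten-by-ten combinations into a single wildcard rule, reusing the proxy_decision matcher sketched earlier:

# If gateway names encode plant, area, or line, one wildcard rule can stand
# in for dozens of individual source/destination pairs in the top-down list.
PROXY_RULES = [
    ("areaA-*", "areaB-*", "allow"),  # e.g., gateways 1-10 to gateways 40-50
    ("*", "*", "deny"),               # catchall
]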

41:40
Audience Member 6: So after you've created that network connection between the two gateways, you have full visibility of all the tag providers and audit providers and other items. Is there a way to hide or not expose all those different tag providers to some of the gateways?

41:58
Joe Rosenkrans: It's not so much hiding them, but we do have service security that you can implement to prevent storage access, query access, or access to that subsystem altogether.

42:11
Audience Member 6: Does that just do read and read/write, or can it completely prevent them from populating in certain lists on gateways?

42:19
Joe Rosenkrans: I know we can do read, write, and edit. I don't know...

42:23
Bradley Fischer: You also have the ability to turn off that service altogether. I've seen that done paired with MQTT. So MQTT sends your data, then you use a remote tag provider targeting a specific provider, but you don't turn on the ability to see any of those tags. The result is that only your alarm information is sent that way. So I've seen that as a pretty common method to have MQTT handle data while the Gateway Network handles some of your alarming. So yeah, there's a lot of configurability there too. I believe there's also the ability to go in and target specific tag providers when you set some of that security. We've got time for one more question, down here in the center.

43:05
Audience Member 7: Yeah. This question is about the Gateway Network's performance with high-latency, low-bandwidth networks. Can you comment on how 8.1 versus 8.3 is gonna behave, or are there any more handles on message size that can be tweaked to tune the Gateway Network to work with those types of networks?

43:38
Joe Rosenkrans: So I can't speak to 8.3, but I can speak to 8.1. On both sides of the conversation, we have the various ping timeouts: how many pings you can miss. I believe in the Ignition configuration file we also have the ability to change message sizes, but it might not be a good idea to quote me on that. So we do have some configurations that will make the traffic more digestible for the stability of the WebSocket. And then you have to take into consideration, if it's a tag history query request, how long is the client allowing for it to be responded to? There are timeouts spread throughout the platform.

44:21
Bradley Fischer: Alright, well thank you very much.

44:23
Bradley Fischer: Hope that you continue to enjoy ICC. Thank you again.

Posted on December 4, 2024