Inductive Automation Blog

Connecting you to ideas, tips, updates and thought-leadership
from Inductive Automation

Discussing Architectures Joanna Cortez Fri, 05/22/2020 - 11:23

Lauren and Shay invite Co-Director of Sales Engineering Travis Cox to walk us through architecting an Ignition system, and to show how Ignition can be used anywhere from a local HMI client to a full enterprise solution and everything in between. Travis shares best practices for getting started, building a foundation, asking the right questions, and building out an architecture. Travis begins with the structure of a basic Ignition architecture and explains the process of when and how to build out from there. We cover adding new functionality, adding redundancy, store and forward, Ignition Edge, MQTT, scale-out, adding a load balancer, streaming data to the cloud, and having visibility across an enterprise.

Transcription
00:08
Lauren: Welcome back to our sales training videos. I'm Lauren.

00:11
Shay: And I'm Shay. Today we'll be going through how to architect an Ignition system, showing how Ignition can be used anywhere from a local HMI client to a full enterprise solution and everywhere in between.

00:20
Lauren: We're sitting down with Travis Cox, Co-Director of Sales Engineering, to talk best practices and more. Travis, thanks for joining us.

00:27
Travis: Excited to be here.

00:28
Lauren: We are excited to talk to you a lot about the different Ignition architectures, but I thought we would start with kind of a softball question?

00:39
Travis: Okay, sure.

00:39
Lauren: Or maybe more of a bowling ball question: We heard a rumor that you are a champion bowler?

00:46
Travis: Well, I have been bowling since I was nine years old and I have been a regional pro for quite some years. I'm very much active in bowling leagues and tournaments, and I have 45 300 games, so bowling's been a pretty big part of my life.

01:01
Lauren: That's awesome.

01:02
Shay: Yeah, so with this, we're gonna be talking about architecting Ignition systems. But it's not just about picking out a diagram; we'll also be looking at how Ignition can scale, right?

01:14
Travis: Absolutely, yeah, we're gonna show how Ignition can be used anywhere from a local HMI, something really small, maybe just a historian or alarming solution, all the way to a full enterprise solution and everywhere in between. So, Ignition has a wide range of applicability here.

01:28
Lauren: Well, we're really excited to dive in, but we wanted to start with the basis: knowledge of Ignition is really central to building out any architecture.

01:38
Travis: Yeah, absolutely. In order to really build architectures correctly, we have to have a good foundation of understanding of what Ignition can do, and that really requires understanding what every single module of Ignition provides to the platform and all the features those modules provide; understanding the technical considerations there are; understanding user requirements; and of course, the environment that Ignition is going into, as well. You have to have the full picture, so once you have that, then you can start putting the pieces together correctly.

02:05
Shay: So, with looking at building this foundation, where do we recommend people get started with Ignition?

02:10
Travis: So, we recommend getting started with a really basic architecture. And that would be to simply have an Ignition server that could be installed on a desktop, a laptop, a high-grade server, could be a virtual machine, something small, that has a couple of modules for Ignition. So it could be our OPC UA module and drivers to some PLCs to basically connect to PLCs, bring some data in, and maybe the historian just to log historical data, to get some data into the database so we can see it over time. So, start really, really small with just a couple of modules for Ignition.

02:41
Lauren: And where would I put that server?

02:43
Travis: So that server could be anywhere from the plant floor, right next to the PLC, if you will; there are embedded PCs, and people who have desktops on the plant floor at their desk there; it could be all the way in the IT room, in a server room, centrally; could be in a virtual environment; could be on-premise, could be in the cloud. Of course, typically it is on-premise for these systems, but it can be anywhere; it just depends on what makes the most sense for where they wanna put it.

03:07
Shay: So, once we get started with maybe a smaller project, like you said, maybe using it just for a historian or for alarming, what do we need to do to start building that out and adding more functionality?

03:19
Travis: Yeah, so once we have that server in place, essentially, if we have a couple modules and we've got a project configured, we're getting some success, we can easily then add additional modules to add more functionality to that server. So in the example I used before, where I just had the OPC UA server and the drivers to some PLCs and the Tag Historian, which is just a local historian, then we can add the visualization, so maybe Vision or Perspective, and then we can start building out applications and providing those to people as a client anywhere within the facility, whether it's right there on the plant floor, or whether it's back at their desk, or even on their mobile devices.

03:51
Lauren: And let's say I have my system, but I wanna add some new functionality through a module, but I don't want it to affect my current system. How does that work?

04:00
Travis: Yeah. So, some people have a system already in place; you could easily add a module to it, but if they're worried about affecting that particular server, that instance, then they could simply add an additional server. We can break up our modules onto different servers very easily; so here, I've got two servers: I've got the existing system and then I've got a brand new one with just that new module or that new version or functionality, and we will connect those together. It'll be one big system at the end of the day, but it'll be separated on two different machines so they don't affect each other.

04:28
Lauren: So, with the architecture that we're looking at now we're seeing a single point of failure. So, what happens if the server fails? Can I add redundancy to that?

04:35
Travis: Yes, that's a great question and one that we get quite often. Certainly, if I have one central server, that's the one that's connected to all the PLCs and that's the one launching clients to everybody; if that server crashes, we're gonna be down. We're gonna lose data, we're gonna lose visibility, and that's not good, right? So, we've got to have protection against the server crashing, or potentially network outages, things like that. There are two ways we can do redundancy. One would be software redundancy, which is what we provide. So, we can have two Ignition servers: one would be the master, one would be the backup. Very simple to configure: after we get the master configured, we go to the backup, point it to the master, and they'll be synchronized. So, at that point, if the master was to crash, the backup would take over automatically for us and we wouldn't lose anything in terms of data or visibility of the application.

05:19
Travis: Software redundancy also helps us patch the operating system, because if I patch the operating system, I have to restart the machine. With a redundant pair, I can fail over from the master to the backup, patch my master; once that's done, the master will take back over, we can then patch the backup, and we're good to go. Another form of redundancy is hardware redundancy, which doesn't protect you against the OS patching, but it is something that a lot of customers are doing, especially in virtual environments where they can have clusters set up and easily have a VM that would run across an array of servers; so it's less licensing, but it doesn't protect you in all the cases.
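The master/backup takeover described here can be sketched as a simple heartbeat check: the backup watches for the master checking in and takes over when the heartbeat goes stale. This is an illustrative Python sketch only; Ignition's redundancy is configured in the Gateway rather than hand-coded, and the class name and timeout below are invented for illustration.

```python
import time

class RedundantPair:
    """Toy master/backup failover: the backup takes over when the
    master's heartbeat goes stale. Illustrative only."""

    def __init__(self, timeout=5.0):
        self.timeout = timeout                    # seconds before failover
        self.last_heartbeat = time.monotonic()
        self.active = "master"

    def heartbeat(self):
        # Called whenever the master checks in; the master reclaims control.
        self.last_heartbeat = time.monotonic()
        self.active = "master"

    def check(self, now=None):
        # Decide which node should be active, given the heartbeat age.
        now = time.monotonic() if now is None else now
        if now - self.last_heartbeat > self.timeout:
            self.active = "backup"                # failover: backup takes over
        return self.active
```

In a real deployment the two gateways also synchronize configuration, which is why the backup can take over without losing the application.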

05:58
Lauren: Now, what happens if I lose communication to a PLC from a central server?

06:04
Travis: Yeah, that's also a great question, right? Again, if I have one central server, I have to rely on my network and I have to have communication to those PLCs. As I said, there's got to be protection against the server crashing, as well as network outages. So, if I have a lot of hops to get to that device, that could be a lot of points of failure. We have stories of forklifts cutting the fiber from the main room to some other building, and if that happens, of course, we lose that data because we're no longer able to talk to the PLC. So, it can be really important for critical machines to move the connection, or the polling, closer to that device. In this particular case, I've got Ignition Edge right there next to the PLC, so I can talk to the PLCs locally. Again, there's a lot less risk of losing communication to a PLC that's right next to it, and I can do store-and-forward and bring that data back to our central system, so that way we never have any loss of data.
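The store-and-forward pattern is straightforward to sketch: queue records locally while the link to the central system is down, and only discard each record after a confirmed send. A minimal Python illustration, assuming an invented `sink` callable that raises on network failure (Ignition's built-in store-and-forward also persists the buffer to disk so records survive restarts):

```python
from collections import deque

class StoreAndForward:
    """Buffer records locally when the remote sink is unreachable,
    then flush them in order once the link comes back."""

    def __init__(self, sink, max_buffer=10_000):
        self.sink = sink                          # callable that raises on failure
        self.buffer = deque(maxlen=max_buffer)    # oldest records drop first when full

    def record(self, value):
        self.buffer.append(value)
        self.flush()

    def flush(self):
        while self.buffer:
            try:
                self.sink(self.buffer[0])
            except ConnectionError:
                return                            # link still down; keep buffering
            self.buffer.popleft()                 # discard only after a confirmed send
```

The key property is that records are delivered in order and none are dropped during an outage, which is exactly what keeps the central historian gap-free.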

07:03
Shay: Now with the edge-of-network functionality, we know there are lots of benefits to using MQTT. And we're seeing a lot of sensors that are starting to support that natively. So where does that fit into this picture?

07:13
Travis: Yeah, so in this particular case, there's a couple different ways we can bring that data from the Ignition Edge, up to the central Ignition server. So one of the ways is through our Gateway Network, which has services and we'll go through that in some more detail. But the other method is MQTT, where we can have all the PLC data being brought up, published up through MQTT, or we can utilize that for other applications besides Ignition. So if we take a look at this diagram here, that shows the same Ignition Edge that we have locally, so again, polling the PLCs locally, bringing the data back up through MQTT, but once we have the infrastructure in place, then that allows us to add additional new sensors, new equipment that does speak MQTT directly, and we can just plug it into our infrastructure. Especially if they have store-and-forward capabilities, as well, then we have it from our legacy devices, as well as our new devices, being able to leverage all of that in our application.
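Ignition's MQTT modules follow the Sparkplug B specification, which defines a fixed topic namespace so any subscriber can discover edge nodes and the devices behind them. A sketch of that topic structure, using the `spBv1.0` namespace from the Sparkplug specification (the function itself is invented for illustration; real payloads are protobuf-encoded metrics, not shown here):

```python
# Sparkplug B message types: node/device birth, death, data, and command.
SPARKPLUG_MESSAGE_TYPES = {
    "NBIRTH", "NDEATH", "DBIRTH", "DDEATH",
    "NDATA", "DDATA", "NCMD", "DCMD",
}

def sparkplug_topic(group_id, message_type, edge_node_id, device_id=None):
    """Build a Sparkplug B topic:
    spBv1.0/<group>/<message type>/<edge node>[/<device>]."""
    if message_type not in SPARKPLUG_MESSAGE_TYPES:
        raise ValueError(f"unknown message type: {message_type}")
    parts = ["spBv1.0", group_id, message_type, edge_node_id]
    if device_id is not None:
        parts.append(device_id)   # device-level messages carry a device ID
    return "/".join(parts)
```

Because the namespace is standardized, a new sensor that speaks Sparkplug natively can publish into the same broker and show up alongside the legacy PLC data with no custom integration.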

08:04
Lauren: Awesome. Now, what happens if I lose communication between my client and the central server?

08:09
Travis: Another great question, right? So if I'm out there on the plant floor and I have my client that is talking to my central server, we've fixed the issue with our data, since we'll have store-and-forward there locally, but now our client would be rendered useless; it has to connect to that central server. So in that scenario, we gotta have something local, as well, if we're worried about our network. So instead of having those clients talk directly to the central server, we can use Ignition Edge, in particular our Panel product, which can provide a local HMI. It's a low-cost, one-client HMI that we can have right on that machine, on that critical asset, to guarantee that visibility if we lose communication to our network. We still, of course, can open up clients anywhere from the central server. When the network is good, I can have a client open right there on the plant floor or in anybody's office, but this guarantees that on that machine, the operator can walk up to it at any given time, see what the process is doing, and control that process right there.

09:02
Shay: What about talking about lots of PLCs or lots of tags. So essentially, how do we get into scalability with Ignition?

09:09
Travis: Yeah, so that's a good question. Ignition's licensing is unlimited, so that gives you unlimited tags, screens, clients, device connections, projects, and more. So once we have that server in place, we can continue adding on to it. We can add more PLCs, we can add more people to look at that data. And that's all great, but we do run into hardware considerations when that happens. There's only so much we can run on a given piece of hardware. Now, if you're on a Raspberry Pi, obviously, it's gonna be a lot smaller than if we're on a high-grade server that's in our facility, so we do, at some point, have to consider the amount of data that's out there. And a good rule of thumb is, if you're over 100,000 tags, you might wanna consider looking at a more scaled-out architecture or utilizing more resources, or at least pay a lot more attention to these numbers.

09:54
Travis: So in that case, rather than having one central server that does everything, both the I/O and the front end, we could easily split that apart into two servers. We can move the modules that are responsible for the I/O to one side, and the modules that are responsible for the front end to the other side, as we're seeing here. So I've got these two servers, I/O here and front end here, and now we have dedicated resources for those two pieces, so we can scale them up easily and have more available on each of those sides: more clients on the front end, more tags on the back end. But again, at some point, we're still gonna get to a place where those servers might be overrun.

10:30
Lauren: Now, when you separate those two out, what happens to my cost?

10:34
Travis: Yeah, that's a great question. So if I have one server, it's gonna have all the modules on that server that I wanted to purchase, and that'd be one license from Inductive Automation. If I wanna split that into two, where I move half the modules onto one, half onto the other, there's no change in price. All we gotta do is provide two licenses in that case, so it's a free way of being able to utilize more resources.

10:54
Shay: Awesome. So can I add more I/O servers as my number of devices and tags is growing then?

11:00
Travis: Absolutely, that's the great thing about it: Once we have this scaled-out architecture started, we can easily horizontally scale each side of the fence, whether it's the tags on the I/O side, or it's the front-end side. So here I'm showing one I/O server that's connected to a certain set of PLCs. If I then want to add an additional I/O server, no problem. We can bring that into the mix and connect it to the same application server that we have, so we can easily scale those out. If I have millions of tags, I'm gonna wanna have multiple I/O servers out there to handle all those tags.

11:29
Shay: And then what if I have hundreds of people who want to see my client?

11:33
Travis: So as we scale out, we're not only scaling out tags, we're scaling out the front end, scaling out the people that wanna look at that data on those clients. In this particular case, I have one application server, and you could probably get around a couple hundred people looking at that data at the same time, but if I have thousands, we're gonna need to look at a different approach, which would be to simply have multiple front-end servers. And in this particular case, we can put them behind a load balancer, so everybody has one access point, which is the load balancer, an IP address or hostname, and then they can get access to clients, but we have the ability to have thousands, really as many as we want, behind the scenes.
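The load balancer's job here can be sketched as round-robin selection over the healthy front-end servers: each new client gets the next server that is up. A toy Python illustration (in practice you'd put a real load balancer such as nginx or HAProxy in front of the gateways; the class and server names below are invented):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy round-robin balancer: hand each new client the next
    healthy front-end server."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(self.servers)   # servers currently passing health checks
        self._rr = cycle(self.servers)

    def mark_down(self, server):
        self.healthy.discard(server)

    def mark_up(self, server):
        self.healthy.add(server)

    def next_server(self):
        # Scan at most one full rotation looking for a healthy server.
        for _ in range(len(self.servers)):
            s = next(self._rr)
            if s in self.healthy:
                return s
        raise RuntimeError("no healthy front-end servers")
```

This also shows why the front-end tier is inherently redundant: losing one server just means the balancer skips it and the remaining servers absorb the clients.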

12:08
Lauren: So, Travis, you talked about redundancy earlier, but I'm not seeing any redundancy on this particular architecture. How would we go about adding redundancy?

12:17
Travis: Yes, that's a great question. So really, anywhere we put Ignition, we could have a redundant backup for it. And so we do have to consider where we should have redundancy and where we shouldn't. It does, of course, increase the number of servers we have, and increases the cost. I left it out of these diagrams because it just makes the diagrams a little more complicated, but I could have redundancy here on the Ignition Edge side: I can have two servers out there locally, one that's a master, one that's a backup, no problem. I can also have it, of course, on the I/O side, and typically that's where people are doing it, on the I/O, whether it's local on the critical machines or it's on the central side. They usually put redundancy there 'cause you don't wanna lose data and you wanna make sure you're logging all of it.

13:00
Travis: On the front-end side, however, you may or may not want redundancy. It may be okay that if the server crashes, the application's down for a few minutes, no big deal. But it may not be, with some of our very critical applications. If you look at this diagram, I would place redundancy on the Ignition Edge side and on the I/O servers, but here we have these three front-end servers that are behind the load balancer... that actually already is redundant. It has high availability, so if I were to lose a front-end server, no big deal. All the clients would switch over to the remaining two, and I could have a lot more of these. So actually, in this scale-out, we do get redundancy automatically by having multiple front ends, but we do have to consider redundancy on everything else.

13:41
Shay: Why are front-end servers behind a load balancer treated differently than I/O servers in regards to redundancy?

13:47
Travis: The answer is actually pretty simple: it's because they have different use cases. The I/O servers are talking to a set of devices and they have tags, and that requires state; we have alarms that are happening there, long-running processes that are going on, and we only want one set of those, so it has to be redundant on the I/O side. On the front-end side, though, we can have as many of these as we want because it's stateless. It's just an application, and the application is getting the data from the I/O over the Gateway Network or from the database directly, so we have high availability here, and I can have hundreds of those servers serving up the same application at the end of the day.

14:21
Shay: So we have customers that have different OT networks from their IT networks, and in those cases we're often implementing a DMZ layer. What does that look like with Ignition?

14:31
Travis: Yeah, so in these diagrams that I've been showing, the clients that we have are typically opened on the OT side of the fence. Usually there's an OT network and then there's a business network, or IT network, and often they don't cross; there are firewalls that prevent those kinds of things. And so, it does limit where I can actually open up the client, because of the network that's out there. Typically I can open the clients anywhere on the OT side, but then the business wants to get access to the data; Ignition's licensing model is unlimited, so why not? But the network gets in the way. And so in this particular case, it can be important to add Ignition on the DMZ side.

15:09
Travis: So if you look at this network here, this diagram, I've got a DMZ where I can install an Ignition server with the visualization, so that can be Vision or Perspective. It's an additional server with an additional license, unfortunately, but we can connect that server to the OT server. And using Ignition's Gateway Network, we can have all the same data. We can have the same application that we have on that side now available to the business, so they can have lots of people out there getting access to it. And this is the most secure way of doing it. There are companies that will simply open up firewalls or allow port forwarding, things like that, to get access, and that's perfectly fine, as well; that does take advantage of Ignition's licensing model, but this approach allows for the best security possible.

15:51
Lauren: So we've been focusing on visibility at the site level. What happens if I want to have visibility across an enterprise?

16:00
Travis: Yeah, it's a great question. So all the architecture diagrams we've shown so far have had all those servers at that site, on-premise, so the site can handle that functionality locally. And it's true that I could put all that stuff at the corporate level and use Ignition's unlimited licensing model to talk to all the devices across all the different sites. It's just not done in practice because, traditionally, that traffic goes over the WAN connection, which could go down, right? We can't guarantee that connection from corporate to all the sites. So usually, we're talking about deploying a set of Ignition servers locally, but as your question suggests, we wanna get data visible centrally, as well. And so that's where, if we had a lot of different sites, we can bring it up to a corporate level for visibility of all that data, as well as management of all the Ignition systems we've got out there. It's easy to manage one Ignition installation, but if you've got hundreds of them, you've got a lot of things to back up, a lot of things to consider. So when you look at a full enterprise solution, you wanna make that part really simple. You want people on the corporate side to see data across all their locations, and to make it easy for IT to manage the system.

17:06
Travis: So here, if you look at this diagram, I've got two sites, and I could have many, many other sites in there. Each has its Ignition systems, whether it's the Edge products locally, whether it's a full Ignition server at that site or a scaled-out architecture at that site, whatever it may be... Each site could certainly be different from the others; you have to look at the requirements of that site. But we can then connect those sites up to a central Ignition Gateway, where we can look at the data. We can see all the live values of tags with our Gateway Network, we can see the history, we can pull history from those sites. We can easily mirror historical data from those sites to a centralized database, as well, so we can get visibility there at the corporate level. We can see all the alarms that are happening across all the sites, a lot of visibility there. But more importantly, we have the management: we can use Ignition's Enterprise Administration Module to centrally take backups of all those servers, to check the health and diagnostics of those servers, to manage licenses, to remotely upgrade those servers, and more. There are a lot of tasks that we can run with that, so it just makes it really easy to bring those up to a centralized system and to get that visibility and that management going.

18:17
Shay: So what happens if I have a DMZ between a site and that corporate layer?

18:21
Travis: Yeah, so that's a good question. A lot of people, again, with the DMZ, those layers, they want protections in there. The OT side would very much be that lower layer they'd wanna protect, and the business side would be the IT side, the higher layer, and they don't talk to each other directly, although they can through a DMZ, right? The business side cannot go to the OT side directly, and the OT side can't go to the business side directly, but they can both talk to a middle layer. And so, that's a very common approach, whether that's at the site level or between the site and corporate. It'd be really easy, in this particular case, to introduce Ignition in the DMZ that really is gonna act as a proxy between the site and the corporate location, so that all of the services and management I was mentioning can just funnel through that proxy. So we can still get access to that, and we get all the protections that are in place with a DMZ.

19:08
Lauren: Now, what if I want to leverage the cloud or even inject data into the cloud?

19:13
Travis: We can do that really at any layer, whether we do it at the site or at corporate. There are modules in Ignition that deal with getting data to the cloud. We can utilize MQTT, for sure, and we also have direct injectors for all the major cloud platforms, whether it's Azure, or AWS, or IBM Cloud, or Google Cloud. With any of those, we can take that data, stream it up to the cloud, and work with it. And so, we can just put data up there and utilize their services. But keep in mind that we could also deploy Ignition to the cloud. We can have a hybrid approach, where we have our centralized corporate system and management in the cloud and we have all of the servers on site, locally, so that we can guarantee that functionality right there at the site.

19:56
Lauren: As I go try and help customers architect systems, what sort of questions should I be asking?

20:01
Travis: A million questions. Unfortunately, we have to really probe the customer for these details. It really starts with first understanding their requirements. What sorts of applications are they trying to build? That would determine what Ignition modules we need. And we also have to look at the network in a lot more detail, typically getting involved with IT; we gotta understand what the layers look like. Are there DMZs in here? Are there firewall considerations that IT wants, or security concerns? And we have to know: are there any points in that network where we could have the link cut? 'Cause that, of course, is one of the biggest considerations of architecting: network failures.

20:40
Travis: If we have that, we've gotta put more robustness in, which means Edge products closer to the PLCs, and redundancy, those kinds of things, in place. So we really have to understand that network. We also have to understand not only the functions they want, but how big is this? What kind of devices are we talking to? What type of devices are they? How many tags are we dealing with? How many clients, how many people wanna look at that data? What does it look like today, and what's it gonna look like five years from now? Because I don't wanna put in a system today that's gonna hinder me five years from now when I wanna scale it and make it bigger. So there's a lot of questions: not only what they need now, but what they need in the future, and we sort of architect around that, because there are considerations, especially in how we build projects, for those kinds of things. So unfortunately, there's a lot of questions, but you get used to them after a while and it gets a little easier as you go forward.

21:31
Lauren: Well, Travis, we're so glad we got to sit down with you today, thank you so much.

21:35
Travis: Thanks for having me.

21:36
Lauren: Any final takeaways for our viewers?

21:39
Travis: Yeah, so there's a lot of different ways we can architect systems, and it really comes down to having fundamental knowledge of Ignition, the modules, the features that are there. Once we have that foundation, we can then put an architecture in place that protects us from things like network loss. So we really gotta look at the network and communication issues, and also the sheer size: how big are these systems gonna get? Are they gonna grow? Those are two big things that will help us architect it correctly from the beginning.