Context is the Key to Unlocking Data
Inductive Conversations Podcast
66-minute video / 58-minute read

Daniel Voit and Keith Weerts of Blentech join Paul Scott to discuss the importance of context when it comes to data. They dive into how Daniel and Keith started on this path to unlocking the power of food production data, how Ignition played a crucial role in their development, and how companies can fully utilize their equipment in a short amount of time.
Bios:
Daniel Voit
Daniel Voit is the CEO for Blentech Corporation – a world-leading supplier of custom, automated and engineered food processing equipment solutions for the prepared foods, ready-meal, and restaurant commissary businesses. He has also served as President, COO, VP of Engineering, Technical Services Manager and Applications Engineer. He has a broad-based understanding of all aspects of manufacturing and food processing. Under Daniel’s leadership, Blentech has expanded its product line to include advanced automation solutions for industrial food production applications, such as the Cooker Cloud remote reporting service and the recipe analyzer system.
Prior to joining Blentech, Daniel held positions in R&D with Frito Lay and Quality Assurance with Norpac Foods, and worked internationally consulting in Central America. While a graduate student at UC Davis, he worked on a NASA-funded study to design and build a multi-purpose fruit and vegetable processor for a manned mission to Mars.
Within the industry, Daniel is an advocate for Food Science as chairman of the UC Davis Bio and Agricultural Engineering Leadership Board, past chairman of the UC Davis Food Science Leadership Board and past chair of the Prepared Foods Council of FPSA.
Daniel holds a MS degree in Food Processing from UC Davis and a BS degree in Food Science from Oregon State University.
Keith Weerts
Keith Weerts is the Chief Technical Officer for Blentech Corporation – a world-leading supplier of custom, automated and engineered food processing equipment solutions for the prepared foods, ready-meal, and restaurant commissary businesses. Early in his career, Keith worked as a process engineer and a manager working for multinational chemical companies. He regularly interacted with the research staff to design processes, products and applications for sale to the Food, Pharmaceutical, and Personal Care industries.
Later as the Business Development Manager for Cognis, Keith worked with 47 Ph.D. researchers to turn their ideas into business opportunities. Major accomplishments included systems for remediating mines and recovery of copper from printed circuit board manufacturers.
For the past 25 years, Keith has been focused on automating and collecting data from systems to guide business decisions. As a result, he has developed programming skills including SQL, Python, Jython, and Business Intelligence software.
Transcription:
0:00:10
Paul: Hello and welcome to Inductive Conversations. My name is Paul Scott, joining me today are two great guests from Blentech. I have Daniel Voit, who serves as the CEO, and Keith Weerts, who's the Chief Technical Officer. Hey guys, how are you doing?
0:00:23
Daniel & Keith: Doing great. Thanks for having us.
0:00:26
Paul: Alright, let's start the conversation with you two. Daniel, could you please give us a little bit of background on yourself, and maybe tell us a little bit about what you do?
0:00:33
Daniel: Great. Well, you've already given my name, it's Daniel Voit. I'm CEO here at Blentech. Blentech, we design, engineer, and manufacture industrial food production equipment. It's big stuff, things that make 5,000-10,000 pounds per hour, thousand-pound batches, machines making foods for restaurants, school lunches, foods you find in the stores all around the world. I myself, I'm a food scientist and a food engineer by background. I've always been interested in cooking, I've always been interested in feeding people, and this is a great way of bringing my interests in technology as well as food to add value to the world.
0:01:19
Paul: That's great. Thanks for coming, I appreciate it. Keith, same thing, you want to go ahead and give us a little intro on you.
0:01:23
Keith: Certainly. I'm a chemical engineer by background, and I've been in the industry for a long time, I didn't get into this space by being a kid. So anyhow, I got into the development of software for integration back in the '90s, and I sort of grew up in the times when your leader [Steve Hechtman] was in the industry, developing and moving into Ignition. And actually what I did is, as an integrator, I worked with Blentech a lot, and a number of other companies, and I came in contact with Ignition in 2010, and fell in love with it. And so what happened is I developed my chops on Ignition during that time period, and then Dan and I got together in about 2017, and we recognized an opportunity, and that's what we're gonna talk to you a little bit about today.
0:02:20
Daniel: Well, in fact, Keith and I also go back to about the year 2000, which is when we did our first project together. We automated and developed a continuous stir-fried egg fried rice line that we put in in the UK. It was one of my first projects in the industry, Keith had been at it for a bit longer, but it actually kind of serves as a foundation: we know what it's like to automate equipment with the technology that existed before what we're talking about here today in your products.
0:02:58
Paul: Alright, cool, so we got a background on you two. Can you talk a little bit about Blentech, maybe what the company does? What it is all about?
0:03:06
Daniel: Sure, absolutely. Well, we were founded in 1986 by the inventor of the mechanical grape harvester; in fact, it's why we're in Sonoma County. Originally, the company was gonna make winery equipment. Darrell, the founder, hired a number of folks and they said, "Hey, Darrell, not everybody drinks, but everybody eats." And he said, "That's a good point." And so we got into industrial mixing and material handling equipment, but we got our break by moving into cooking equipment. We found the opportunity to cook products like meats at scale, right around the time in the '90s when people were becoming quite a bit more aware of the risks of food safety, the concerns that can happen when things like E. coli get into industrial meat and people get sick. Cooking processes in food are generally considered a critical control point, and as a result, it's an opportunity to help make food safer.
0:04:13
Daniel: As a company, what we do these days is we design, manufacture, automate, integrate, and start up these industrial systems. We mostly focus on cooking and cooling, so it tends to be the types of foods that you would make on the stove top, not so much the things that you would make in the oven, or on the grill, so it's things like fillings, and soups and sauces. But foods like that exist in every culture, and folks need those things all over, so we're end-to-end providing those solutions, whether somebody wants just an individual machine or a complete line. That's what we do.
0:04:57
Paul: Awesome, well, thanks for the background. Appreciate it. Now, when we were talking to you guys a little bit earlier before the podcast here, I heard there's this great story about flying cross country in airplanes, and data... If you know what I'm talking about here, could you give me... Give the audience a little bit of a background on that story and what's going on there and...
0:05:11
Daniel: Sure, sure. And it actually goes back to that story that I mentioned, that Keith and I had done the system in Northern England in about 2000. It was a rice cooker, and back then we had email and things, actually a lot of problem solving happened with faxes back and forth as well, but it was pretty common for you to get a phone call or a fax, for that matter, saying, "Look, there's a problem with the way the machine is running, what are we gonna do? How do we fix it?" In this case, it was, "Hey, the rice isn't cooking properly. We need it running, our customer needs these rice orders filled, we gotta get it fixed. Listen, you got four hours to fix it remotely, and... " Obviously we didn't have remote connections, right, "but you got four hours to tell us how to fix it. If not, you're getting on an airplane."
0:06:05
Daniel: Well, you know... We didn't figure out how to fix it in four hours. We'd asked questions saying, "Hey, what's this reading? What's that reading? Tell us what you're seeing." So I get on an airplane, I fly out there, I walk in there, and I see that the temperature control is not under control because there's simply a cap that's missing on the back of the machine. I put it on, tighten the clamp, sit down, and 30 seconds later the rice is coming out fine. I sat on a bucket there and watched that thing, after I'd just flown halfway across the world to put a clamp on a cooker, and thought to myself, "This is really not how this should go."
0:06:46
Daniel: Keith and I have talked about this a lot. I'm sure I called you up and complained about it, Keith, in the moment, but I personally, I've been involved in about 700 projects or so, and we don't have problems on most of them, but every once in a while something goes wrong, and it would always be the same. I'd hop on an airplane on short notice with a stopwatch, a temperature probe, and a piece of paper. I would almost never, if ever, find a problem with the core engineering; it was usually how it was being used. And basic data could have solved that. And it's very hard not to think, "We need to find a way to identify these issues without creating this drag, this delay, this cost for everyone."
0:07:32
Paul: Yeah, I imagine the actual having to fly slows down things a bit, or maybe kinda ties you up a little bit. Would you say flying, or travel in general, kinda has a pretty large impact on what you guys do, or maybe even within your industry?
0:07:44
Daniel: Well, I mean look, it's an annoyance, but that's not really the issue. I don't mind hopping on an airplane and getting where you need to go, but the real issue is the economics of it, both for the businesses, and not even so much for us, but for our customers, as well as... Look, the global impact, if you wanna look at it from that perspective as well. If your typical machine is running 5,000-10,000 pounds an hour, and those products are selling at $2-$3 a pound, every hour that it's not performing, someone's losing $10,000-$30,000 in potential revenue. At best an airplane, even if it's in the United States, is gonna get you there in a day, and then you consider all the travel costs. These types of costs are baked into the overheads of organizations, and we all take it for granted that this is just the cost of doing business, but it doesn't have to be... And that omits the environmental impact that you have from having redundant industrial systems, or the cost of the fuel that brings you to and from it. It's just a cost that we all shouldn't have to bear.
0:08:55
Paul: So you kind of mentioned a little earlier that just being aware of how people are doing things or what they're doing, sometimes that's sort of the core problem, or maybe something that needs to be sort of identified or monitored. Can you speak a little bit more to how contextual data can make an impact?
0:09:10
Daniel: Yeah, I think that's it. Because when you're on the phone back in the year 2000, trying to ask why the rice isn't right, you're asking spot questions, and you're trying to figure out, "What could it be?" You're trying to diagnose a patient without seeing them. You don't have the inputs to solve it. Now look, a lot of times you could solve it, but most of the time, it took longer than it should have, and that's where those costs came in. The context is, what happens when you do arrive, when you do fly there, the first thing you say is, "Can you walk me through what was happening?" The next thing you ask is, "What changed?" It's always the same questions. And the thing is, that context is subtracted from the data set, so even if you connect a system and have it streaming, you have to know the context of "Who made what decision? What were you trying to do in that moment?" to determine what a resolution or a change might be to improve something in some way. Does that make sense?
0:10:23
Paul: It does, yeah.
0:10:24
Daniel: Yeah.
0:10:25
Paul: So I'm gonna break the fourth wall a little bit. The next question we had on the outline sent to us was, what inspired you to see that context is important from data? I kind of feel like you touched upon that already. Did you have more you wanted to elaborate on there, Dan, about that topic?
0:10:38
Daniel: Yeah, right on. So the way this kinda came to be, and Keith, you mentioned this. Keith was standing actually here in this office, and I was saying, "You know... Hey, it's really frustrating that we still haven't found a way to have the information that solves all these problems quicker. In today's day and age, why can't we get connected to our machines remotely?" And Keith said... Well, Keith, what did you say?
0:11:10
Keith: I said, "Well, we can... It's really not a problem, it's just getting people to open up."
0:11:16
Daniel: Right, and so I'm a food scientist and food engineer, so I don't know how to do that. I do know what to do with the data once I have it, but I don't necessarily know that aspect of it. So we set up something that we called the BRS. Is that what the original term meant?
0:11:33
Keith: Blentech Reporting System.
0:11:36
Daniel: And the Remote Monitoring System, something like that. And we were so excited. We got all this data coming across, and we're like, "This is gonna solve everything." It didn't. In fact, I think it was neat, a few customers initially let us get connected. We set up that quick framework, but we would get the calls with questions, and they would say, "We're having this problem, we need more throughput, or we need this... There's a problem." And now we're sifting through mountains of data, and it just didn't have the context. As an aside here, in another aspect of my life, I used to own a CrossFit gym. I did a lot of obstacle course racing, a lot of Spartan races, and I spent a lot of time staring at my fitness tracker. I have a Garmin, and... So I set it up to program my runs, and this morning I did a progression run where every mile it beeped and said, you know, "Time to step up the pace."
0:12:41
Daniel: So it's kind of like automating my workouts. Well, this thing connects to a platform called Strava. Strava pulls that data up and it overlays context. In that case, it overlays the GPS data, it overlays segmented data, and I would sit there and I would look at my fitness activities, and I'd look at my heart rate or my speed, and I would think to myself, "That looks a lot like a temperature profile on cooking food. How come we don't have data in a context like this?" Obviously not the same context, but relevant to what matters in industrial food production.
0:13:23
Paul: Awesome, alright, so you guys have two platforms, and we'll take a look at some of them later today, you have ARTIS and you have AutoChef 2.0. What inspired you to make those? And how did Ignition help with that?
0:13:37
Keith: Okay, so what happened is, like Dan said, we needed contextualization. And we had an AutoChef 1 from 20 years ago; it was written in PanelView, and it was sluggish, to be exact, and it also needed a lot of work. So it was time to come up with something new. When we were looking at this, we said, given the need for contextualized data, you can't give operators the option of running recipes, you need them to run recipes, and once you start enforcing recipes, the data becomes contextualized by the recipe step you're in, and now you have a lot more information. So that was the basic premise for doing AutoChef 2.0. We designed it around ISA-88, the batch control standard, which was ideal, and that's how that system was built. And at the same time, now that we had contextualized data, ARTIS came into existence as a way of collecting that data and putting it in context in advanced tool form.
0:14:45
Keith: In other words, on the operator floor, the HMI is not the place to put the analytics that you're trying to do. The operator is just trying to make a batch, so that's what they should be doing, but someone else should be looking at the data and using all these powerful tools that exist to actually help them improve their productivity. So that was the germination point for us.
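[Editor's note: the contextualization Keith describes can be sketched in a few lines of Python. This is a hypothetical illustration, not Blentech's actual implementation; every class, field, and step name here is invented. The idea is simply that each telemetry sample gets stamped with the recipe step that was active when it was taken.]

```python
from dataclasses import dataclass, field

@dataclass
class ContextualReading:
    """One telemetry sample, stamped with its recipe context."""
    recipe: str
    batch_id: str
    step: str      # the ISA-88-style phase/step active at sample time
    sensor: str
    value: float

@dataclass
class BatchRecorder:
    """Attaches the currently running recipe step to every sample."""
    recipe: str
    batch_id: str
    current_step: str = "idle"
    log: list = field(default_factory=list)

    def begin_step(self, step: str) -> None:
        self.current_step = step

    def record(self, sensor: str, value: float) -> None:
        self.log.append(ContextualReading(
            self.recipe, self.batch_id, self.current_step, sensor, value))

# The same jacket temperature means different things in different
# steps -- that is the context the running recipe supplies.
rec = BatchRecorder(recipe="mac_and_cheese_v3", batch_id="B-1042")
rec.begin_step("heat_to_185F")
rec.record("jacket_temp_F", 181.5)
rec.begin_step("add_cheese")
rec.record("jacket_temp_F", 178.0)
```

With the step riding along on every reading, an analyst looking at the data later never has to guess what the machine was supposed to be doing at that moment.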
0:15:11
Paul: I see, awesome... I'm glad you explained AutoChef 2.0, I was gonna ask, I was curious like, "I wonder what the first iteration was." So that's great, thank you. So, okay, so you have ARTIS, you have AutoChef 2.0, how do those two interact with each other? How do they integrate with each other?
0:15:26
Keith: Well, okay, so taking Dan's example of the fitness tracker, this is perfect, because as he describes it, I can tell you what you have: the first part is a recipe. So you have to have a good system to build a recipe. Dan builds a race on his Strava, he builds what he's going to be doing, and we build a recipe for how we're going to cook mac and cheese. Then we run that recipe like he runs his race, and you see the mac and cheese going through its various steps, and every time a step occurs, something is happening: we're instructing the operator to do stuff, "add this, do this... " or the machine itself is doing things.
0:16:07
Keith: But because we know what step we're in, we know what's supposed to be happening. And on top of that, if the machine is equipped with load cells, or other telemetry, we can see more of what is going on, so we have confirmation that things are happening in the correct time period, and there is no divergence from the race. So if Dan stopped to take a break, we'd be able to see that; if an operator doesn't add the ingredient at the right time, we'll see it in the weight, and we'll know that this was not done at the right time, and we can actually catch that and report it and say, "This wasn't done at the right time," or "This step took too long."
0:16:47
Keith: We'll get into this later when we actually do the video portion of this demonstration, where you can see when the operator is taking too long to do any given step, 'cause we're collecting statistical data on every step, every time they run this recipe. Just like Dan does the sprint up the hill, we can see the days that he has a good day and the days that he has a bad day, and we can actually alarm the operator, or their supervisor, and say, "Hey, they're having a bad day, and they're going a little slow on the steps," so that the supervisor can head out to the plant and say, "What can I do to help you to improve the throughput today?" And it may be a multitude of things. We're not worried about that; we're worried about helping them identify the excursion in the process.
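[Editor's note: the per-step statistics Keith mentions amount to a simple excursion check. The sketch below is a hypothetical illustration in Python — the threshold, step names, and durations are made up — assuming step durations from past batches are already on hand.]

```python
import statistics

def find_slow_steps(history, current, threshold=2.0):
    """Flag steps in the current batch whose duration is more than
    `threshold` standard deviations above the historical mean.

    history: {step_name: [durations from past batches, in seconds]}
    current: {step_name: duration observed in this batch, in seconds}
    """
    excursions = []
    for step, duration in current.items():
        past = history.get(step, [])
        if len(past) < 2:
            continue  # not enough history to judge this step
        mean = statistics.mean(past)
        stdev = statistics.stdev(past)
        if stdev > 0 and (duration - mean) / stdev > threshold:
            excursions.append(step)
    return excursions

history = {"add_cheese": [60, 65, 58, 62, 61], "heat_to_185F": [300, 310, 295]}
current = {"add_cheese": 95, "heat_to_185F": 302}
print(find_slow_steps(history, current))  # only "add_cheese" is flagged
```

A check like this is what lets the system alarm a supervisor about one slow step rather than dumping a whole batch trace on them.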
0:17:35
Paul: Okay, well, hey, can you walk us through a little bit how you'd... Say you have a startup, new company, a new organization, whatever, how can you get them from that sort of initial state to up 100% capacity within a relatively short amount of time? Can you kinda walk us through what that looks like?
0:17:51
Keith: Yeah, I mean, Dan and I have had a lot of conversations about this recently. We have a few people in our organization who are saying, "Hey, it takes too long to start up our software." In essence, it takes about a week to start up our software, to really train people on how to use it. And when you compare it to a really basic software, which is turn on the heat, turn on the agitator, a regular operator can learn how to do that in a day, and they say, "Well, we can start using this software in a day, yet it takes us a week to start using your software." And the difference is the complexity. Yes, our software is more complex, but the major difference is, if you run a machine without taking advantage of the automation, you're probably never gonna get to more than 50% of the capacity of that machine.
0:18:41
Keith: What our software does is, yes, it takes a week to get to 50%, but we keep on growing in efficiency, and it keeps on using all the data that's being collected and guiding them to say, "You can be better. You can do this faster, if you do these things." Things we talked about are our batch comparison tool, which compares two batches side by side to show you differences, or the statistical analysis we do of every step, where we can show where there's a large divergence in proficiency. We take all that information, and slowly over a three-to-six-month period, we can build that machine up to 100% of capacity. Sometimes it happens in a month; it depends on the customer, everybody is different. But we can identify where the losses in productivity are occurring. We used to think that how fast we heated our machines was critical, and it's still important, but what we find is there are a lot of other things that are actually causing batches to be made slowly, and I think Dan could probably chime in on this too.
0:19:46
Daniel: Yeah, I could. And I'd also like to say, on one end of the spectrum are the basic machines where you can start and stop things, and it's pretty easy to turn those things on and get them rolling in a day or two. On the other end of the spectrum is a trend that I actually think is reducing in frequency, which is the large custom-built architectures where somebody is building their own custom recipe engine, which... I've seen people do it, and I've seen them be successful over time, but I've also seen some of those systems take months if not years to get up to speed as they work through the bugs of the 0.0 rev, get up to the 0.2 or 2.0.
0:20:38
Daniel: So what we're recognizing and what we've done is that there is a standardizable approach for recipe control that is modular and deployable, that gives you the advantage that you could theoretically get from a custom architecture, but you can get it in a fraction of the time. So it's kind of the best of both worlds in that way. But you know... Look, absolutely, Keith mentioned the speed at which things heated up. When I started in the industry, 23, 24 years ago, I started in the technical sales arena, and they said, "Hey, Dan, we're gonna go to trade shows, we're gonna talk to people about the value of our machines." And I said, "Great, let's do it. What are the values of the machines?"
0:21:21
Daniel: They said, "Well, they cook 35-40% faster than a kettle." That's great. That means they can cook more food. So look, I diligently went to the shows, and look, it's true, it can absolutely heat up 30% to 40% faster than a kettle. It's fantastic. But the thing is, in a lot of foods, that doesn't matter, because of the speed at which you're adding things, the sampling for quality control... Now, look, I don't wanna walk away from the extra five or six minutes of production you can get from that, but a well-architected recipe control system can get you just as much. So at Blentech, we believe that we're in the business of providing capacity. Our goal is to provide access to the equipment that creates high-quality foods at high speeds. If we can find a solution that includes software on legacy equipment and gets them more production so they don't have to buy another cooker, that's a win. That's a win from our perspective.
0:22:33
Paul: Yeah, thanks. Alright, let's change things up a little bit. How have you two, since you have these platforms you're trying to provide to your customers, how do you approach security? And has Ignition helped with that?
0:22:47
Keith: Well, security is paramount; it's probably the thing I think about more every day than anything else I do, and Ignition has been really key to that. There are various levels of Ignition, and I'm not going to delve too deep into our platform, because one of the keys to good security is to not talk about what you're doing too much, but I can tell you that the identity provider platform that's now in Perspective is extremely important to us. We use key fobs, either Google Titans or YubiKeys, to sign in; nobody gets on our platforms without it. Our customers, you know, they don't need that, they have their own logins, but for those of us who are developers, who have basically the keys to the data, we need to be able to make sure it stays extremely, extremely safe. It cannot be touched by anybody. So I think we really have that nailed. By the way, I'm really excited...
0:23:52
Keith: It was a good plug. I think it's 8.1.16 that just came out. Was that the new one that just came out? The additional login that you have in there now that you can have a supervisor authenticate a step... I'm excited about that, that's gonna roll right into our equipment because it's just a nicer way of doing it that we didn't have before, so I'm looking forward to putting that in.
0:24:19
Paul: Yeah. If I can nerd out with you a little bit, I'm excited about that too. When I saw that feature, I was like, "This is great, we've been wanting something like this in Perspective for a long time." I'm glad to hear that you're excited about it as well. Yeah, I'm sure it's gonna help out quite a bit on the security front for sure.
0:24:32
Keith: Our team... My other team members got back to me right away: "Did you see this?" So yeah, we are pumped about that. Yeah, security, Ignition has been really helpful in that area, and I just couldn't imagine... We started out doing Vision, and we were trying to do Vision in the cloud, and of course you can do that, but it's just not as secure. I couldn't secure it. And when Perspective came out, I talked to Dan and said, "I really wanna commit to this on our cloud-based platform, because we have to have this level of security that's going to come with it, and a way of tying it down." So that was really important for us, when Perspective came out, to be able to shift to that. And the other thing, in the world of security, is MQTT. MQTT wasn't really ready for prime time on Ignition five years ago, and so we didn't actually go with it for a long time, but we have been doing more and more of it, and it is in much better shape, and very easy to use now, so that is becoming the de facto method for handling a lot of our data transmission.
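[Editor's note: as an illustration of the kind of contextualized message that might travel over MQTT, here is a hypothetical topic/payload convention in Python. The topic layout and field names are invented for this sketch — they are not Blentech's or Ignition's actual scheme — and the publish step itself (via an MQTT client library) is omitted.]

```python
import json

def make_mqtt_message(site, machine, batch_id, step, readings):
    """Build a topic and JSON payload for one contextualized sample.

    Illustrative topic layout: blentech/<site>/<machine>/batch
    The context fields ride along with every reading, so a subscriber
    never has to guess which recipe step produced the numbers.
    """
    topic = f"blentech/{site}/{machine}/batch"
    payload = json.dumps({
        "batch_id": batch_id,
        "step": step,
        "readings": readings,   # e.g. {"jacket_temp_F": 181.5}
    }, sort_keys=True)
    return topic, payload

topic, payload = make_mqtt_message(
    "santa_rosa", "cooker_3", "B-1042", "heat_to_185F",
    {"jacket_temp_F": 181.5})
```

Keeping the batch and step identifiers inside every message is what makes a report-by-exception stream like MQTT's useful downstream: each sample is self-describing.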
0:25:47
Paul: That's awesome. I'm glad to hear it. That's great. Alright, so what is Blentech's business approach? And what tools have you developed to achieve those goals?
0:25:56
Daniel: I'm trying to figure out how to attack this question directly, 'cause there's a lot of ways we approach the market. But we work with the customers from... They have this initial inquiry where they'll call up and they'll say, "Hey look, I wanna make several thousand pounds per hour of some product." It could be, "Hey, I've got a special mac and cheese, and it's a really important recipe, we've got this loyal base, but we need to make a lot more of it, how are we gonna do it?" Or it could be somebody who wants to bring something new to the market. We work with them from the beginning to actually conceptualize the system, but then also do testing. Here in Santa Rosa, we have a test and innovation center where we have demonstration equipment permanently installed, connected with these software solutions, and we actually use the software as part of the development to define and develop those processes, run the testing, and automatically create the test reports that give us all the data we need for scale-up.
0:27:02
Daniel: So at the core of this is, people need to see it; they get peace of mind through proof of concept. But then that data we get at the testing, we need to be able to, as engineers, help them use that information, or use that information on their behalf, to scale that equipment up and figure out what the throughputs are gonna be at full scale, so that their business plan comes to fruition with the appropriate cash flows, and the appropriate time frame, and so on and so forth.
0:27:32
Daniel: So from there on out, we get involved in actual system engineering, design of the equipment, the manufacture of the equipment, and then we physically go on-site in most cases, and start it up and train them. And then our equipment is in use for, usually, decades. And so we're providing support and parts... Some of the equipment has almost no parts at all that need to wear, but we're there for the life of that equipment, keeping it rolling. So yeah... That's kind of the approach: help them out from the beginning, all the way through that journey. And it's worked out real well.
0:28:14
Paul: That's great. Kind of keeping in theme with that, can you talk a little bit about the potential growth from a start-up to a mature manufacturing facility, and how different customers are served?
0:28:25
Daniel: Sure. Well, I think that the architecture of what we're building with AutoChef 2 and ARTIS is probably the angle to talk about that. I'm gonna talk a little bit about how those businesses tend to grow and evolve and what they have to do to handle the market, and then Keith, you could talk about how the architecture we have services that. It's pretty common for folks to introduce some new product in the market, and of course, a lot of new products don't make it in the market. I mean, there's a lot of great product developers out there, and I've heard different statistics ranging from 50% to 90% of new product launches not being there within a couple of years, but the resilient organizations are adaptable and flexible, and what you tend to see is a few of their SKUs do make it through.
0:29:13
Daniel: First they get that regional launch, then they get that national launch, and then it grows in scale with multiple outlets selling their product, and it moves from one machine to sometimes 18 or 20 machines. And during that period of time, they're also changing their formulations, they're adapting to the tastes of their customer base, because, hey, we like to try new foods. And when we're walking through the stores and we see something a little bit different, we say, "Hey, I'd like to give that a try." Not a lot of us eat the same thing every day all of our lives. So it's about flexibility and scalability. And I believe we built an architecture that really allows folks to have the best of both of those worlds. Keith, can you elaborate a little bit upon that structure?
0:30:03
Keith: Yes. There are actually a few parts to that. First of all, it goes back to having a good recipe builder, so the recipe builder needs to be flexible to handle all the variations that a customer might wanna do. Obviously, within the constraints of the machine; the machine can only do certain things. But of course they can always ask us to add on features, and we can add on features. The recipe is what keeps it all steady, but you need to be able to change recipes, you need to be able to say, "Oh, I wanna do a different recipe today to try out this." And this is where you get into development, and we do a lot of help with our customers on developing their recipes to help them produce the product that they want. The idea is, you always wanna be able to track your recipes, and so one of the features we built into ARTIS was, as you're building recipes for applications and for testing, we keep an archive of every recipe that's ever been built, so that we can go back in time and find out who changed what when, because that is the bane of many a company, they don't know...
0:31:08
Keith: Well, first of all, if you're not following a recipe, you don't know what they're doing, and if you are following a recipe, you don't know who wrote the recipe, or when it got edited. And so what we try to do is build all that into the system as a tracking of recipes. Now, one of the things that Dan talked about is, these customers have a tendency to grow, and if they get a winning formula for a recipe, they might wanna make this product in multiple locations, and so then the question becomes, can you share that recipe among the locations? Which is another capability that we have, so that you can move recipes between locations as needed. That's sort of a more special application that isn't used by a lot of people, but it's something that I think we're gonna see a lot more of as time goes by: the larger companies recognizing the need to be able to move recipes between locations.
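The append-only recipe archive Keith describes, where every edit is kept so you can always answer "who changed what, when," can be sketched like this. This is a minimal Python illustration; the class and field names are ours, not the actual ARTIS schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List

@dataclass
class RecipeVersion:
    """One immutable snapshot of a recipe: what it said, who edited it, when."""
    steps: List[str]
    edited_by: str
    edited_at: datetime

class RecipeArchive:
    """Append-only history: saves never overwrite, so 'who changed what, when'
    is always answerable for any recipe."""

    def __init__(self) -> None:
        self._history: Dict[str, List[RecipeVersion]] = {}

    def save(self, name: str, steps: List[str], user: str) -> None:
        # Every save appends a fresh snapshot; nothing is ever edited in place.
        self._history.setdefault(name, []).append(
            RecipeVersion(list(steps), user, datetime.now()))

    def latest(self, name: str) -> RecipeVersion:
        return self._history[name][-1]

    def audit_trail(self, name: str) -> List[RecipeVersion]:
        return list(self._history[name])
```

The key design choice is that `save` never overwrites: each edit appends a new immutable snapshot, so the audit trail is the data structure itself.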
0:31:57
Keith: But even just at a single location, we might have a customer who has four machines. You build one recipe for all of them, and then you just move it between the machines as necessary. You can say, "Today we're gonna run this recipe," and then the operators on all four machines can be pulling off of that recipe and using it simultaneously, not in phase obviously, but they can all take advantage of it. Plus, we give them the ability, at the end of the day, if you're running out of an ingredient, to scale it down: "I only have this much material left, can we make 70% of a batch?" Tell us you can only make 70% of the batch, and we will adjust it to allow you to make just that final batch of the day, so that you don't waste ingredients at the end of the day.
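The end-of-day batch scaling Keith mentions, making a 70% batch when ingredients run short, amounts to multiplying every quantity in the recipe by one factor. A hedged sketch; the function and ingredient names are illustrative, not Blentech's:

```python
def scale_batch(ingredients, factor):
    """Scale every ingredient quantity by `factor` (e.g. 0.7 for a 70% batch).

    ingredients: dict of ingredient name -> quantity (any consistent unit).
    """
    if not 0 < factor <= 1.0:
        raise ValueError("factor must be in (0, 1]")
    # Round to keep the scaled weights sensible for an operator display.
    return {name: round(qty * factor, 2) for name, qty in ingredients.items()}
```

In practice a real system would also rescale setpoints that depend on fill level, but the recipe quantities themselves scale linearly like this.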
0:32:52
Paul: Okay, so you guys have been doing this for a while now. Can you tell me a little bit about the greatest challenges that you face and how you approach them or overcome them?
0:33:00
Daniel: Yeah, I think... Do you mind if I field this one, Keith?
0:33:03
Keith: Go ahead. Well, I think we'll both be talking about this one.
0:33:06
Daniel: Okay. We both get to be heard on this one. I think that... Here's the truth, and I think this is true in all industries, but it's super true in the food industry. You can develop technologies faster than you can deploy them because of the entrenchment of "existing standards".
0:33:25
Daniel: And so we quickly realized after rolling out this, the BRS system, and then the contextualized approach, "We can do this. And we can do some pretty amazing stuff." And our development roadmap has some pretty sophisticated things in it that we've already experimented with. For example, once you have a recipe partitioned in the way we've set it up, you know which steps are operator-driven and which ones are machine-driven, so you can immediately carve out the relative effect of the operators from the relative effect of the equipment. You can also compare similar categorical steps across different formulations, and compare steps across multiple different sites. And it's really not uncommon to get a phone call asking, "With the same piece of equipment, how come I'm getting a different result in Facility A versus Facility B?" Well, the contextual data, if they're running on the same framework, can provide that.
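Once steps are partitioned into operator-driven and machine-driven, separating "the relative effect of the operators from the relative effect of the equipment" becomes a simple grouped statistic. A rough sketch of the idea, not Blentech's actual implementation:

```python
from collections import defaultdict
from statistics import mean, pstdev

def variability_by_driver(step_records):
    """step_records: list of (step_name, driver, duration_min) tuples,
    where driver is 'operator' or 'machine'.

    Returns each driver's coefficient of variation of step durations,
    so you can see which side of the process contributes more
    inconsistency to total batch time.
    """
    durations = defaultdict(list)
    for _step, driver, minutes in step_records:
        durations[driver].append(minutes)
    return {driver: pstdev(mins) / mean(mins)
            for driver, mins in durations.items() if mean(mins) > 0}
```

The same grouping could be done by formulation or by site, which is exactly the Facility A versus Facility B comparison Daniel describes.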
0:34:24
Daniel: I think that there's a confusion that folks have about legacy branded terminology, to say, "Hey, I always use brand X components, or I always use brand Y components," and believe that that's driving standardization in some meaningful way, which it isn't. And so our focus, really as an organization, is making the technology accessible and deployable, to make sure that we get this IoT-empowered cooking technology into the hands of people as a priority, rather than trying to make lots of new bells and whistles, which we've already begun to make, but getting it in use today will already be so impactful. That's the challenge, getting over that legacy hurdle. Keith, you wanna elaborate on that?
0:35:19
Keith: Yeah, there's... I have two issues. One, the ongoing issue, is the adoption curve on new technology. Of course, we have to go through the years that it takes to get people to adopt a technology, but you're gonna run into people selling fear all the time. And the fear is, "Well, we can't let you connect to our system from the outside." The old-fashioned approach is, "We're gonna throw a wall around us and we're just not gonna let you connect." Well, if you do that, you get a walled-garden approach, and you're only gonna get certain capabilities in that. If you're one of the big food companies, they can afford to do that, because they can build all their tools in-house, and they can say, "We're not gonna let anything get outside." Well, really, they can't, because the tools they've built are pretty subpar; they just don't have the insight into the equipment to know what is important, so they build these subpar analytical systems that are just analyzing a trend, and they don't really learn anything from it, and so they don't want to share.
0:36:44
Keith: And they believe that they have it under control, so we never get anywhere, and I've sort of said, "They'll catch up eventually." I'm targeting more the mid-size and smaller companies who don't have that horsepower in-house, who can benefit from what we're doing, but they have to come to grips with, "Yes, this is a new world. We're in the world of IoT now, and you have to accept that, and if you can't, well, we're gonna have an issue." And it takes some time. I have spent a lot of time talking to IT people, but actually, they're not too bad. When you get to talking to IT people, they go, "Yeah, we can do that." It's more a problem when you have this entrenched mindset at the OT level that says, "We're not gonna let this happen." That usually is where it gets killed, and if I can convince the IT people, they convince the OT people, and then we can make things happen.
0:37:42
Keith: That is probably the first big challenge, and the second big challenge is, of course, the one Dan was alluding to in a way, and that is availability of parts. Right now, the supply-chain disruptions make it difficult to know what PLC you can get and what VFD you can get. This is not an issue for Ignition, but it's certainly an issue for us from the hardware point of view, just getting the stuff we need.
0:38:14
Paul: Yeah. Kind of touching on the point you were bringing up there, Keith... A lot of the time when you're updating a customer, giving them a new solution or what have you, sometimes you do have to bring in new tech. Can you talk about that? Are there any hidden costs or points you wanna bring up in regards to digital transformation?
0:38:35
Keith: Well, it's a point of consternation, let me put it this way. It's a point of consternation for my software developers, because I am wanting to push our existing customers forward, and we're actually moving the platform, and of course, that's rather frustrating. They would like to leave it alone, but if I leave it alone, then I'm not really moving this whole platform forward for Blentech. To Dan's point, we need to be ahead, but we can't be too far ahead. At the same time, for instance, a lot of our stuff was done in Vision on the HMI side of the business, and there's an opportunity now with the new login capability in 8.1.6 that allows us to do things that I think we really should do.
0:39:24
Keith: So at some point we have to have a discussion: "Do we move forward? And when we move forward, do we bring the existing customers forward with us?" And that's an expense. There's certainly an expense to it, and the expense is not so much hardware, it's mind-ware: the ability to focus on this and bring these people all forward in a very controlled manner, because you can't bring them all forward on day one. It's not like we roll out an Android update and can throw it out to 50 customers in a day; you have to roll it forward one customer at a time, and that takes time. But in the end, the customers benefit from this constantly improving platform, and of course they end up liking it, but there's a cost to that. So that's part of the balance.
0:40:19
Daniel: I think the analogy that I have with respect to digital transformation is, yes, there are hidden costs associated with it, but there's also an approach you can take to digitally transform a legacy enterprise, and you really don't need to look any further than what folks do when they own an older home and wanna start using some of the smart technologies in it. You don't have to rip all the wires out of the wall to automate a couple of lights. There's a variety of ways of doing it. You can change out a couple of plugs and get a controller hub, and do these things. There's some resistance in this "all or nothing" mentality, which really isn't appropriate. You can attack this point by point. And on the flip side, I think folks tend to approach digital transformation saying, "Hey look, I've already got the hardware. I'll just connect it, look at the data, and slap a machine learning algorithm on it or something, and all of a sudden, insights." And I don't know how many times I've been on a call with somebody who's done that: they put flawed data into it and produced flawed outputs, because they didn't use context to filter out the non-meaningful data.
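The garbage-in problem Daniel describes has an equally simple guard: refuse to feed a model any data point that arrives without its process context. A minimal sketch; the required field names are illustrative, not the actual ARTIS schema:

```python
# Illustrative context fields a data point must carry before it is
# worth analyzing; a real system would define its own required set.
REQUIRED_CONTEXT = ("recipe", "step", "operator", "machine")

def contextualized_rows(rows):
    """Keep only data points that carry full process context.

    rows: list of dicts. Anything missing a context field is dropped,
    because a duration with no recipe, step, or operator attached just
    teaches a model noise.
    """
    return [r for r in rows if all(r.get(k) for k in REQUIRED_CONTEXT)]
```

Filtering like this before modeling is what turns "a trend of numbers" into data that can actually answer why two batches differed.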
0:41:44
Daniel: If it's possible, I think the visual display of our system and our architecture would provide the perspective here in this forum to show exactly how that can be done with our framework. But also to go back to Keith's point about putting walls up around organizations: you go that direction, and although it's possible to accomplish that on your own, you limit the ability to leverage the learnings of equipment suppliers, and you end up with start-up timelines that can take longer, because you're having to go through a learning curve that others have already gone through. And that's the price of isolation, I suppose. But again, it's through the demonstration of our technology that you really get it, and so if it's at all possible, it would be pretty great to share that.
0:42:45
Paul: Well, guys, you've been doing great answering my questions, maybe I kinda want to open the floor a little bit. Is there anything you wanted to share with our audience?
0:42:52
Keith: I was thinking about what Dan was talking about, the machine learning algorithm and also a few other things, and one of the things that comes out of this type of data is the ability to do digital twins. Digital twins is a hot term that's been around for a few years now, but it really is a real thing, because we now have all this data on our customer's application, and we can build them a digital twin that will predict in advance how long a recipe should take to run. So if they build the recipe, we should be able to come back to them with that. I haven't done it yet, but we can do it. This is one of those things that Dan talked about; I don't wanna get too far ahead, but if someone out there is saying, "Well, I'd really like to know how long this recipe is going to take," just start using the system, and it should be able to predict for you how long this recipe will take to produce.
0:43:47
Daniel: Well, and it needed the contextualized steps in order to make that feasible. It has to be built in a way where essentially the activities are unitized, so that they can be summed relative to the variability or productivity that each provides on its own. What this would look like is, when you build the recipe on our cloud platform, it would predict the time to complete it. We have all the math worked out to do this, it's all in place, and it would look a little like this: I grab my phone and say I wanna drive to SFO, and it says if I go down the Golden Gate it's an hour and 20, but I gotta pay a toll; maybe I go around the other way and it's two hours, but I don't have to pay the toll, and I can click and see the difference. So you could reconfigure your recipes before using them and predict the performance based either on the theoretical capability of the equipment, which would be like driving without traffic, or on the empirical performance of your team, which is like driving with traffic. Both of those are feasible, and the delta between them says a lot about your capability as an organization and tells you what the growth potential of your company is.
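The prediction Daniel describes, with the "with traffic / without traffic" analogy, reduces to summing per-step times under two different assumptions once steps are unitized. A sketch of the arithmetic only; the actual ARTIS math is more involved:

```python
def predict_duration(recipe_steps, empirical_means, theoretical_times):
    """Sum per-step times two ways.

    theoretical_times: step -> minutes the machine is capable of
        ('without traffic').
    empirical_means: step -> historical average minutes at this site
        ('with traffic').

    Returns (theoretical, empirical, gap); the gap is the improvement
    potential Daniel describes.
    """
    theoretical = sum(theoretical_times[s] for s in recipe_steps)
    empirical = sum(empirical_means[s] for s in recipe_steps)
    return theoretical, empirical, empirical - theoretical
```

This only works because each step is a unit with its own history, which is exactly why the contextualized partitioning had to come first.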
0:45:10
Keith: And another thing that I think is important is, we talk about this AutoChef 2 on our Blentech equipment, but what we're realizing is that there's a huge opportunity here. This is good, really well-developed software, and it could be used well, not only on retrofitting some of our old AutoChef 1 systems, which I would love to do, but I think the better application is, there are other people out there running recipes with antiquated recipe systems, or who don't even know that they need recipes. And we can apply this technology to that. Our competitors are out there, and we could probably automate the heck out of them. We don't wanna help them sell their kettles, but then again, we do wanna help our customers, and we're about helping our customers improve capacity. So, we're talking to them about these things.
0:46:05
Paul: Well, hey, we've been talking about what you guys do, should we take a look at an example of what you do?
0:46:10
Daniel: That would be great.
0:46:11
Keith: Certainly, that would be great. The first tool I wanna talk about is called the batch comparison tool. And this, once again, is up in ARTIS, so when a customer is using ARTIS, they have a lot of tools available to them, and as Dan was mentioning, we stopped developing tools for a while just because we were getting too far ahead, but I just wanna focus on a couple of them today, at least the batch comparison tool and the step analysis tool.
0:46:38
Keith: So when a customer is running a recipe, we're obviously collecting data, that contextualized data, on all the steps that they're running, and in a given week they may run hundreds of batches, or just a few, or a blend of recipes. Lots of customers run one recipe all day long, and then the next day they run a different recipe. This is some data from some older runs that we did; it's not our customers' data, we really, really protect our customers' data. We don't even look at our customers' data. So this is our data from back in 2019, just to show some of this technology. When you say, "I wanna analyze the spicy cheese sauce that we made in the month of May 2019," I select the data, and it shows me all the batches that we made and tells me how long every batch took: 85 minutes, 85 minutes, 104 minutes.
0:47:41
Keith: Take your pick; you can go through and look at all of these batches, but you can see there are large variations. They go from as short as 84 minutes to as long as 117 minutes. And by the way, the colors represent standard deviation: if you're more than one standard deviation long, the bar is red, and if you're more than one standard deviation short, the bar is magenta. Now, both can be problems. A lot of the magenta bars are incomplete batches that just didn't get finished, and a lot of the red ones just ran slow for some reason or another, but what we wanna do is find out why. So we can focus in on these batches, and I'm gonna focus in on this one for an example and click on it, and what you can see is how long every step took. Up above, you can see what the recipe name was, the lot ID, the batch ID, the cooker it was done on, who was involved. All that information is available for you to see. Now, that's just like looking at a trend, but this is where it changes. A trend only allows you to look at one batch at a time; what we needed to be able to do is look at multiple batches at a time. So what you do is create a golden batch: I copied this one over and said, "This is the one I wanna compare all the other batches to."
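The bar coloring Keith describes, red for batches more than one standard deviation long and magenta for more than one short, can be reproduced in a few lines. An illustrative Python sketch:

```python
from statistics import mean, pstdev

def bar_colors(batch_minutes):
    """Color each batch bar by how far its total time sits from the mean:
    'red' if more than one standard deviation long, 'magenta' if more than
    one short, 'neutral' otherwise."""
    mu, sigma = mean(batch_minutes), pstdev(batch_minutes)
    colors = []
    for t in batch_minutes:
        if t > mu + sigma:
            colors.append("red")
        elif t < mu - sigma:
            colors.append("magenta")
        else:
            colors.append("neutral")
    return colors
```

The point of the coloring is triage: a supervisor scanning a month of batches sees at a glance which runs are worth clicking into.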
0:49:00
Keith: So now I wanna pick out one that's sort of... Let's say one that didn't go so well. It's red here, so let's take this one. So these are the two batches now being compared to each other side by side, and you can see right away there are some differences. Obviously, the mix discharge step here took about 42 minutes, whereas in the golden batch, it only took about 33 minutes. The other one was the bring-temperature-to-160 step, which took 9.1 minutes here and 3.6 minutes there. Big differences. Every one of these things is important. The liquefier water temperature, this is an operator step, and it took this person 10.7 minutes and that person one minute, a huge variation.
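The side-by-side view boils down to a per-step difference against the golden batch. A minimal sketch using the step times Keith just read off; the step names are from the demo, the function itself is ours:

```python
def compare_to_golden(golden, candidate):
    """Per-step time difference between a candidate batch and the golden
    batch; positive numbers are minutes lost versus the reference run.

    Both arguments map step name -> duration in minutes.
    """
    return {step: round(candidate.get(step, 0.0) - t, 1)
            for step, t in golden.items()}
```

Run against the numbers from the demo, the slow batch's mix discharge step shows 9 minutes lost and the heat-up step 5.5 minutes lost, which is exactly what the tool surfaces visually.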
0:49:49
Keith: This type of information is the stuff that we use during those first two months after start-up to help the customer go from that 50% capacity to 100% capacity, because we can zone in on this and identify exactly where they're having troubles in their various steps. This is a supervisor management tool that allows the supervisor to guide the user. Now, in this case, both the operators were B Reynolds, but very often you're having maybe four machines and you've got different operators running the machine, the supervisor can say, "Listen, you're having troubles with your batches, and we can see that it revolves around this step," and then they can coach them on that.
0:50:33
Keith: So then we've taken this concept even further with a tool that's used in two different ways, and I wanna focus on both of them. This is called the step analysis tool. Once again, I wanna go back to the May 2019 data, and I'm gonna take a look at the same recipe we did just before, the spicy cheese sauce. Now, here's a little different view of that data. This is looking at the data from a statistical point of view. We've analyzed all 35 batches by step, and then put up their mean, how long the average step time was for each step, plus we add a color and a size. The color tells you if you have a large amount of variation in the process. The coefficient of variation is the standard deviation divided by the mean, so if you've got a big COV going, we make the dot red. So even though this dot here is smaller than that one over there, this one has a coefficient of variation of 0.43 and that one has 0.16. That's why this one's red and that one's magenta. Because, of course, the standard deviation is going to get bigger the longer the step takes, but what's really important is how long it takes relative to the average.
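The dot coloring is driven by the coefficient of variation exactly as Keith defines it, standard deviation divided by the mean. A sketch; the 0.3 cutoff is our illustrative threshold, not necessarily the one ARTIS uses:

```python
from statistics import mean, pstdev

def step_cv(durations):
    """Coefficient of variation (std dev / mean) for one step's times
    across many batches. A high CV marks an inconsistent step even when
    the step itself is short."""
    mu = mean(durations)
    return pstdev(durations) / mu if mu else 0.0

def step_color(cv, threshold=0.3):  # threshold is an illustrative cutoff
    """Red for inconsistent steps, magenta for steady ones."""
    return "red" if cv > threshold else "magenta"
```

Dividing by the mean is the whole trick: it stops long steps from looking "bad" just because their raw standard deviation is larger.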
0:52:00
Keith: This tool allows the user, which is typically your supervisor or your quality control person, to click on this, like step number five, and we can see all the variation in step time. And you can see there's a huge variation in the step time, from as little as two minutes, this one over here, to as much as this gray bar, which is 14 minutes. Now, the gray bars, by the way: if you do anything with statistical analysis, you have to throw out your outliers, or they throw your data off completely, so we have an outlier filter flagging those, and I'm not even paying attention to those when I'm analyzing the data. But this tool tells you that you have an out-of-control step, and this is really focusing on, "Why am I having this type of variation in a step?"
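The outlier filter Keith mentions can be implemented with any standard rule; a z-score cutoff is one common choice (we don't know which rule ARTIS actually applies):

```python
from statistics import mean, pstdev

def drop_outliers(times, z=2.0):
    """Discard step times more than `z` standard deviations from the mean,
    so a handful of stalled or abandoned runs don't distort the step
    statistics. One common outlier rule; the actual ARTIS filter may differ."""
    mu, sigma = mean(times), pstdev(times)
    if sigma == 0:
        return list(times)
    return [t for t in times if abs(t - mu) <= z * sigma]
```

The grayed-out bars in the tool correspond to the points this filter would exclude: still visible, but kept out of the mean and CV calculations.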
0:52:50
Keith: And if... So think about it. What we just said is that, over here, we said that the average step time was for this is four and a half minutes, but we know we can do this step in two minutes, so why aren't we doing all the step... The step always in two minutes? And so this is a training tool, you just start to focus on the operator, we need to get this step done faster, and you can do this with every step. Now, the next advancement for this is, well, we do want people looking at this data, but once they look at this data, they're gonna say to us, "Well, how do we get our operators to do all their steps in the minimal amount of time?"
0:53:28
Keith: So we have a tool for that that we turn on in their AutoChef 2 system, and when you turn it on, it starts to collect data while they're running a batch. So as they're running this step five, well, we know that the average step time is 4.4, 4.5 minutes, and when it gets to basically the average step time plus a standard deviation plus a factor, because statistics requires it, the tool will notify the operator that they're falling behind on a step, and it will notify the supervisor on a different time scale. In other words, the operator will just get a flashing green light saying, "Hey, this step's running a little bit slow," but the supervisor will get basically a text message saying, "The operator on this machine is having difficulty with this step, you might wanna go check on him." So you don't have to be there all the time, you only have to be there when there's a need.
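The two-tier notification, a gentle cue for the operator and a later escalation to the supervisor, hinges on thresholds built from the step's historical mean and standard deviation, as Keith describes. A sketch; the exact thresholds and the extra "factor" are illustrative:

```python
def check_step(elapsed_min, mean_min, stdev_min, factor=1.0):
    """Two-tier alert for a running step.

    Warn the operator once the step exceeds mean + stdev + factor
    (thresholds are illustrative, not the production values), and
    escalate to the supervisor when it runs well past that.
    """
    warn_at = mean_min + stdev_min + factor
    escalate_at = warn_at + stdev_min
    if elapsed_min >= escalate_at:
        return "notify_supervisor"
    if elapsed_min >= warn_at:
        return "warn_operator"
    return "ok"
```

Staggering the two thresholds is the design choice that lets an untrained operator self-correct before the supervisor ever gets a message.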
0:54:40
Keith: And I wanted to diverge a little bit from this. What we run into, and I think everybody in all industries is running into it now, is the ability to hire trained operators. We can't get trained operators anymore. We get people who are untrained, but we have supervisors who are trained, so what we're trying to do is use this tool to allow an untrained operator to run a batch, and to notify the supervisor when the batch is going out of spec. Because when the production time goes out of spec, typically the batch is going out of spec, so those $10,000 batches Dan was talking about earlier, whether easily recoverable or not, we keep them from occurring. We keep those $10,000 losses from occurring because we catch the batch drifting out of specification before it's actually out of specification, and this tool is extremely, extremely valuable for that.
0:55:34
Daniel: Yeah, and when you're talking $10,000 per batch, even the difference between the mean and the outlier condition, which, although we filtered it out for this analysis, very much occurs, is probably somewhere between $1,500 and $3,000 per incident. And you can see the frequency of those incidents right there, and that's just on this step. These types of expenses, these types of losses are, frankly speaking, inherent in the processes, and we've taken them for granted as an industry. With this technology they can be systematically eradicated.
0:56:19
Keith: That alarming the supervisor tool is a really big deal, that one is, I think a game changer.
0:56:29
Paul: You've sort of figured out how to let people monitor things without actually having to be there, right? That's absolutely an important aspect to this tool. This is fantastic, this is a great tool. Over the years I've seen a lot of different custom implementations of recipe systems and analysis tools, and a lot of the time they're very much bespoke tools, but it seems like you guys went out of your way to make this something that is portable. As we were talking about a little bit earlier, this is something you can kind of drop into, or provide to, different customers, and then it can provide a lot of value to them. It looks great so far.
0:57:04
Daniel: Yeah, I think what we realized through 20 years of taking phone calls and having to solve those problems by getting on airplanes is, there is a pattern. The analysis that you do to resolve these issues is the same, and if that pattern exists, it should be automated, because that's how you can create value: you can serve up that information in a way that people can make those decisions very quickly. And you'll notice we've taken a lot of steps, as Keith has shown here, to make the analytics tools accessible. You do not need to be a statistician to do statistical optimization of your performance. Basic filters applied with radio buttons and basic comparison tools allow anyone to step in, understand variance, and increase the profitability of their organization accordingly.
0:58:07
Keith: And we realize that, still, our software is pretty powerful, so part of our whole service is that we coach our users on how to use this data. Every month we're on the phone with them saying, "Hey, let's go through another training, let's talk about it." We write a lot of manuals and make a lot of videos to help our customers understand what we're doing.
0:58:33
Keith: I threw this trend up here because everybody is used to this, everybody sees trends like this, and these are nice, and we make them available to our customers to see. But what you see when you look at a trend is only a linear look in time, and a linear look in time makes it really hard to compare one batch to another batch. That was why we did the batch comparison tool, taking us back to what I showed earlier. You can't see enough with this, and we recognize this. Dan is a food scientist; we've both been in this particular industry for over 20 years, and we know that you can't just look at a trend to learn anything. You've got to learn a lot more than that. You need to compare batches side by side. Yes, HACCP data, which is important, like temperature control, is captured very nicely by this trend, and that's valuable. But the real question is, why is there a quality difference between your batches? And that's what we start to get out of the other tools.
0:59:33
Daniel: But one of the things that most food production around the world needs is record-keeping: confirmation of the conditions under which the batch was performed. And there's a way to level that up as well, and that's something we've included as standard in the ARTIS intelligence system: upon completion of a batch, the batch record is automatically rendered and emailed in PDF format to the person, or the location, of the customer's choosing. An integrated app that has been created also allows the input of information, whether it's barcode scanning or photographs, or other forms of data, which is then timestamped and included both on that report and made available for subsequent advanced analysis, which is something that we're already working on.
1:00:35
Keith: So I want to go over this batch completion report, which I earlier called a batch comparison report. This is mac and cheese; we talk a lot about mac and cheese because it's a very popular product in the United States. This is a mac and cheese batch being made. What happens is, when AutoChef 2 is running, it's following every step and transmitting this information up to ARTIS. And at the end of the batch, ARTIS recognizes the batch is complete and issues a batch report. It uses the reporting feature in Ignition, generates a PDF, and sends it to the customer's copy list. They have a role manager and can set up who's supposed to get what, and so this is one of the things that you can report to various people.
1:01:22
Keith: And it shows you the steps that the batch went through, plus a few critical things. In this one, you can see when we're doing direct steam injection, you can see the temperature and valves opening, particularly the discharge door opening and closing. On the right, you can see how long every step took. So this gives you a snapshot of the process, and you can use it to review it quickly. If you don't have access to the ARTIS portal to look at data, you can just grab this PDF and ask, "Does this look right?" And it might alert somebody to something that's off and save them from making a mistake: by sending it out, somebody might say, "Oh no, this is wrong. This batch wasn't right, and there's something we need to do about it."
1:02:05
Keith: Also in this batch report is, we collect all the weight that is going through the system. So you can see how they're doing ingredient additions and the weight is slowly building up. And then you can see the discharge and where they end up with 36 pounds of material left in the cooker at the end of the batch. Lastly, I wanna scroll back up. You can see where Dan was talking about ARTIS assist. These black lines on here, this is when ARTIS assist was being used to capture an image. If we scroll down, we can actually see the images that were captured during this batch, and you can actually have comments on it, so if an operator wanted to capture maybe some ingredient that didn't look right, they could take a picture, and then that would be recorded into the batch record forever. It goes right into the database and it's collected with it, so that we have that visual indication of anything that's going on. And on top of that by the way, we can use this ARTIS assist app to scan barcodes, so we can also take a barcode of an ingredient so that we get exactly...
1:03:17
Keith: Exactly see what's going on lot by lot. In other words, when a user scans a barcode, that's built right into the database, the information on that barcode, it is recorded so that it is forever in the database, so then now you can access that information when you're going back to track quality, you can see exactly what lot of what material was used. So this ARTIS assist tool is pretty nice, and it wasn't possible until we incorporated Perspective.
1:03:47
Daniel: And it also opens up the ability to do advanced contextualized image analysis. If images are taken in a controlled manner, there's huge potential in image analysis with that data: provided the data is there, and you know the context, how that image was taken and under what circumstances, you can do some pretty powerful things with it. That's some of what we're experimenting with as well. A lot of companies appreciate this batch reporting to drive their day-to-day production as well as to confirm compliance with their HACCP programs, and the systems can be configured with a confirmation of temperature probe calibration for HACCP compliance and governmental compliance. This particular report does not show that, but when that feature is enabled, it's on there.
1:04:40
Keith: As Dan mentioned earlier, we keep advancing so fast that if I had a more up-to-date one, I would probably be showing new features that are on there. And if you come back and take a look at a batch report three months from now, it might be even more advanced than it is today.
1:04:56
Daniel: Or just come visit us at the Blentech Innovation Center.
1:05:00
Keith: That's a good idea. There are so many things that you can show that... We can go on forever and ever, but if somebody wants to talk about it more, then they should probably just talk to us directly.
1:05:11
Paul: All right, well, I guess we'll wrap it up here. Dan and Keith, thank you very much for joining us today and sharing a bit of what you guys have been working on. This seems like a great solution. So thanks again for coming on here.
1:05:23
Daniel: Thank you.
1:05:24
Keith: Thank you for having us.