CloudUp Ep04: CI/CD as a trend in DevOps

Continuous deployment becomes more effective depending on how we design our infrastructure to deploy apps. In this episode of CloudUp, we share insights on how to get started with continuous deployment and what needs to be in place for a smooth CI/CD transition.

Meet the Speakers

Han Kim

Principal Architect

Jeremy Pries

Director of Cloud Infrastructure

Transcript

Jeremy

CI is a little easier to picture, you know. As we transition into CD and try to deploy, you know, the name of the game in the CI/CD space is to iterate faster, release things faster, add customer value faster. So, CD’s a little bit harder, would you agree?

Han

I mean, extremely. I think the bar between CI and CD is extremely high.

Jeremy

Yeah.

Jeremy

Yeah, for sure, for sure. So, CD becomes more effective depending on how we design our infrastructure to deploy our app. Today we’re gonna talk about CI/CD as one of the top trends in DevOps. So I know you’ve done some projects in the CI space, and, like, what kinda stuff are you doing now that we’re not tied to physical machines anymore at all, even if they’re VMs? Like, we don’t buy a set of hardware anymore. We have, kinda, this seemingly limitless data center. Like, what does that change?

Han

Well, I think that, historically, it was like, you have a giant build server, and if we’re talking about things that require a build, like Android or Java, or things that require time to go through a process where there’s maybe even automated testing or some testing as part of that, we’re looking at how we change the mindset from “we have a fixed amount of compute that we can leverage all the time, ’cause we’ve prepaid for it, ’cause it’s on-prem, it’s hardware,” versus the ephemeral compute, which means that we can take massive machines for brief amounts of time to do things faster, right?

So, the key here is, like, on an equivalent level, if you say, “Okay, the processing of the on-prem machine has so much capacity and speed. The equivalent in the Cloud might have, you know, a similar speed or capacity.” The difference is when things start to scale. So, projects that I’m working on require the ability to keep up with a dynamically growing set of requirements, which require more and more developers to come into play and build these rather large artifacts, and do that in a way that they can do it any time, on demand. And so we spin up giant build servers that last for minutes, instead of, like, an hour or two hours of continuous running, to build these artifacts and then disappear.

And I think part of the CICD methodology that we use that works the best is to make sure that we take everything that doesn’t require consistent and constant compute, like you’re talking about, and make those into larger, ephemeral machines, so that we can leverage the speed change without necessarily paying for something that’s lying there idle most of the time, or even some of the time.
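
A rough sketch, in Python, of the ephemeral-build pattern Han is describing. The provision, build, and teardown callables are hypothetical placeholders for whatever your cloud provider or CI tooling actually exposes; the point is the shape of the pattern: a big machine, a short life, nothing left idle.

```python
from typing import Callable

def run_ephemeral_build(provision: Callable[[int], str],
                        run_build: Callable[[str], int],
                        teardown: Callable[[str], None],
                        vcpus: int = 64) -> bool:
    """Spin up a large VM just for one build, then tear it down."""
    vm_id = provision(vcpus)          # hypothetical: create a large, short-lived build VM
    try:
        exit_code = run_build(vm_id)  # hypothetical: clone, compile, and test on that VM
        return exit_code == 0
    finally:
        teardown(vm_id)               # the machine only exists (and bills) for the build itself
```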

Jeremy

Oh, gotcha, so, we have a pipeline that fires off a build, right, instead of having numerous people using the same build server, like a pipeline that points to a specific build server? You’re saying, like, make a bunch of copies of that?

Han

Yeah, so build up a new one, let it do its thing, and then die. And everyone has their own, so if you have 500 developers and they’re all tryin’ to use one or two or three on-prem boxes, like, it gets inefficient really fast. There’s no ability to scale that really easily.

Jeremy

Yeah, and I know you’re able to size the VMs then, a little different than if you were, say, running VMware upfront.

Han

Yeah, for sure, right.

Jeremy

Right? I think we might allocate a bunch more VCPUs and try to accelerate that build process.

Han

Yeah, well, we have almost, I could say infinite, but a great deal more headroom, you know, in terms of what we can size around, versus what we have on-prem, which is constrained by the machine that the VM is running on.

Jeremy

Yeah.

Han

Yeah.

Jeremy

Yeah, so this is, like, really cool for long-running builds, right? Or what were long-running builds, at least, if something took a couple hours.

Han

Yeah, or compute-heavy work that requires a lot of processing. Like, image and video processing is a good example. Doing that on a single machine takes forever, as anyone in production who works with 4K or 8K video knows, but if you push it off to ephemeral machines and let them do it asynchronously, they can kick off builds and start doing processing outside of your work environment or your work time, right?

Jeremy

Yeah.

Han

Which makes it much more efficient for people, I think.

Jeremy

Yeah, for sure. And per-second billing, does that matter in this space? It sounds like it’d be an advantage.

Han

I think it’s huge, because if we look at, like, the on-prem world, we have to forecast, when we do our leases for hardware, in advance what we think demand will be. So, like, HVAC, for instance. You have to kinda plan for the worst-case scenario, and how do you do that in an environment where market demand and business demand change, especially when you look at 3-year leases, or multi-year leases, right? With per-second billing, we’re not burdened by the inefficiency of pre-purchasing a huge amount of stuff that we may or may not use, or that we might saturate all the way and then we’re left in a difficult situation ’cause we don’t have enough compute resources. We only use what we need at the time, and, architected with the infrastructure-as-code, application-as-code model, you know, it’s way more efficient.

Jeremy

Yeah, for sure. So even a build that took a few minutes, if we have per-second billing, could save a bit of money, right, by paying on the second instead of rounding up to the next minute?

Han

At any scale, for sure. I think that’s definitely the way to go.
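
The arithmetic behind Jeremy’s point is easy to sketch. The rates and build counts below are made-up illustrative numbers, not real pricing:

```python
import math

hourly_rate = 3.20      # assumed $/hour for a large build VM
builds_per_day = 400    # assumed number of short builds across a team
build_seconds = 150     # a 2.5-minute build

per_second = builds_per_day * build_seconds * hourly_rate / 3600
per_minute = builds_per_day * math.ceil(build_seconds / 60) * 60 * hourly_rate / 3600

print(f"per-second billing:  ${per_second:.2f}/day")   # ~$53
print(f"per-minute rounding: ${per_minute:.2f}/day")   # ~$64, every build rounded up to 3 minutes
```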

Jeremy

Yeah, yeah. And so global development teams can take advantage of this kind of thing too, right?

Han

Oh, especially, because, you know, depending on which cloud provider you’re on, like, you can stand up things anywhere in the world, in multiple regions, and you get that multi-region deployment of all your builds and your code repositories, and then in those regions you’re leveraging these build servers that come and go. So, you know, the cost might increase a little bit because of the regions that you’re in, but then, again, the efficiency is so high, because you pay per use, that you can exchange cost for speed of deployment, in essence.

Jeremy

Yeah. So, CD’s a little bit harder, would you agree?

Han

I mean, it’s extremely. I think the bar between CI and CD is extremely high.

Jeremy

Yeah, for sure, for sure. So, CD becomes more effective depending on how we design our infrastructure to deploy our app.

Han

Yeah, well, I don’t think you can do it any other way, really, because, like, in the past, think about doing continuous deployment on the on-prem environment. Like, how would you even really go about doing that? Like, you would have to pre-prep a situation that is wildly complicated, you know?

Jeremy

Yeah, yeah.

Han

Nowadays with infrastructure as code, not so much, ’cause you can actually stand up the infrastructure as well as applications.

Jeremy

Yeah, yeah. So, our average customer is a Netflix, right?

Han

No, no.

Jeremy

Right? So, like, let’s assume we have some CI in place. How do we get started on continuous deployment? Like, what’s the easiest spot to start from?

Han

Oh, man, that’s a tough one, because, you know, to get to CD, or even to CI, or DevOps in general, it’s the whole technology, process, and people thing. We have to make it a mindset, an organizational mindset. But let’s say we’re saying, “Okay, we’re already deploying now and then, and we’re making updates now and then. Now we wanna allow developers to respond as quickly as possible.” I think we have to look at, okay, we’re not deploying to the same machine, you know, we’re not doing the old-school way of replacing what’s there with the new code and testing it, because there are lots of problems with that. There’s errors, or issues, or multiple people are making changes to the code base and you’re not really aware of what other teams are doing, especially multi-national teams, et cetera. I think the better way is to actually stand up another ephemeral infrastructure, like a replica, deploy to it, and do traffic shaping from the network-as-code piece, where we can say, “Let’s put, like, five or 10% of the people onto the new code. Let’s just see if it’s working in the wild. Let’s see if we can handle the scale.” If not, we can roll it back, and if it does work, we can change the split from 90/10 to 100% for the newly deployed code and take down the old one.
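
A minimal sketch of the promote-or-roll-back decision Han describes. The thresholds are illustrative, and the returned split would actually be applied in your load balancer or service mesh config, which isn’t shown here:

```python
def next_traffic_split(canary_error_rate: float,
                       baseline_error_rate: float,
                       tolerance: float = 0.005) -> dict:
    """Start at 90/10; go to 100% new if the canary holds up, else back to 100% old."""
    if canary_error_rate <= baseline_error_rate + tolerance:
        return {"old": 0, "new": 100}    # promote: the 10% slice looked healthy
    return {"old": 100, "new": 0}        # roll back: the new code misbehaved in the wild

print(next_traffic_split(canary_error_rate=0.004, baseline_error_rate=0.003))  # {'old': 0, 'new': 100}
```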

Jeremy

Oh yeah, cool, so infrastructure as code comes back into play, right? I mean, we already wrote the code to deploy that whole infrastructure, so you could replicate it for every release.

Han

Yep.

Jeremy

Wow, ultimately everything.

Han

The whole environment. Everything that supports the environment.

Jeremy

Yeah. So I think, well, one thing I’ve talked to customers about is that it’s kind of ongoing. It’s like a continuous improvement process.

Han

Yeah, for sure.

Jeremy

Right? So you start with CI. You know, basic CI is something that most development shops are already doing, right? But then you add on a little bit of automation at a time, and eventually, ten years from now, you might be a Netflix, or maybe the tool sets are more mature–

Han

Yeah.

Jeremy

right now, to make it a little more achievable than when they started, however many years ago that was.

Han

Well, I’ve seen, I think, you know, the trend is that lots of people try to offload the burden of the automation on the tool-set side to, like, a web-hosted CI tool.

Jeremy

Oh, sure, yeah, like CI as a service kinda tool?

Han

Exactly, so that’s kinda been the new thing. Like, I know there’s a whole ecosystem of them, and Google, and other things on the rise now, but I feel like the tool, in and of itself, is never gonna be enough. Like, there has to be a core, fundamental organizational mindset to be able to support this type of thing, and that’s why those take iteration and time, because the change management of each component piece leading from CI to CD needs to be in play before it actually can happen in a way that’s not a disaster, you know?

Jeremy

Yeah, yeah, cool.

Jeremy

Thanks for watchin’ this episode of CloudUp!

Han

Leave your comments and questions below and win some Agosto swag.

Jeremy

Thanks, and see you next time!

CloudUp Ep03: Infrastructure as code in DevOps

A common myth in the DevOps space is that infrastructure as code is only designed for small startups who are just getting their business started; however, what many don’t realize is the power and effect it has in the enterprise. On this episode of CloudUp, we’ll go through how to successfully manage your infrastructure as code and the benefits it can bring to your organization.

Meet the Speakers

Han Kim

Principal Architect

Jeremy Pries

Director of Cloud Infrastructure

Transcript

Han

I was just at a Google PSO event in New York and they have this new diagram which is a subway map so it starts with the main trunk and then it splits off into network, application, and policy but it begins with infrastructure because without that, nothing else can actually have a life right? So, the idea of not using infrastructure as code seems old school, maybe that’s the right way to put it.

Jeremy

It does, yeah, it does. Today we’re talking about infrastructure as code as a trend in DevOps. So, with infrastructure as code, a lot of customers view it as something that’s really designed for small startups who are just getting going and they don’t really see it as something that works in the enterprise. Maybe it has this sort of superhero complex that goes with it where an individual is able to develop the infrastructure as code from the ground up all the way through the app stack and it doesn’t really work in a team approach. So, they view it maybe more as a deployment script methodology.

Han

Versus what you’re thinking about, you’re saying, or how you would frame it?

Jeremy

Right, versus using it as a way to actually manage your infrastructure going forward. So, instead of just deploying a particular environment and then managing it with some other config-management techniques, live the lifestyle of infrastructure as code which means every single change, you’re gonna go back to the code repo and go back to the code lifestyle and deploy it.

Han

From the infrastructure and network side and all that, you’re saying, like, managing that in a code repo as if it were exactly that: code.

Jeremy

Yeah, for sure, for sure.

Han

So, helping to facilitate, sort of, the separation of duties, who has control over each aspect of that stack, really, you know? Not having one superhero controlling all of it.

Jeremy

Yeah, instead of having the super hero, we have different responsibilities within the environment as well and we have more than one person on a particular team.

Han

That makes sense.

Jeremy

So, take a network person, for example, they may manage just the network portion of the environment and the rest of the team contributes at other levels.

Han

Makes sense. I was just at a Google PSO event in New York and they have this new diagram which is the subway map so it starts with the main trunk and then it splits off into network, application, and policy but it begins with infrastructure because without that, nothing else can actually have a life right? So, the idea of not using infrastructure as code seems old school, maybe that’s the right way to put it.

Jeremy

It does, yeah, it does. I think implementing infrastructure as code is a little bit of an investment upfront; the more you do it, the easier that becomes, and we all have our habits to go back to, you know, pressing buttons or maybe running command-line utilities to manage an environment in sort of an ad hoc manner that’s not very controlled. And the other benefit we get out of infrastructure as code is we can rebuild the environment as we need to. It could be part of our DR plan, could be part of our duplication plan, like if we need numerous dev environments.

Han

For sure

Jeremy

Or, lower tiers to go along with production.

Han

I think, like, you and I differ slightly on how we think about implementing infrastructure as code, ’cause I think you think of things in a very holistic manner, and, this is kind of new, the policy as code wraps around it, so developers have a little more free-form ability to stand up their resources. And I like it more on the front end, where we control, let’s say, a self-service or ticketed UI type of thing, you know? So we control it on the front end rather than the back end, you know?

Jeremy

Yeah, for sure, for sure. Yeah, I mean, I think that’s maybe a next phase for infrastructure as code. We have legacy IT that is struggling to understand how they work in a Cloud environment, and so if they think in terms of policies, eventually the software stacks will get to the point where we can allow self-service throughout the organization and our central IT is able to control the policies. It says what’s allowed and what isn’t allowed, and then the users can commit code to the pipeline; if it’s allowed, cool, it goes through, everything’s fine. But if we find something, like, let’s just say we have a policy that says no bucket should be open to the outside, we could have our pipeline deny and reject that change, push it back if it violates a policy.
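
A minimal sketch of that kind of pipeline gate, assuming the plan has already been parsed into a simplified list of resources; the field names and resource type here are hypothetical, not a real Terraform schema:

```python
PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

def check_no_public_buckets(planned_resources: list[dict]) -> list[str]:
    """Return a list of violations; an empty list means the change may proceed."""
    violations = []
    for res in planned_resources:
        if res.get("type") == "storage_bucket_binding":        # hypothetical resource type
            if PUBLIC_MEMBERS & set(res.get("members", [])):
                violations.append(f"{res.get('name')} grants public access")
    return violations

if __name__ == "__main__":
    plan = [{"type": "storage_bucket_binding", "name": "logs-bucket", "members": ["allUsers"]}]
    problems = check_no_public_buckets(plan)
    if problems:
        # reject the change and push it back to the developer
        raise SystemExit("policy violation: " + "; ".join(problems))
```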

Han

And see my way of thinking is let’s just not let them ever do that up front you know?

Jeremy

Right, right, absolutely, and there’s a couple different ways to implement that right? We know the policy as code software is getting better. It’s kind of emerging.

Han

Yeah, I think all the Cloud providers now are starting to dive into those CI/CD cloud controls, because I think we’re seeing that the initial move to the Cloud seemed easy, but then the management and the operations were more and more difficult, with controlling costs, controlling access to resources, you know? It gets out of control really quickly if it’s not set up properly in the beginning.

Jeremy

Yeah, yeah, for sure. Access control can be all over the place, so that’s actually the easiest spot to get started with infrastructure as code: simply provisioning things like projects and IAM roles and managing who has access to what. It’s very easy to manage now with something like Terraform, and you live the lifestyle, manage it every day. You need to give someone access to something? Go ahead and add the new code, commit that, and it’ll push right into your environment. It’s an easy way to get started.

Han

Besides Terraform, what other tool sets have we been seeing customers kinda migrate to? ’Cause I think the challenge is that old-school, on-prem IT, as they move into cloud, have a difficult time letting go of the tools, methodologies, and processes of the pet-based approach to infrastructure in this new kinda ephemeral, scalable, open-ended infrastructure universe. What other tool sets do you see that would easily port over from on-prem into, let’s say, the Cloud world, versus the ones that don’t really work as well?

Jeremy

Yeah, I mean, good question. Terraform’s definitely the strongest tool set in terms of infrastructure as code and we see a lot of skill sets out there in config management products like say, Ansible for example. There’s no reason you couldn’t bring those tool sets into Cloud and maybe even have a mix right? Like everything isn’t ephemeral in Cloud right?

Han

For sure.

Jeremy

You have stuff like databases and other things that just aren’t gonna go away. Those need to pre-exist somewhere along the way, and maybe those are good tool sets to mix in. We’ve also found with cross-training that if you know Ansible, the config language is totally different, but the mindset isn’t that far off.

Han

Yeah, right, well, I think as long as it’s declarative and allows us to kinda track state, a lot of tool sets will fit the bill, and there’s plenty coming up now that are web-based, I think, as well, that lend themselves to that same declarative model of infrastructure, network, and policy.

Jeremy

You know, there’s numerous different DevOps roles, and I think understanding microservices architecture is a really important skill to have. Performance problems are a great example: it’s tough to diagnose what’s going on without understanding the apps, how they talk to each other, and what their dependencies are. That’s definitely an example of where you need quite a bit of dev skills to be able to troubleshoot.

Han

Where do you think serverless and things that are kinda moving a little more into the future play in, in terms of infrastructure as code, or infrastructure concepts in general?

Jeremy

Sure, yeah, I mean it kind of is, we need to use infrastructure as code to set up the plumbing in order for the pipelines to be deployed right? So, we don’t manage as much stuff with infrastructure as code, but it still needs to exist in order for things to work so it still is relevant in that space for sure.

Han

Well, do you think that, in terms of policy as code, or maybe even infrastructure in general as code, it lets organizations control costs more? ’Cause I know the chief complaint is that it’s easy to lift-and-shift VMs over, but then there’s no inherent process or model to manage, or have visibility into, how much things will cost. How do we manage that better with infrastructure as code?

Jeremy

Yeah, I mean, good question. So, we know very well what we’re provisioning with infrastructure as code. There are some wild cards there, like egress, for example, that wouldn’t really be controlled with infrastructure as code, but we’re able to see the rest in the config, right? We know HashiCorp just released a capability in their enterprise product to give you a price and to set a policy based on that price, so we could maybe follow a separate workflow if something costs more than it’s allowed to. So we can set some policies around what things cost. It’s a totally emerging space there. I mean, in general, infrastructure as code helps us understand what we’re deploying, so we could maybe make a quick calculator around what it costs.
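
A sketch of that quick-calculator idea. The rate table, resource list, and budget threshold are made-up numbers purely for illustration; egress, as noted above, isn’t captured:

```python
HOURLY_RATES = {"n1-standard-4": 0.19, "n1-standard-16": 0.76}   # assumed $/hour

def estimate_monthly_cost(resources: list[dict], hours_per_month: int = 730) -> float:
    """Price a planned change from the declared resources."""
    total = 0.0
    for res in resources:
        rate = HOURLY_RATES.get(res["machine_type"], 0.0)
        total += rate * hours_per_month * res.get("count", 1)
    return total

planned = [{"machine_type": "n1-standard-4", "count": 3},
           {"machine_type": "n1-standard-16", "count": 1}]
cost = estimate_monthly_cost(planned)
print(f"estimated compute cost: ${cost:.2f}/month")
if cost > 1000:   # example policy threshold
    print("over budget: route this change to a separate approval workflow")
```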

Han

So, like, we’re starting to see how policy as code and infrastructure as code can then distill down to the management modality, the business-level management of the Cloud.

Jeremy

Yeah, yeah, agreed.

Jeremy

Thanks for watching this episode of Cloud Up.

Han

Leave your comments and questions below and win some Agosto swag.

Jeremy

Thanks and see you next time.

CloudUp Ep02: Getting Started With Machine Learning For Predictive Maintenance

Predictive maintenance is a growing buzzword in the industry, but how many companies are actually making progress? Some companies are reporting a reduction of equipment downtime by up to 50 percent with predictive maintenance using IoT. The main takeaway is that you can save a lot. On this episode of CloudUp, we’ll get past predictive maintenance as a buzzword and into the what, why, and how of making progress at your company.

Meet the Speakers

Rick Erickson

Co-Founder and Chief Cloud Strategist

Mark Brose

VP of Software Engineering

Transcript

Mark

What we’re seeing is there just hasn’t been quite as much pickup of it as we would have expected. So, a lot of interest, a lot of perceived value, but it’s moving a little bit slowly. I think what we’re seeing, even for single use cases, you’ve got clients that are seeing hundreds of thousands of dollars that can be saved. So the takeaway there is there’s a lot of stored value here in doing something with predictive maintenance, so it’s definitely worth taking a look and seeing if there’s any value you can add to your company.

Rick

On this episode of CloudUp, we’re gonna get beyond predictive maintenance the buzzword and focus on the what, why, and how you can make progress quickly.

Rick

So welcome to Cloudup, the series where we explore the coolest things built on the cloud today brought to you by Agosto. Today’s topic we’re gonna use machine learning to predict maintenance events and there’s a ton to dig into here so let’s jump into it. First off let’s talk about how companies perform maintenance today. Is it proactive, is it reactive, and what’s the impact of that?

Mark

Yeah, we definitely see companies doing both reactive and even proactive maintenance. You know, I think the more advanced companies we’re seeing are doing proactive maintenance in the sense that they’re really doing maintenance on a schedule, right, so preventative maintenance. Over time you can develop some experience with when you should replace things, but what we see there is that it’s not optimized, right? In some cases you’re gonna replace things too early, in some cases you’re gonna wait ’til it fails, so it’s difficult to get it just right, and there’s some cost to that. If you do it too early, you’re losing some material usage that you could’ve gotten, and at volume and scale that can be a lot of money. If you wait ’til it fails, obviously you might have, if we’re talking tires in a fleet, trucks out of service, maybe an accident, liability issues, so there’s a cost of waiting too long. What we can get to with predictive maintenance using machine learning is to get that window a lot tighter. I’m not saying we’re ever gonna get to 100%, but you’re gonna get a lot tighter inside that window, so you’ll be able to save on material, but also prevent more of those failures from happening. That, to me, is the big impact.

Rick

Yeah, so going back to your point about just reacting and not being able to predict the impact of failure, not good, right? Sounds kinda like how I rolled in high school with my Malibu. Let’s talk a bit about how using predictive maintenance can help avoid some of those unexpected costs.

Mark

Again, it’s really being able to optimize that time window. We’re doing that by having a lot more data. We’ve got kind of a wider range of data that we can take advantage of, so we can use somewhat static data, like the make and model of a tire or a piece of equipment, which tells us something about how it’s constructed, and there’s some predictive value to that; to how long it’s been running and what conditions it’s operating under; to real-time telemetry data like temperature, tire pressure, vibrations, that kind of thing. All those things together can really be used to build a good model.

Rick

So these are key attributes, and ultimately humans can understand the basis of those key attributes, but I imagine that at scale, when you have millions and millions of events, it’s really hard to understand what’s happening, how to use that data, and how to create classified information that fits into categories that we understand.

Mark

Yep.

Rick

I imagine by using data science and Cloud ML, we can use some of that information to train models, so how does that all work?

Mark

Yeah, so that’s a good question. We find that in machine learning there’s still a lot of value in human input, and the primary value of that is in this area we call feature engineering. It’s the fancy word for knowing what data elements will be predictive of failure. So it’s still helpful to have domain knowledge to pick the data attributes that should be included, but then we can take advantage of the machine learning technology to take that data and create the algorithm from it based on machine intelligence, so it’s not something where the human has to spend all this time engineering the algorithm. Their focus is on getting high-quality data in place that we can use to be predictive. And the cloud really brings us the platform to run all that on, because a lot of the time the data scientists working in this space won’t have an infrastructure background or a development background. So what the cloud platform gives us is a lot of that just as a service, so we don’t need to spend a lot of time, or have broader skill sets on that team, to build out these models.

Rick

Sweet. So now we’re gonna take a quick break to hear from our sponsor and then we’ll come back and do some jamming on the jam board.

Sarah

More and more we’re seeing organizations wanna be really strategic with their success and part of that means they’re moving to the cloud. At Agosto, we’re seeing a big uptick in clients using Google Cloud platform for their online business operations. So whether you’re thinking you need a boost with AI, machine learning, or you just wanna build something new and fast, Agosto is an award-winning partner who would be happy to help you with your needs. We’ve been Google’s Cloud partner of the year multiple times. We’ve got a few other awards as well and we would love for you to get to know more about us, okay, can’t take anymore, by visiting agosto.com. You can get a free guide to your company’s strategic plan for heading into the cloud. We’d love to see you there. Now back to the show.

Rick

Welcome back to CloudUp. In today’s deep dive session, we’re gonna focus on how you can get started on predictive maintenance, and Mark’s gonna take us through our approach.

Mark

Yeah, so definitely where you usually wanna start is pretty basic: just framing your business problem. Essentially that’s just thinking about use cases that make sense for your business. It’s a pretty simple approach; we essentially start out by ID’ing the business area we’re focusing on. An example of that, let’s take the tire example we’ve talked about a little bit, would be: we’re spending too much money recovering from tire failures in our fleet. So just frame up that that’s an area you wanna focus on, and then what we wanna do is dig in a little bit deeper and create an actual business question that we’re trying to answer. In this case, that’ll be something like: can we use predictive maintenance with machine learning to better predict failure than we’re doing right now with our preventative maintenance program? It’s a more specific thing that we’re trying to target.

Rick

And this is really about impact, right? So we’re trying to frame this up as high value, low risk, but a problem that’s going to impact the organization in a material way, one that’s considered financially viable.

Mark

Yep, but it’s a high-level problem; it’s a framing of the problem in a way that we can target: is it something we can do? You have to think here about what failure means. The tire case is pretty easy: the tire blows up. But with a machine, does it just slow down, or does it actually shut down the line? You have to frame what failure is for us, what the thing is we’re trying to detect, and get some sense of what’s different from what we’re doing today. That leads you to selecting a business metric. Say we’re trying to target tire failure; that business metric is really the success metric we’re trying to get to. Say right now we’re predicting at about 70%. Can we get to 90? So we have a target. In this first phase, you’re just trying to narrow down the focus of what it is that you’re trying to do.

Rick

And percentages obviously matter when you frame it up like this, but in some cases, if the impact, even if it’s 75 or 80%, so let’s say it’s 5% or 10% better, in some use cases that can be really impactful.

Mark

You can turn that into we’re moving from 80 to 90 and you’d be able to turn that into dollars. That might be for you 50 grand, maybe it’s like hundreds of thousands of dollars depending on what that looks like for you.

Rick

And so the way to think about this is: I’ve got humans today that maybe can react to a problem, but now I’m using a system that can handle massive amounts of information, predict an outcome, and alert on that prediction. You can also think about sort of the downstream, as I’m framing the business problem: what’s the ultimate impact if I can go from this narrow use case to something that adds scale?

Mark

Absolutely. You should think about this is a place to start and a lot of times what we’ll do is we’ll look at a whole bunch of use cases. You might start out with maybe 10 or 12 ideas, run this process through with those 10 or 12 ideas and out of that we’ll get to hey, if we can hit these success metrics, these things logically bubble to the top, so we’re looking for hey we found some high value potential here and then what we’d move into is how do we start to test that out and see if we can actually do it.

Rick

Cool. All right.

Mark

So that’s framing, where we start, and then where we’re going is really to somewhat of an iterative process of learning. Think about this as a circular pattern, where we’re doing a continuous process, and where we’re starting, where all the work really needs to be, is in data prep. So data prep, we talked about this a little bit already, is essentially pulling in data from a bunch of different places and doing some data exploration, making sure we can suss out what the predictive variables are. So again with the tire example, we’re talking about things like make and model of tires, how long the tires have been running, temperature, tire pressure, all these kinds of things. We’re pulling that together, and this is the place where domain knowledge is important. You build this stuff up, you explore it, you do some visualizations maybe, you do some graphing, you have some thoughts on what’s predictive, but in this phase you’re really spending time determining whether it really looks like it’s gonna be predictive. In this case, sometimes maybe you’re creating some absolute values, you’re looking at median values, or moving averages of variables, so this is the part where you’re probably spending most of your data engineering and science time: pulling that data together and shaping it in a way that you can use it.
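
A small sketch of that data-prep step using pandas, turning raw readings into smoothed candidate features; the column names and values are illustrative, not real fleet telemetry:

```python
import pandas as pd

telemetry = pd.DataFrame({
    "tire_id":  ["t1"] * 6,
    "pressure": [32.1, 31.8, 31.5, 30.9, 30.2, 29.6],   # psi readings over time
    "temp":     [80, 82, 85, 91, 96, 103],              # degrees F
})

# Rolling averages smooth out sensor noise and often carry more predictive
# signal than any single reading.
telemetry["pressure_ma3"] = telemetry.groupby("tire_id")["pressure"].transform(
    lambda s: s.rolling(window=3, min_periods=1).mean())
telemetry["temp_ma3"] = telemetry.groupby("tire_id")["temp"].transform(
    lambda s: s.rolling(window=3, min_periods=1).mean())

print(telemetry.tail())
```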

Rick

In this phase, are we trying to understand what a representative amount of data or a relationship means to humans yet, or are we just trying to make sure that the data is in a form that’s repeatable and consistent?

Mark

It’s a little bit of thinking about what you can actually operationalize and then from a model-building perspective, it’s about throwing out things that aren’t useful, shaping things in a way that look predictive, it’s really kind of all of that. You hear terms around data engineering, data normalization, that’s kinda what’s happening in this phase.

Rick

Okay.

Mark

We mentioned a little bit earlier sort of your feature engineering and you’re really pulling together the things that look like they’re gonna be valuable.

Rick

Okay, and throwing those things out that maybe won’t be.

Mark

Exactly.

Rick

So it’s not just getting everything, it’s also trying to be smart about what you are using because we’re trying to again, move relatively quickly to get to some validation of the business problem.

Mark

That’s right. In some cases, you’ll already have this, like in a data warehouse. In some cases, there’s gonna be some work to get the data. This is the foundation really. We have to have good data and lots of it.

Rick

Cool.

Mark

So then from there, we move into what we call model development. So here, this is really where the machine learning part kicks in, and sometimes there are straightforward answers to what type of ML approach we’ll use, but you may experiment with a few different options and kind of see how that susses out. There’s a little bit of data science to this part, but a lot of it is letting a machine now use all this data that we’ve got and build models from it. And we’re doing this typically with a subset of our data as we’re engineering models that look like they’ll be valuable.

Rick

Okay.

Mark

This is the machine part, and then we’ll move from there into evaluation and review. In this phase, again, we’ve taken a subset of our data to build a model, and now we’ll run that model against a test holdout set of that data. So we’ll have trained a model, and here we’re running it against some test data to see: is it actually predictive? Sometimes the model may get overfit to the data you used; if you have other data that doesn’t look exactly like that data, maybe it isn’t quite as good as you thought. You wanna do some work here to make sure that it is as predictive as it looked like it was when you were building that model.
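
A minimal sketch of that train-then-holdout evaluation using scikit-learn, with synthetic data standing in for real telemetry. The point is that the model is scored on data it never saw during training, which is what exposes overfitting:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                                                      # stand-in feature matrix
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 1).astype(int)    # stand-in failure label

# Hold out 25% of the data; the model never sees it during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

pred = model.predict(X_test)
print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))
```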

Rick

Okay, so you’re actually looking at how well this model is predicting the outcome that your hypothesis is expecting?

Mark

That’s right, is it performing as you thought? At this point we’re operating very much in the cloud, with cloud platform tooling; usually a couple of data engineers and data science people can do all this work. Here you might already have some iteration. If this is bad, you may go back and say, all right, we need to take a new approach. That can happen.

Rick

Sure.

Mark

But ideally, from here we’re moving on to: looks like we’ve got something that’s useful, now we’ll get into deploying that model. This can be straightforward if you’ve got a lot of the pieces in place, but it could involve, okay, now we’re putting sensors in the field, maybe we have some edge processing that we need to do, so we might engineer a mini data center close to that edge where the data is.

Rick

So you’re gathering more data?

Mark

You’re actually gathering more data, but definitely creating the infrastructure so that whatever feedback loops we want are available. So we put a model out there, and now what do we want to have happen? If we’re predicting tire failure, we wanna alert somebody that, hey, the tire’s about to fail, so somebody does something. So here is the work we do to get the model out there and get whatever tech deployed to do the alerting. If we’re alerting a driver directly, maybe it’s a notification immediately in the vehicle, or maybe it goes to a dispatcher who lets them know; it depends how we wanna architect it, but this is when we’re operationalizing the whole thing. And in the first rev of this, it may be a small group; you’re gonna wanna not affect your whole fleet, ’cause when you go to the field you inevitably figure out things you didn’t think were gonna happen. At this point, you’re again validating that all this stuff works in the real world.
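
A small sketch of that alerting step, scoring a prediction against a threshold and routing a message; the feature names, threshold, and notify callable are illustrative placeholders, not part of any real fleet system:

```python
from typing import Callable

FAILURE_THRESHOLD = 0.8   # assumed probability above which we alert

def maybe_alert(prob_failure: float, vehicle_id: str, tire_id: str,
                notify: Callable[[str], None]) -> bool:
    """Return True if an alert was sent for this prediction."""
    if prob_failure >= FAILURE_THRESHOLD:
        notify(f"vehicle {vehicle_id}: tire {tire_id} predicted to fail (p={prob_failure:.2f})")
        return True
    return False

# Example: route the message to a dispatcher console, an in-cab notification, etc.
maybe_alert(0.91, vehicle_id="truck-17", tire_id="t3", notify=print)
```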

Rick

Okay.

Mark

And making tweaks that will fit for your business.

Rick

So we go back to this first process, which is framing the business problem, understanding these key components, defining some targets of success that are important for the stakeholders in your organization, and then we start iterating on this loop. How long does this usually take us?

Mark

We can typically do a small run of this in a short time period. You might be able to do something as fast as four to six weeks. A lot of it will depend on where you’re at with your data. Sometimes there’s work that needs to be done to get that data in place so that can take a little longer so that data engineering piece, that will drive a lot of how quickly the rest of it will go.

Rick

In our experience, we typically run workshops to help the executives and the engineers at the organizations we partner with really understand what’s possible, so that they’re not stepping into the mistakes that typically come from inexperience and not understanding this process real well. They’ll make good choices, and we’ll help them focus on the right problem, the one with the highest value that’s ultimately gonna have the highest impact on their organization.

Mark

That’s it.

Rick

All right.

Rick

Thanks for watching this episode of CloudUp, where we focus on the coolest technology delivered on the cloud. We’d love to hear your feedback and comments on how you can use predictive maintenance in your own industry and how you’re making progress in this space. If you have questions or challenges around predictive maintenance that we can address here, leave a few comments and questions, and we’ll give you some swag. Thanks again.

Sarah

Cloudup is brought to you by Agosto, a leading Google Cloud platform partner. Like this episode and subscribe to our channel on YouTube to learn more. We would love to help you out. Visit agosto.com to learn more.

CloudUp Ep01: Chrome App and Extension Development In The Enterprise

Almost half of all Chrome users use extensions, but you rarely hear about Chrome extensions or apps for big companies. On the first episode of CloudUp by Agosto, we explore Chrome extensions vs. apps, why companies would want to create either and some of the coolest examples of Chrome extensions/apps in the enterprise today.

Meet the Speakers

Mitchell Steele

Google Chrome Sales Manager

Ray Pitmon

Solution Architect in Advanced Services

Transcript

Mitchell:

Did you know there are more than 180,000 Chrome extensions in the chrome web store?

Ray:

Yeah and that’s interesting.

And did you know that over half of all Chrome users have an extension installed?

Mitchell:

That’s crazy, that’s a really high number. I wonder why more enterprises aren’t developing and deploying their own apps and extensions.

Ray:

I also wonder that.

Ray:

We should probably talk about that on this episode of CloudUp.

Mitchell:

Welcome to CloudUp, a web series where we explore the coolest things that are being built in the cloud today. Sponsored by Agosto. So for starters everyone’s probably seen or used a Chrome app or extension, but we want to start today by talking about some of the differences. And then maybe deep dive in on why you would use one over the other. So Ray, what are some of the differences between an extension and an app? I know we use both and we use them similarly, but they are different.

Ray:

Yeah, so an extension, a Chrome extension, you can think of more like a utility. Its features are limited, and it allows quick access to some of Chrome’s functionality. So there are specific APIs in Chrome OS, and you can access some of those APIs using an extension. An extension, you’ve probably seen before, is a little icon up in the top right, so it doesn’t have a full UI. You can click on it and you get a little bit of UI, but it’s different than an app, because a Chrome app is basically a web app where you can have a full UI, basically like a webpage.

Mitchell:

Yeah I know that an extension really changes the browsing experience in some way. And also an app mimics more of a native app, an app right on the actual computer itself even though it is a fancy webpage.

Ray:

Right, right, right. And the actual APIs available are different for extensions than for apps, so sometimes you need to use an extension to do things like install certificates, things that we wouldn’t do with apps. You might have an app actually call an extension to do that, so sometimes you use the two together.

Mitchell:

Alright, so now we know the differences between an app and extension, let’s take a few minutes, talk about how they are made, maybe why you use one or the other, and why you would choose to develop an app over an extension. And also like you mentioned how you use them together. But before we do that, let’s take a minute and acknowledge our sponsor, a fancy name on this cup here, and we’ll dive into that in a second.

Sarah:

More and more, we’re seeing organizations wanna be strategic with their success, and part of that means they’re moving to the cloud. At Agosto, we’re seeing a big uptick in clients using Google Cloud Platform for their online business operations. So whether you’re thinking you need a boost with AI, machine learning, or you just wanna build something new and fast, Agosto is an award-winning partner who would be happy to help you with your needs. We’ve been Google’s Cloud Partner of the Year multiple times. We’ve got a few other awards as well. And we would love for you to get to know more about us-

Sarah:

Okay, I can’t take anymore. By visiting Agosto.com, you can get a free guide to your company’s strategic plan for heading into the cloud. We’d love to see you there. Now back to the show.

Mitchell:

Welcome back to CloudUp. Now we’re gonna dive in a little bit and talk a little more technically about Chrome apps, extensions, why we use one or the other, how they’re developed, and really what they do. So Ray, what are they written in, how do they work?

Ray:

So, an extension is basically like a web app. So you write your extension using HTML, CSS, JavaScript. You can use a lot of JavaScript libraries that you normally would use to build a web app and just about any front-end web developer would be able to use their favorite tools to build an application that is a Chrome app, or an extension for that matter.

Mitchell:

So if I’m a developer, how do I choose whether I write an app or an extension?

Ray:

So, if what you’re building has a UI to it, typically you would use an app. So think of a chrome app as a web app. And actually what Google has done is they have changed things a bit in that now you can only publish Chrome apps for Chrome OS. Everybody knows you can run Chrome on Linux, and Windows, and Mac, whatever right and obviously on Chrome devices running Chrome OS but as far as apps go, you can only build an app for Chrome OS. So Google does have a path where they describe how you would build a web app to replace your Chrome app in order to kind of have the same functionality for all the users on the other platforms.

Mitchell:

Okay, so if you have a web app there’s a way to basically rewrite that easily to be a Chrome app and then it would be more of a native experience on a Chrome device.

Ray:

Right, you would build a Chrome app probably if you needed to have access to some of the APIs on the Chrome device. The Chrome APIs give you access to the hardware, so you can access serial ports, Bluetooth, things like that, that you wouldn’t be able to access from a browser. So you wouldn’t have access to those anyway if you’re not using a Chrome OS device, so you would probably just create a web app that’s hosted by a web server instead of packaged together into an app that’s installed on a Chrome device.

Mitchell:

Can an extension have access to those things as well?

Ray:

Yes, an extension can have access to those things as well, there are different API’s that are available to extensions and apps, so in some cases you might have to use an extension along with an app, we’ve seen that where we’ve had to have an extension that uses API’s to access information on the device, and maybe launches a web app or a chrome app.

Mitchell:

And that’s a good example of how you use them both together. Can you talk a little bit more about that? You mentioned you use the extension to launch the web app. Why wouldn’t the extension just do all of it?

Ray:

Because you don’t get a full UI in an extension, right? if you’ve used a Chrome extension usually you can click on the little icon on the upper right corner. That extension will show a small UI there, but you don’t have access to a full web app. Whereas with a Chrome app, you can do a whole progressive web app where you know there’s a full UI.

Mitchell:

Other than it just feeling like a native app, what do you think the differences are between a webpage, so a web app, and a Chrome app?

Ray:

In most cases there really isn’t a difference. I think a key part to that is that you can access API’s in the Chrome OS with a Chrome app. So kind of the power of using a chrome device with an app is that you can access things that you wouldn’t otherwise be able to do so if you were just browsing through a web page.

Mitchell:

Cool. Let’s talk a little bit about some Chrome app extensions that are out there. So obviously as we said earlier there’s a Chrome app store, Chrome web store. It has over 180,000 apps and that’s growing everyday. More than 50 percent of users use some kind of Chrome extension whether it’s in an enterprise setting or personal life. Some of them are really simple, some are the things like, Google has an extension, it’s called “Office editing for Docs, Slides, and Sheets.” So it’s like simply just a way to open up a doc or an excel file in a browser and edit it, or recreate it. It also works as an extension for Google Drive.

Mitchell:

And then there are some other Chrome apps that are used out there heavily. One of them that we were using here is called “Grab and Go.” It’s an app that runs on loaner Chrome devices and just helps make sure that if I take a loaner Chrome device, it knows who I am. It uses some of those APIs to save some of that information in a cloud portal and then manage that loan, so there doesn’t need to be any IT interaction. Ray, we’ve done some custom development for some apps; what are some things that you’ve developed apps to do in the past?

Ray:

So, some of the apps that Agosto has built, one example is around certificate management. One thing about these Chrome devices that’s nice is that they’re really secure devices. Google has spent a lot of time on the security of these things, and so enterprise users, or enterprises, wanna have a way to enroll these devices into their

Mitchell:

into their Microsoft CA.

Ray:

Yeah, that would be one thing yeah. So we’ve built an app that allows for a white glove implementation. So imagine that you’re an enterprise and you’re ordering a bunch of these Chromebooks for your employees. We built an app that allows users to log into their Chromebook and it automatically downloads the certificate that allows those Chromebooks to connect to the corporate wifi, and it’s kind of a seamless process so-

Mitchell:

Is that a user cert, or is that a device cert?

Ray:

Actually, it depends on the implementation. In most cases it ends up being a user cert; it’s installed on the device, though, and it uses the TPM, which is a crypto chip built onto the device, to sign certificates.

Mitchell:

Okay. So it’s injecting that cert right into the TPM chip.

Ray:

Yup, so that’s an example where you use some of the Chrome APIs to do that, obviously, and you can only do that with an application or an extension.

Mitchell:

Okay. So can something like that be done in both apps and extensions, or does it have to be one or the other?

Ray:

It actually uses a combination of both in that case.

Mitchell:

Okay.

Ray:

So in one of the enrollment apps we’ve built, there’s a UI to it that might ask a user, if it’s like a retail situation and a user gets a new device, they’ll go to log in and it might ask them for a store number, their name, or some other information. Since there’s UI there, we built an app for that, and then when they click “go,” it launches an extension. The extension does the certificate stuff.

Mitchell:

So that’s another good example how extensions and apps work together. So, if we developed this extension, how do we get it to devices? So if I’m a developer at my company and I put together and built this extension to do certs, how do I get to all these devices, get it out in the field?

Ray:

There’s the Chrome Web Store, so if you’ve ever downloaded a certificate, or if you’ve ever downloaded an app or an extension, you’ve gone to the Chrome Web Store. One thing Google has done is integrate the Chrome Web Store into this kind of enterprise admin console that they have. So as an admin, you can configure all your Chromebooks, or some part of them, or users specifically, so that when they log into the Chromebook, it’ll automatically download whatever extensions and apps you’ve defined to allow them to download.

Mitchell:

Okay. So if I build all these extensions for my organization, does everyone that has access to a Chromebook, consumers included, have access to that app?

Ray:

Not usually. You can make it that way, but you have your own private web store.

Mitchell:

So there’s a web store specific to my organization.

Mitchell:

How about apps and extensions that I don’t want my users to have access to, what happens then?

Ray:

Oh you can blacklist apps. You can white list apps. So you have full control over what can be installed on a Chrome device.

Mitchell:

So, essentially I can make my own web store for my organization, maybe put some things out there that users can choose themselves to go download, like maybe have a small store just for my company where they can choose, and then I can force-install others directly onto devices?

Ray:

Yup. You can obviously add any of the apps from the public web store, so you may have a combination of apps that already exist, and then if you’ve created your own apps, of course, you can add those to the private web store as well.

Mitchell:

Okay, that’s cool. You know, I know we’ve also got this other mode on Chrome called “kiosk mode.” Let’s talk about that a little bit. Obviously kiosk mode locks down a Chrome device, and it could be any kind of Chrome device, a Chromebox, a Chromebase, a Chromebook, a Chromebit, to one application or extension, and it’s really used for one purpose. Let’s talk about how that works with apps and extensions really quick.

Ray:

Sure, sure. So kiosk mode is really cool, because what you do is, again in the admin console that Google provides, you can configure a device, by its device ID, to start up in kiosk mode, which means that nobody has access to log into it; it automatically starts an app that you choose. So you configure this device to launch this app when it boots up. And that app in a lot of cases may not have a UI, or it might have a UI associated with it. So if it’s like a touchscreen device, people will create apps that are interactive, but they don’t have to worry about people trying to go to a webpage or do other stuff, because it’s locked. It’s locked to that app.

Mitchell:

Got it.

Ray:

There are several apps out there that do that. We actually built an app called “Skykit,” a digital display product, same thing basically: with Skykit you get a Chrome device configured to launch the Skykit app, and it really can’t do anything else.

Mitchell:

And I know two other use cases, a couple other customers using this. One of them is using a kiosk app as a deli ordering system at a grocery store, so I walk in and there’s a Chrome tablet there that’s already launched into an app that they developed. I can place my deli order and it sends it off, and once I’m done it’s ready for the next person. Another one I see a lot is time clocks. People lock down just the web page for a time clock, so when an hourly employee comes in, they punch in their ID or scan or whatever it might be, and it’s still just locked to that one app.

Ray:

So kiosk mode, from what I understand, Google has actually created a wrapper application that makes it easy for you to wrap a web app in kiosk mode. How does that work?

Mitchell:

Yeah, so there actually is one. We always call it a kiosk wraparound; I know it has a proper name, but essentially it’s a small extension or app that lets you plug in a URL for a web app, and as long as everything you need to do is within that one URL and there are no redirects or anything, it’ll create a web app for you relatively simply. That way, like Ray talked about earlier, you can plug it into your private Chrome Web Store and push it out to devices. This is a great way to handle something like the time clock application we talked about: if that’s a web-based app that you’re using now, it’s a great way to set it up. It becomes full screen, you don’t see any bookmarks or anything like that, so it’s a really simple way to just convert a web page into an app that you can force-install in kiosk mode.

Ray:

That’s pretty cool.

Mitchell:

That’s all the time we have for today, thanks so much for watching this episode of CloudUp. If you’ve got more questions, feel free to leave them in the comments. We would love to hear from you so tell us what some of your favorite app and extensions are. If you’ve built one for your organization, we’d love to hear about it, we’d love to hear some of the challenges, or maybe things that you figured out on the way.

Ray:

And if you’re a developer and you’re interested in building an extension or an app or just trying to get started and you have any questions, let us know.

Mitchell:

Yeah, you could actually win a CloudUp swag bag. So thanks so much for watching, we’ll see you next time.

Sarah:

CloudUp is brought to you by Agosto, a leading Google Cloud platform partner. Like this episode and subscribe to our channel on YouTube to learn more. We would love to help you out, visit Agosto.com to learn more.