Azure Service Fabric: The road ahead for microservices (Build 2018)


>> Time has passed so quickly. I'm Mark. We are here to talk to you about what is coming with Service Fabric, all the work we have been doing to make it a great distributed-systems platform, our focus on microservices, and the road ahead for us.

We have seen tremendous growth in Service Fabric over the last year. We have thousands of customers running on our platform. It is interesting how the conversation has changed from three years ago when we launched Service Fabric; now it is "how did you build these things? How do you use it all?" We got over the hump of understanding what a microservice is. It is accepted now that this is how you are going to build your services. Service Fabric is something we built from the ground up so you can build scalable systems out of small pieces of code. When you look at our platform, we have a developer experience out of the box, but of course you can just run an executable, or run containers in here, and do your scale-out on the underlying cluster.

One of the most exciting things is the release that's just gone out, the 6.2 runtime release. We now have all the pieces in play: you can build .NET Framework applications on Windows, Java applications on Linux, and .NET Core applications on both Windows and Linux. We announced Windows container support at Ignite, and you have been able to run executables already. With the current release, all the pieces are in play: you can mix and match and build an application consisting of containers, the reliable services and runtimes we have provided, or your own executables, and put them together on one platform. We are pretty excited about where we are, and we continue to push out innovation with each release on a regular cadence. We put out releases, running inside Azure and standalone, every two or three months.
It is not only our customers but our own services. We showed the list of services last year that all build on Service Fabric as the core platform, and the growth of these services has been tremendous. We have added more in the last year: everything from Azure Container Service to Event Grid to databases like Azure Database for MySQL. We see new services launch all the time; we even run Surface Hub. We are very used to scale and to dealing with how you run large-scale services, and Service Fabric is becoming the de facto platform on which we build most of our new services within Azure.

Another exciting thing we did is open-source Service Fabric. Hopefully a lot of you have heard about this, and hopefully it is an exciting piece of news to you. A couple of months ago we open-sourced Service Fabric and made the source code generally available. We are going to develop that into an open-source development model within the next couple of months.
>> We are still working on that, but we are committed to it. It is coming.
>> Service Fabric is a key foundation of the platform we use to build our own services. We bet our own company on it, so we have to make sure it runs well. The great part is we give it away for free. There are two products. There is the on-premises product: you take it, install it on a set of machines, network them together, and in under a minute you have a cluster running across those machines. And of course we have Service Fabric inside Azure, where you get a managed cluster, we take care of upgrading the runtime and helping you there, so you basically don't worry about hardware management and effectively just manage a cluster.

One thing we have seen a lot is people who come up to us and say: instead of me digging around and finding all the packages I need to install locally, can you provide something that helps me configure my local cluster more easily?
One thing you'll see launch in the next few weeks is the ability to describe a JSON manifest. Think of it like an ARM template: it describes the list of machines, given their IP addresses, and other configuration you want in there, like deploying a certificate across those machines or uploading something to Azure. We'll look at your JSON manifest and the type of deployment you wanted, a Linux deployment or a Windows deployment, and we'll download the relevant packages you need for the best configuration for the environment you want onto your local machines and configure it. You can stand up a local cluster by simply uploading a definition, and we'll pull down all the packages and build a cluster for you, all with a single PowerShell command that hooks up into Azure. It will make it much easier for you to configure that local cluster. Now you can walk away and say, I'm going to take the freebie and use it myself.

Now you can go one step further. People who run standalone clusters usually have Azure clusters too. Because you are running in Azure, we can have a single portal experience: we can see your standalone clusters as well as your Azure clusters, all managed through a single portal experience inside Azure. From there you can do upgrades, queries, and certificate management across both environments, which we think is going to be super cool and will help you manage your environments a lot more easily than you do today. This is coming, and this is how we are going to deliver Linux on-premises to you. If you have been wondering how you are going to get Linux on-premises, we are going to deliver it through this mechanism, and you'll see it in a few weeks.

We still see the fact that there is the word "cluster" in here. Clusters still account for the majority of our support calls. When customers phone up they ask: how do I scale up my cluster? How do I manage certificates in my cluster?
There is still a lot of decision-making you have to do in terms of how you manage your cluster and its operational side. There are decisions around the operational space as well, like setting up a gateway that routes to all your services efficiently. We look at these challenges and think: how can we deliver a better Service Fabric experience than we do today, one that takes away more of that pain?

We are going to launch Azure Service Fabric Mesh, which is a new service. [Applause] You simply build an application and give it to us to run. Now we, Microsoft, stand up large clusters of machines, thousands of machines, and you just have all the fun of building applications, deploying them, and running them. In the standalone world you have to think about hardware, your cluster, and your apps. In the Azure dedicated-cluster world today you don't have to worry about your hardware, but you do have to manage the cluster itself and your applications. With Service Fabric Mesh you just run applications and scale on demand.

Here is how we think about it. You still have all the capabilities you expect from building Service Fabric applications from a microservices perspective, but now you have a serverless infrastructure approach. You don't see any VMs, you don't see any networks. Everything runs inside containers, and all applications run inside their own isolated network. It is a multi-tenant environment where you take applications and run them inside Service Fabric Mesh, and we take care of many capabilities for you, like how messages get routed and how certificate management is handled.
You'll see as we go through this presentation that, because we have introduced this serverless offering, where you only run the applications you want at scale, we have introduced all sorts of capabilities into Service Fabric itself that actually benefit the clusters themselves, across all the products. We are super excited by this, and we are going to show you a demo right now, because there is nothing like getting to a demo early.
>> I'm on 7.
>> You are. Go.
>> Okay, so what did you expect? You do "az mesh". What do you see? You see the ability to do application deployment, and the ability to deploy a service. Think of a Service Fabric application as a resource type inside Azure now; a service is a first-class resource type inside Azure. I can describe an application and a set of resources, simply upload those into Azure, and deploy an application consisting of a set of services, just like you experience today. If I do "az mesh app list", you see that I actually already have two deployed applications today.

How do I deploy these applications? I do an Azure Mesh deployment and upload a new JSON template. The "az mesh deployment" command talks to our resource provider inside Service Fabric Mesh and deploys a new application based on this ARM template. While that is deploying, let me show you what it looks like. As you'd fully expect when you deploy something through ARM, you see an ARM resource type. Here we are: this is the first application I'm deploying here, hello world. You see, just as you'd expect with everything else, a set of resources here. I have a Service Fabric application type. What's inside the service? Inside the service you have a service name, the OS type it runs on, and a set of code packages.
This hello world service gets deployed as a single instance. If I look inside the code package, you see that it deploys a single container image here, a hello world Windows Server image, for the application I'm deploying. Everything runs inside containers. The code package that gets deployed has a particular name, and you can see I open up an endpoint here with a hello world listener for this particular image.

I can then set resources inside here. This is how you set the amount of resources you'll run: I want to run this particular container with one CPU and 1 GB of memory, and that is all you'll pay for. At any time you can change these and redeploy.

At the bottom you see there is a network reference. What does that mean? If I go to the top here and open up these resources again, you see that I have defined a Service Fabric network here. What does the network do? Well, all the services that run inside the application run inside their own isolated network. I have told this network to open up a port through the Azure load balancer for the hello world service, for this endpoint, so I can access this particular container image. I now have an application consisting of a single service, with a port opened up through the Azure load balancer.

Meanwhile, while I have been talking, you see this application has been deployed. I'm going to do my "az mesh app list": I have two hello worlds. Now I can do "az mesh network list", and you'll see that alongside these services and applications I have deployed a set of networks as well. This particular application I deployed inside the app10 resource group is this one just here, and you'll see it opens up this public endpoint. Let me grab it, go off into your favorite browser, type it in, and tada!
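To make the structure being described concrete, the ARM template behind a hello-world Mesh deployment might look roughly like the following. This is a hedged sketch: the resource types, API version, image name, and port here are illustrative assumptions modeled on the walkthrough above, not the exact template shown on screen.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.ServiceFabricMesh/networks",
      "apiVersion": "2018-07-01-preview",
      "name": "helloWorldNetwork",
      "location": "[resourceGroup().location]",
      "properties": {
        "ingressConfig": {
          "layer4": [
            {
              "publicPort": 80,
              "applicationName": "helloWorldApp",
              "serviceName": "helloWorldService",
              "endpointName": "helloWorldListener"
            }
          ]
        }
      }
    },
    {
      "type": "Microsoft.ServiceFabricMesh/applications",
      "apiVersion": "2018-07-01-preview",
      "name": "helloWorldApp",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[resourceId('Microsoft.ServiceFabricMesh/networks', 'helloWorldNetwork')]"
      ],
      "properties": {
        "services": [
          {
            "name": "helloWorldService",
            "properties": {
              "osType": "Windows",
              "codePackages": [
                {
                  "name": "helloWorldCode",
                  "image": "helloworldserver:latest",
                  "endpoints": [ { "name": "helloWorldListener", "port": 80 } ],
                  "resources": { "requests": { "cpu": 1, "memoryInGB": 1 } }
                }
              ],
              "replicaCount": 1,
              "networkRefs": [
                { "name": "[resourceId('Microsoft.ServiceFabricMesh/networks', 'helloWorldNetwork')]" }
              ]
            }
          }
        ]
      }
    }
  ]
}
```

Deploying it would then follow the shape of the demo, something like `az mesh deployment create --resource-group app10 --template-file helloworld.json` (flag names may differ in the preview CLI).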
That is your first hello world application running in Service Fabric Mesh. [Applause] An awful lot has happened here, but think how simple it is now. Literally, you have just written an ARM JSON template file that describes the application and its services, and you can combine this with all the other ARM resources. I can show you how you change this, redeploy, scale out the number of instances of your container, change the definition of your app, and many other things. This is your hello world application running inside Service Fabric Mesh.

Let me show you this here. You can look inside the list of resources. I can do "az mesh service list", and if I do this for that service, you'll see that for this single hello world application I'm running, there is a service running in here, with full query capability over the hierarchy of the application and its set of services. Pretty cool, yeah?
>> So Service Fabric Mesh, as you can see, is focused on deploying applications over infrastructure. And it is not just Service Fabric Mesh where we are talking about focusing on applications; it is Service Fabric in general. Service Fabric Mesh is a way to write applications and run them, but Service Fabric is evolving to be an even more application-centric platform that runs anywhere, including in Azure and on Mesh. There are different kinds of applications you can write.
>> We see two major scenarios. One scenario that happens frequently now is that people take existing code and effectively modernize it: they package it in containers and deploy it into environments where they can scale out, modernize the existing application, and build new services on top of that. Last year, in this same talk, we showed a lot about modernization.
You'll see it in a lot of talks here at Build.
>> Even though Mesh is serverless, we want to enable you to bring your existing workloads and run them on Mesh without having to worry about VMs or the infrastructure underneath. Then we are talking about cloud-native applications: applications designed to run on the cloud or within Service Fabric, and we are trying to make that easier and more generic for you. [Indiscernible] is something we are looking into, where you can bring in any language and any service, enhance those applications with our built-in state stores like reliable collections, and then interconnect all those services through intelligent traffic routing, without having to do your own service discovery and traffic routing yourself.

What this means for us is that there are a couple of changes we have to make to the way applications are structured, developed, and run, in order to be able to run across all these environments, everything from your local laptop all the way up to this serverless environment running in Azure. So we are introducing this new concept of Service Fabric resources. Resources are basically individual, decoupled things you can deploy to Service Fabric.

Now, this stands next to the way you're used to writing Service Fabric applications today. There is a spectrum: application and service manifests give you the full gamut of control and integration, while at the simplicity-and-portability end Docker Compose is the most portable, since there is no Service Fabric in it anyway; it is just Docker Compose files. In the middle is this land where I can write something that will run anywhere, on any environment, and the only thing you really have to change is the level of integration your services have with the underlying runtime.
For those of you who are developing on Service Fabric today with reliable services, for example, you are used to having lifecycle events and being deeply tied into the runtime, to the point where your services can actually hold up the runtime: if there is a runtime upgrade rolling through and your services need more time, you can actually hold it up. In a shared, multi-tenant environment, that won't fly. Everything has to run in containers in this multi-tenant environment, because it is a shared environment and you need that level of isolation. If you go to the bottom of the stack and you are doing application and service manifests, you still have full control, and that is still fully supported and always will be. You'll be able to have full control of the platform, be integrated with the runtime, and do all the interesting work you can do today. But this middle land is geared towards just focusing on the application, not managing the infrastructure or being involved in the infrastructure lifetime. It is a simplification.

So let's talk a bit about what Service Fabric resources actually are. This is just a way to say that anything you deploy to Service Fabric is considered a resource, and they are all individually deployable. Today that is mainly just your applications and services, and you can still do that. On top of that, there are additional resources you can deploy, like networks, secrets, and volumes. These are all shareable across applications: you can share a network and deploy applications into that network, or put up a secret and have other services access it. This opens up the playing field for us: how I want traffic to be routed, scale-out rules, and all kinds of other stuff. We can add these new resources and update existing resources without having to change a central schema, either. This is a new way of doing things. When you write these resources, they are simply YAML or JSON documents.
This isn't just a concept in the new Mesh service; this actually works anywhere Service Fabric runs. It is just deployable to your endpoint like you always do. When you are in Azure and deploying to Mesh, you are authoring ARM templates, Azure Resource Manager templates, and those get deployed to the Resource Manager endpoint. It doesn't mean you have to take all your resource files and convert them to ARM templates or service manifests; the idea here is to unify all that, so that the person writing a Node.js app doesn't do anything different from the person writing a .NET Core app.

That includes the APIs and libraries you use. Instead of having frameworks that lock you into the platform, we don't do that anymore; we give you libraries to use. You pull these libraries in and they give you the functionality you're used to, like reliable collections and client APIs, so you can interact with the cluster, or in the Mesh environment interact with your applications to manage them. This is a bit of a departure from what you see today.

What I'll do now is switch over to Visual Studio and show you what this looks like. I'm number eight. Thank you, sir. Here is a simple application in Visual Studio. If you are a Service Fabric developer this may look familiar: this is our quick-start application, the voting application. What I have on the screen is the service resource. This is what a service resource looks like, describing a service that is going to run. You notice a couple of things. A lot of the Service Fabric pieces you are used to are still there. There are still code packages, and each code package just defines an image, a container image to run. This is what I meant when I said that regardless of what language, platform, or framework you're writing in, you describe the service the same way; you don't have to add extra attributes or tags to get it to run. It all looks the same.
Every code package describes one container and what the container needs to run: environment variables, resources, endpoints, et cetera. You can have as many of these as you want, and they'll run together in one service. I have a network ref, which references a network description. This is where you decide how you want to set up the network for this application and how you want traffic to come in. When you deploy this into Azure Service Fabric Mesh, it will automatically configure ports so you don't have to mess with the load balancer; you can just deploy it.

These are plain vanilla applications. If you have done reliable services in the past, you'll notice this entry point to the program is just what you'd normally expect. There is no Service Fabric runtime, no service type registration; all of that stuff that you had in your code that immediately tied you to the platform is gone. You don't have to worry about that anymore. I can take the same application and run it anywhere else. I can run it on my local dev box if I want. You are no longer tied into Service Fabric the way you were before.

The other thing you'll notice is that the concept of service types and instances isn't really there anymore. That is something we have sort of abstracted out. If I go back to the service resource, you'll see that what this defines is just how the service is going to run. It is not really defining a type from which you'd create instances. If you don't know what I'm talking about, don't worry about it, because it is not there anymore anyway. The whole point is that it is simpler: much simpler, easier to describe and run.
>> It is all about making life simpler for developers.
>> It really is, yes. Let me just hit F5. I'm going to run this on a local cluster; this is just Service Fabric. I want to show you something here with these two applications, or services.
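As a hedged sketch of what a service resource like the one on screen might contain (the schema version, field names, and image tags here are assumptions modeled on the description above, not the shipped tooling's exact schema), a YAML resource for the voting front end could look like:

```yaml
# Illustrative service resource for the voting front-end service.
# schemaVersion, field spellings, and image names are assumptions.
application:
  schemaVersion: 1.0.0-preview1
  name: VotingApp
  properties:
    services:
      - name: VotingWeb
        properties:
          description: Voting front-end service
          osType: Windows
          codePackages:
            - name: VotingWeb.Code
              image: votingweb:dev            # one container per code package
              endpoints:
                - name: VotingWebListener
                  port: 8080
              environmentVariables:
                # the front end finds the back end through plain
                # environment variables, not a discovery API
                - name: DataServiceName
                  value: VotingData
              resources:
                requests:
                  cpu: 1
                  memoryInGB: 1
          replicaCount: 1
          networkRefs:
            - name: VotingAppNetwork          # shared network resource
```

Note there is no service type or instance declaration here, matching the point above: the resource just describes how the service runs.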
If you look at how the services communicate, it is just grabbing environment variables to figure out the host name. There are no discovery APIs. This ran as-is outside of Service Fabric and should run the same way internally. I'm going to come back to this later. This is your typical "I need to put state somewhere", so it is putting state into a .NET dictionary here, which means it is going to go away as soon as the application shuts down. We'll come back to that in a bit and show you other ways we are providing to store state.

I'm going to put a breakpoint in here, and we'll see if the application works. I'll trigger an action, and it should hit my breakpoint. That works the way I expected it to. So, big deal, right? Here is the cool thing about this: the breakpoint I just hit is actually running inside a container. When I hit F5, that created a container image out of my application, put it into a registry, and spun up the container. My breakpoint is running inside that container image, and the whole thing took about 30 seconds. It is pretty fast.
>> They did an amazing job on the speed of debugging inside those containers.
>> The speed of debugging inside the containers is now super fast. All right, let me close this guy down. Now back to PowerPoint. Tell us more about some of these interesting resources.
>> One of the things we hear all the time is: how do I manage certificates and secrets at the application level? We integrated a Key Vault extension into our cluster deployment. Now we download the set of certificates across the cluster machines, and you can do auto-rollover inside the cluster itself, with things like common-name support so you can roll over those certificates for cluster security. But all the time we hear people say: I need certificate and secrets management at the application and service level. So this is what we have built.
We built a new service into Service Fabric called the Secret Store service, which allows you to manage certificates and secrets at the service and application level. This gives you a couple of other key advantages. The first is that it provides what you need inside Azure: managed service identity at the application and service level. Because I have managed service identity, I can go off and get other keys from Key Vault and do things with them. That is one of the key aspects you'll see. The secret store also means that when you're running in an on-premises location, not attached to Azure and Key Vault, we can store all your keys there securely and manage your certificates as part of that. The secrets you manage can be inline certificates you provide, or ones you pull from Key Vault. Effectively, this means the service can take its identity and register it, and now we can reach out to Key Vault, say "give me the set of secrets I want", and use that to authenticate with services inside Azure.

What you'll see now is that you can define these secrets as a resource type, independent of using them in any particular application or service. And just as you saw how I referenced that network inside the application, I can now reference these secrets if I want to access some other Azure resource, like a Cosmos DB. All of a sudden your certificate management is taken care of for you; the auto-rollover of those secrets is done for you. You just have to care about writing code, referencing the secrets, and talking to the back-end service you want. It is cool having this as a core part of the platform. As we keep saying, this is built into the platform, so you get it in all versions of Service Fabric: not just Service Fabric Mesh, but inside the existing clusters too.
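As a rough sketch of the pattern just described (the resource shape and the secret-reference syntax here are assumptions, not a published schema), a secret could be declared once as its own resource and then consumed from a service's code package:

```yaml
# Illustrative secret resource; the value could be an inlined
# (encrypted) value or pulled from Azure Key Vault.
secret:
  name: CosmosDbConnectionString
  properties:
    kind: inlinedValue             # assumption: or a Key Vault reference
    contentType: text/plain
    value: "<encrypted-value-or-key-vault-uri>"
---
# Inside a service's code package: consume the secret as an
# environment variable via a hypothetical reference syntax.
environmentVariables:
  - name: COSMOS_CONNECTION
    value: "[secret('CosmosDbConnectionString')]"
```

The point of the split is the one made above: the secret is managed and rolled over by the platform, and the application only holds a reference, never the raw value.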
The other thing is that we have spent a lot of time on state. It wouldn't be a Service Fabric talk if we weren't talking about state, because we love state as the orchestrator; we love to make sure we deal with state. One of the most common ways you deal with state is that you are just writing file I/O inside your code: you are doing I/O operations writing out to your local disk. What we wanted is this: you have seen that we deploy all those containers; now you can hook up volumes to those containers, with volume drivers of different types.

We have built two types. One is for Azure Files: your service can do file I/O operations and it will persist that data into Azure Files storage for you. The other is a local Service Fabric volume driver: when you talk to the volume, we write the data out and replicate it with Service Fabric storage, just as we do today. Here you see it as an attached volume. Using the local replicated disk you get low latency and independence from network storage.

Just to show you how this is already a part of the platform: we have shipped a preview of this that you can go download right now and use with your current deployments, with how you have deployed your applications with the current model. It is a great example of how we are bringing these capabilities across all versions of Service Fabric, because we want you to be productive with volume disks whether the container is running in what you have today or within Service Fabric Mesh. Cool stuff. We are going to look at how we push more of these volume drivers out.
>> Let me show you what this looks like. This is really cool; this is all really cool. Going back to the application we were just looking at, where I said we are storing data in a little dictionary, and that is stupid.
What you can do is take that exact same application; it's the same one in the back end here. I have EF Core going, and I'm storing data inside EF Core instead of a dictionary. In this example we took EF Core and backed it with SQLite, and we are telling it to store your data in this database file in this directory. Ordinarily this would put a file into your container, and when you tear the container down you lose the data. You don't want that. So instead you go back into the service resource and say: I want to set up a volume and mount it at that path. Whenever someone writes files to that path, they are written to a mounted volume. In this case that volume is the Azure Files storage volume, and here I can set that up. Now everything I write to that data directory is stored on Azure Files.

Or you can put in our Service Fabric volume disk, if you're not running in the Azure environment (you can do this anywhere Service Fabric runs) and you don't want to have to manage an Azure storage account. Then you can back it with our reliable, replicated storage instead. Either way you want to do it is fine.

You can also see that in this volume we are referencing a secret, because I obviously don't want to put plain-text keys in here. This is the secret Mark was talking about. In this case I have just inlined the secret: I have encrypted it and inlined it. The better option is to store it in Key Vault and load it at runtime, and you can use it that way. That is how the volume thing works. There are a lot of applications you can build with this. I think there is another talk tomorrow that Anthony is doing that will show you a little more on this topic, so make sure you check that out. We'll tell you more about that in a bit. If you want to switch me back here real quick: how about diagnostics?
>> We have done a huge amount. In the 6.2 release we just pushed out, we did a huge amount for containers.
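Pulling that demo together, the volume and secret wiring might be sketched like this; the provider name, parameter fields, and paths are illustrative assumptions rather than the exact preview schema:

```yaml
# Illustrative volume resource backed by Azure Files, plus the mount
# inside the service. Field names and paths are assumptions.
volume:
  name: VotingDataVolume
  properties:
    provider: sfAzureFile               # Azure Files volume driver
    azureFileParameters:
      shareName: votingdata
      accountName: mystorageaccount
      accountKey: "[secret('StorageAccountKey')]"   # secret reference, not a plain-text key
---
# Inside the service's code package: mount the volume at the path the
# EF Core + SQLite database file is written to, so the data survives
# the container being torn down.
volumeRefs:
  - name: VotingDataVolume
    destinationPath: C:\app\data
```

Swapping the provider to the local Service Fabric volume driver, as described above, would keep the same mount while backing it with replicated local storage instead of an Azure storage account.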
You see that in the logs for the containers, and we have continuously provided you with the ability to take diagnostics and push them to Application Insights. We can push them to local disk, or push the container and Service Fabric events into Application Insights to view them inside the Azure portal. What you'll also see inside Service Fabric Mesh is that the underlying host itself has agents running inside it that capture the container metrics and container events and push them out into Azure Monitor. You'll see container up and down events, and you'll see the metrics for the running containers: memory usage, CPU usage. From a portal experience you'll have a view across your container images: Application Insights for the application-level events, the container diagnostic events in there, and Azure Monitor for the runtime events from the underlying platform itself. We are very keen on making sure that you get rich diagnostics around all these things and can see the state of the application running inside there.
>> Reliable collections. What would a Service Fabric talk be without reliable collections? We are doing cool stuff here. We have done a lot of restructuring in reliable collections to enable this vision of [indiscernible] that can run anywhere. We have separated a lot of the reliable collections code out from the runtime into separate libraries, and then provided different language bindings on top of that, so you get APIs to work with these data structures in a bunch of different languages. By separating it out, we can do a lot of cool things with it.
For example, here is another thing. If you have written code in Service Fabric before using reliable services, you know that when you write a stateful service (and we have done an okay job of abstracting the stateful stuff out) you are still writing stateful code, basically. You are still part of the runtime lifecycle, attached to the runtime as a stateful service with stateful replicas: we need to tell your service when it is going to change from a primary replica to a secondary replica. A lot of that is done for you, but your code is still in the path, which means, for example, that if you don't honor the cancellation token you are given, the system gets held up.

By separating reliable collections out into this library, we are able to take your service code completely out of that lifecycle. It means you are still writing stateful services and co-locating your data with your compute for in-memory reads, but the way you write your code now feels stateless. When you write your code, it feels like a regular old stateless app. You're not necessarily inheriting from a stateful service base class; you're letting reliable collections do that for you.

The other thing we were able to do is give you transactional storage anywhere you run. Even if you're not running on the Service Fabric runtime, you can still use reliable collections and get local, persisted, transactional data structures. You can run an application with reliable collections without even having the SDK installed on your machines. You don't even have to install the SDK; you can deploy it anywhere, and it just works that way. Of course, when you do end up deploying it to Service Fabric, you get replication for high availability and partitioning for scale-out, because the platform manages that part for you. It is the same code; you don't change your code at all.
You run it somewhere else and you get replication for high availability and partitioning for scale-out. I'll show you what this looks like real quick, if you want to switch me over one more time. Yeah. Okay. What I have here is — I just took that back-end service, just the data service from the voting application we were looking at, and isolated it into its own solution. You see the code on the screen here. This is the reliable collections code being used in that ASP.NET Core controller. This is just a Web API controller. We have done a few things. We have changed the APIs around a bit, so we do the work of creating transactions for you and wrapping your code inside a transaction context, and when you get errors we can handle that for you. You don't need a giant try/catch block to handle every exception on the face of the earth. We do that work for you, so your code is very simple in this case. This is just a standalone ASP.NET Core app. If I debug this, it doesn't deploy to Service Fabric. This is just a console app. I can hit my breakpoint in here, and I'm actually using reliable collections with no Service Fabric runtime underneath. This is something I can do, for example, if I'm debugging an application, or if I want to write something up real quick, or maybe I don't know where I'm going to run it and I don't want to put any Service Fabric resources in it. I can run it on its own like that. When you do take that same application — let me show you what this looks like if you take the entire application we had before, pull out the in-memory dictionary, and put reliable collections in. The code is the same as what I just showed you. I don't need to change any code, but I have added these resource files to describe how to run it in a Service Fabric environment, wherever that is — whether it is on Service Fabric Mesh, or on-prem, or in Azure. It doesn't matter. It is the same code. This is a stateful service with reliable collections backing it.
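The transaction-wrapping idea described above — the library creates the transaction, runs your code, commits on success, and rolls back on error, so you never write the try/catch boilerplate — can be sketched in a few lines. This is a minimal illustrative sketch in Python, not the actual Service Fabric reliable collections API; the `TransactionalDict` type and `transact` helper are invented for the example.

```python
# Illustrative sketch only -- NOT the Service Fabric reliable collections API.
# A tiny transactional dictionary: transact() runs your code against a working
# copy, commits the copy on success, and discards it on any exception, so the
# caller never writes the try/except boilerplate.

class TransactionalDict:
    def __init__(self):
        self._committed = {}          # durable, committed state

    def transact(self, work):
        """Run `work(view)` against a working copy; commit only on success."""
        view = dict(self._committed)  # working copy = the "transaction"
        result = work(view)           # any exception propagates; copy is dropped
        self._committed = view        # commit: publish the working copy
        return result


votes = TransactionalDict()

def add_vote(view):
    view["cats"] = view.get("cats", 0) + 1
    return view["cats"]

print(votes.transact(add_vote))       # 1
print(votes.transact(add_vote))       # 2

def bad_update(view):
    view["cats"] = 999
    raise RuntimeError("boom")        # transaction aborts...

try:
    votes.transact(bad_update)
except RuntimeError:
    pass
print(votes._committed["cats"])       # ...so committed state is still 2
```

The point of the real library is the same shape at a much larger scale: your handler code only sees the "work" function, while transaction creation, error handling, and (when running on the platform) replication happen outside it.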
You can see it is just a regular old service. The only thing I have done is say: use reliable collections in my ASP.NET Core app. These are just libraries you pull in, and by doing so you get access to reliable collections. When you run it on a local machine or anywhere else, it is replicated for you by the platform underneath. Let's run it. Again, as you can see, these are also running in containers. This is reliable collections being replicated inside a container with a debugger attached to it at the same time. This is cool stuff. When this application loads, it is the same app, but now replicated and highly available. There it is. We should be able to hit our breakpoint here as well and get into the reliable collections code. Now I have my transaction. As a programmer, it's the same code, just running in a different environment, with all the additional benefits of the platform underneath it. Want to switch me back?

>> That is super cool.

>> Not bad, huh? Now, about how you describe services: how do services talk to each other? The goal is for this to be as simple as possible. The idea is that if I have a polyglot application made up of services written in all different languages, they should all still be able to talk to each other using simple DNS-based lookups. The clients I use should just be the regular old vanilla clients — whatever the language provides. I shouldn't need to pull in an extra library to do this. I should be able to take any code, put it on Service Fabric, and have all the discovery mechanisms and everything just work. That is the idea. That is what's behind this. Very, very simple. You shouldn't have to implement any platform-specific discovery APIs. If I have to talk to your discovery API and then deploy my code somewhere else, it is going to stop working. From a service-code perspective, it shouldn't look like there is a discovery mechanism underneath at all.
On top of that, services should never have to deal with network errors, or be coupled to the fact that there is a network underneath, or to the way it is architected. Finally — and this is super important — when service A calls into service B, I shouldn't have to know anything about the way service B is implemented. I don't care if it is stateless or stateful. I don't care how it is partitioned. I don't care what version it is. It shouldn't matter to me, as the caller, what your implementation details are. These services should be agnostic to those details.

>> The number of questions we get on this particular topic each week is unbelievable.

>> It is hard to do. It is a difficult thing to do today, and it is something we are trying to fix. The way we are doing this is we are partnering with the people working on Envoy to bring Envoy into Service Fabric. What this means is that when your services are deployed into Service Fabric, they think they are talking directly to each other, but requests are actually being routed through Envoy proxies, and those proxies are configured by resources — rules that you upload into a control plane, which delivers them to the proxies. We have built the Envoy APIs on top of our own service discovery mechanism, the Naming Service, so it can feed information about the cluster and about the location of all the services into this network of proxies, and the services themselves don't have to deal with any of those details. In this case, you get advanced HTTP traffic-routing rules, so your ingress routing can set up rules to say: I want siteA.com to go to service B. Normally you'd have to write that yourself on the front end; you no longer have to. Same with partition resolution. When a service wants to talk to another service, and that one happens to be partitioned, how do you know which partition to talk to?
That should be an implementation detail of the upstream service, not the client or caller. The caller shouldn't care. The upstream service tells the proxies: here is how I'm partitioned, and here is how I want you to send data to my partitions. All you do is say: this is the piece of information I want you to pull out of the request. The proxies do the work of hashing that for you. You don't have to worry about how to partition your data. All of this works nicely. You can still go back and configure it yourself; that capability is always there. But if you are an application developer who doesn't care about that stuff and you just want your services to scale, this lets you do that without all the additional work. This brings us to the future of Service Fabric, which is basically a set of polyglot services in any language, plus additional resources to enhance those applications — reliable collections for state storage, or writing to a volume-backed file that is backed by reliable collections — all interconnected through an intelligent network and —

>> All of this happens across all versions of Service Fabric. So you can think about this with your existing clusters running today. Everything we talked about can be available inside there. You have your standalone clusters available, and Service Fabric Mesh, where you don't have to worry about any cluster management or hardware configuration. You just enjoy being a developer, writing code and building inside Visual Studio and any other tools we support, such as VS Code, to build those simple files that get generated into the definition you saw deployed. We are pretty excited about this future direction. We are excited about how we are making your lives a lot easier by taking away the downside of running things inside Azure. It is all about simplicity and up-leveling the application.
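The partition-resolution flow just described can be sketched: the upstream service declares which piece of the request to hash and over how many partitions, and the proxy extracts that value, hashes it, and picks the partition — the caller never knows. This is an illustrative Python sketch with made-up names, not the actual Envoy or Service Fabric configuration.

```python
import hashlib

# Illustrative sketch of proxy-side partition resolution; the rule shape and
# the "user-id" header name are made up for the example.
# The upstream service declares: "hash the 'user-id' header over 4 partitions."
PARTITION_RULE = {"key_source": "user-id", "partition_count": 4}

def resolve_partition(headers, rule=PARTITION_RULE):
    """Pick a partition by hashing the request field the upstream declared."""
    key = headers[rule["key_source"]]
    # Use a stable hash (unlike Python's per-process randomized hash()) so
    # every proxy routes the same key to the same partition.
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % rule["partition_count"]

# The same key always lands on the same partition; the caller supplies only
# the request, never any partitioning knowledge.
p1 = resolve_partition({"user-id": "alice"})
p2 = resolve_partition({"user-id": "alice"})
print(p1 == p2)  # True
```

The design point is that the rule lives with the upstream service: it can repartition (change `partition_count` or the key source) without any caller changing code, because callers never compute the hash themselves.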
You hear: how do we build microservice applications at scale, and how do you do it with best practices and guidelines? Well, we are building that into the core of the platform, so you just have to think about the business logic you run inside all of that. What do we have next? I think —

>> Want to see some scale?

>> One last demo. Let me switch over — I just thought I'd show you this. Once that application is deployed, we have this simple portal here. You'll see this portal experience. Here is that network defined, so you see all those Service Fabric resources we talked about. You see the secrets, the volume drive, the application, the services, all inside the portal here. As we release Service Fabric Mesh, you'll have an experience inside here to look at the logs, hook it up to Azure Monitor, and effectively see all the diagnostics coming out of your applications, integrated into the Azure portal experience. Today we have this minimal integration. Let me show you one last demo. I have an application here. If you're familiar with classic Service Fabric, we have this bouncing-triangle application that we have had for many years. It shows how we do scale-out and upgrade. I have deployed this application with a single instance of a service that has a single bouncing triangle. I can do this command now, where I'm going to use `az mesh` to deploy a new template into this resource group. This one scales it all out. Let me do this for you first. I'm going to go in, do this deployment, and make sure it kicks off. We'll let it run in the background and shrink this down for a moment. You can watch that there. I'll give it a moment, because it deploys the first one pretty fast, and then you'll see it start scaling out more instances of the back end — and then we'll look at the manifest for this.
This particular web application consists of a web front end and a back end. The web front end does all the rendering, and the back end does the calculations for any given position of where these triangles are. The new deployment I have done is the scale-out file — let's go and look at the manifest for this. Inside here, this is the manifest for the base image I just had. It consisted of an application, a single web front-end service, and a worker back-end service, with a single instance replica running for both of those. The actual deployment of the web front end was this particular image here, which was just a web front end, deployed on the port it was listening on. Then, of course, the actual back-end worker role that was running had this particular container image inside here. Notice that it doesn't have a port exposed. Then there's the one I just deployed, the scale-out Windows one. There is no difference between these templates. The only thing I did was an ARM template upgrade. If I switch back here now, you'll see that I scaled my service out from a single instance of the back end to — [applause]. So the simplicity is enormous. Imagine if you want to do that and you want 1,000 instances of your application: change it to 1,000 and you're done.

>> If I only had three VMs, I'd have to add more VMs.

>> Correct.

>> I don't have to do that anymore.

>> Yes. And don't forget, you are only paying for what you use. The whole point of Service Fabric Mesh is that you have chosen to deploy four container images, and you pay for them in terms of that cost. It will be exactly the same cost as the Azure Container Instances you'd otherwise use — that is the cost of the particular images, the containers, that are running. You pay no more for the resources you run inside the hosted framework, where we take all the pain and difficulty of the infrastructure away and you just describe your application and deploy it.
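The scale-out shown in the demo comes down to editing a single property in the deployment template. The fragment below is a hedged sketch of the general shape of a Mesh application resource from the preview era; the `apiVersion`, property names, and image/service names here are illustrative and may not match the actual schema, so treat it as an outline rather than a copy-paste template.

```json
{
  "type": "Microsoft.ServiceFabricMesh/applications",
  "apiVersion": "2018-07-01-preview",
  "name": "visualObjectsApp",
  "properties": {
    "services": [
      {
        "name": "workerBackend",
        "properties": {
          "osType": "Windows",
          "codePackages": [
            {
              "name": "worker",
              "image": "myregistry.azurecr.io/visualobjects-worker:v1",
              "resources": { "requests": { "cpu": 1, "memoryInGB": 1 } }
            }
          ],
          "replicaCount": 1
        }
      }
    ]
  }
}
```

In this shape, scaling out is changing `replicaCount` from 1 to 3 (or 1,000) and redeploying, and the rolling upgrade shown next in the talk is the same kind of edit: change the image tag (say, `:v1` to a rotating-triangles version) and redeploy.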
I can quite easily change to any number of instances. What is the next thing I can do? Effectively, if I go back here and take this next Azure command, I now have an upgrade command. This one is still running, so this upgrade may or may not work. Hang on — at this point you can see the deployment has finished and this one has scaled out. This time I'm going to do a different deployment, and it will be an upgrade deployment. What do you think the difference is between this version of the deployment and my previous one? Well, it is simple, and I'm sure you can guess. The only difference this time is that I'm upgrading the version of my container image, and you can see this is exactly the same application again. This is my application here, which is my set of services. I still have my front-end web application. You see inside here, just as before, nothing has changed; it uses the same image. The difference now is that in this particular version, if I open up the code packages, I have updated this to a "rotate" version of my container image. I built a new container image — this version has a rotating version of those triangles — and now I have deployed it inside Service Fabric Mesh. The only difference is that there is one line here that gives me a new container image. What we'll see over time is that it will start to shut down and do a rolling upgrade across my cluster, just like Service Fabric does today. You have all the benefits of the health checks and all the guarantees around consistency, and as it rolls out through my upgrade domains, you'll see the new version of my container image uploaded and the upgrade take place across it. This is running a set of Windows container images. We'll let that run for a moment and see if it upgrades those.
We'll see how the upgrade has done at the end of the talk.

>> Downloading a few hefty images.

>> Yes, those Windows container images. The whole idea here is that you as a developer describe everything inside your file and deploy it inside Azure. There we go — there is one of them now that's been upgraded by pushing out a manifest definition in Service Fabric Mesh. It is a pretty spectacular way for you to deploy and run applications at scale. We'll come back to this and see it at the end. That was scale and upgrade. Scale and upgrade now are as simple as redefining characteristics in your application and submitting them to Service Fabric Mesh — and of course, we are also going to bring a lot of those benefits down to other versions of Service Fabric. These are just some of the customers that run in production today. As I said at the beginning, we have seen phenomenal growth with Service Fabric over the last year — everyone from running a few machines to clusters of thousands of nodes. Three particular customers have come this year to talk about Service Fabric — Honeywell and others all have talks. The Accenture one is particularly compelling. They took a large number of their internal applications, migrated them to Service Fabric clusters, and reduced their cost of ownership because they containerized their Windows services. I recommend you listen to those talks and learn how they are using Service Fabric. They are compelling stories. Other sessions we have here at Build that feature Service Fabric include Corey's talk tomorrow, where he is going to focus on modernization, and Taylor Brown's talk on modernizing Windows Server applications; the one that follows his is on .NET applications, and they'll talk about how they have used Service Fabric extensively to migrate existing code into Azure. The one talk you should not miss is Mark's talk tomorrow.
There is one other area of Service Fabric that we are pushing into very strongly, and that is Service Fabric running on the edge. In Mark's talk he is going to have an incredible demo where he uses Service Fabric running on edge devices, showing how Service Fabric on those devices provides high availability for your edge applications. If your individual devices go down, your applications still get the full power of Service Fabric — the clustering technology we have there — for high availability of the data and the compute running on edge devices. We'll show compelling demos there. So we have the edge side of things, Service Fabric standalone today, our existing clusters, and, with Service Fabric Mesh, a full spectrum of offerings that gives you ease of use and development. Finally, there is a great session tomorrow where one of our cloud developer advocates is going to dive into more Service Fabric Mesh demos. He is going to go deeper into the volume drives. What else?

>> He is basically going to show you the more developer-focused side of this, so you'll get a deeper dive into what we showed you today.

>> I know what you're saying: okay, you have talked a lot about Service Fabric Mesh — how do I get hold of it? Well, we couldn't quite get it ready for Build just yet, but you can sign up for a preview. We on-board people on a very regular basis. We have about 500 or 600 people on-boarded right now, and we have been on-boarding people in batches. It is coming in the next few weeks. It is closer than you think. We are very excited to get this into your hands, and very excited to tell you that this is close on the roadmap. Believe me, it is closer than you think in terms of its availability.
It will be coordinated with our 6.3 release, which gives you a single integrated developer experience on the SDK side of things, where you can build Visual Studio applications as you do today and take advantage of these resource files. We are excited to hear your feedback. Sign up for the preview and we'll on-board people as we get space. Visit us on GitHub, talk to us on our Twitter account, and come down to the booth. We have hundreds of these t-shirts to give away — enough for everyone in this room, at least. We carried in hundreds of boxes yesterday and nearly killed ourselves, so please come and get a t-shirt, because we are not taking them home. We had to find the money to get the t-shirts, so you'd better come take one. We are just super excited about everything that's happening with Service Fabric. There is a roadmap, particularly around the fact that we now want to provide an experience where you developers can build the coolest things you can think of, deploy them inside Azure, and have a platform you can build on that deals with the scale and demands of all the exciting things you build. Thank you. [Applause]

>> We are happy to take questions. There are microphones on either end.

>> All three of our — this was the upgrade of our [indiscernible]. Let's take some questions. If you can go to the microphones for a bit, that would be great.

>> How do you choose between serverless Service Fabric and Functions?

>> Oh, okay. So the question is how to choose between serverless Service Fabric and Functions. As you saw with Service Fabric Mesh, you can deploy anything inside a container. You can take the Functions runtime, build it inside Visual Studio, deploy it in a container, and deploy it —

>> How do you choose which one?

>> Let's just say we'd love to see a roadmap where the offerings of those two converge.
I mean, I'm saying that right now you could take that function and run it inside there, but I'd love to think that those converge — Service Fabric was designed as a long-running system, and Functions is designed as a short-lived system. In this world, right now, you can run a Functions runtime inside there, and you can do things with Service Fabric.

>> I wanted to follow up on that. How do you integrate [indiscernible] with Service Fabric?

>> Can you repeat?

>> How to describe to [indiscernible].

>> Good question. One resource type we didn't actually talk about that we'll introduce is an Event Grid resource type. You'll see an Event Grid resource type, and it will configure Event Grid for your deployment and pull in all the events for that grid. We love Event Grid. Think of the resource types you saw, plus an Event Grid resource type. When you see that come out, the whole thing will make sense. It should have made sense anyway.

>> How do you recommend cycling stateless services, or is that not something that you — does that indicate other issues? We have an issue specifically where we have some things that are being — for whatever reason. What are your thoughts on that? How does that strike you?

>> Are you asking how you'd do that in a Mesh environment?

>> Or on-prem. There isn't a restart-service button. It seems like a logical thing you'd want to do.

>> In the APIs out today, there are actually fault commands you can use that will allow you to restart individual processes. We call them code packages. You can restart the code package, which will restart the process. If you find it is failing on a specific node and you want to restart it, there is a command that will let you restart the replica itself, but that doesn't necessarily take down the process. You can restart the entire code package, which will take down the process and restart it again.

>> Okay, it sounds like there is a hack to do this.

>> No, it is just a command in the APIs.
It is the restart Service Fabric code package command, or something along those lines. Those commands are already built in; they're part of our fault-injection commands.

>> You don't have any particular negative impression of needing to do that? That doesn't —

>> Well, I mean, if you are having to do that, there is probably something wrong —

>> Well, caching configurations.

>> We can take this offline and give you a longer answer. Let's take a few other questions.

>> In our instance we are using cloud fabric, and in some of our deployments we have had issues. One of the things we have seen online is that you RDP into the fabric to get to the event log, to see where your deployment fails and everything. It looks like Mesh doesn't have the same capability; is that correct? What is the way to work around that in this scenario?

>> The question is — yeah, in Service Fabric Mesh, everything runs inside containers. You will be able to connect up into the container and see everything inside it; you'll have an interactive connection inside the container and can see it from the inside. Effectively, that is your VM. That is the world you see. You don't know what it is running on, nor do you care. You'll be able to see that level of detail for your production-level debugging, if that is what you're asking for.

>> I mean, those events are something we want to improve on anyway. With Azure Monitor you'll be able to see all these platform events. Those things should go somewhere you can see them, so you don't have to RDP into a machine anymore. You'll still be able to get into the container itself, but a lot of those things will show up in the dashboard.

>> Think of the container as the VM-level thing. Let's take this one.

>> We are already using [indiscernible] with HTTPS, but we have to copy the certificate private key into the solution. It doesn't take it when we add the certificate in PowerShell.
The private key is not taken when we use that, so HTTPS is not [indiscernible] on the server. Is it fixed in the new release, the new version?

>> The question is about — was it about how to get certificates in a way that —

>> The HTTPS endpoint doesn't work when we provide the certificate and the private key through PowerShell. We need to copy the PFX file into the solution. It has to be physically there in the [indiscernible].

>> I'm —

>> [Indiscernible], but it doesn't work anymore.

>> We should take a look at that. Come down to the booth in a bit. We should look at that specifically, because I'm not entirely sure.

>> If you have detailed design questions, we'd love to hear them. We have three days of boothing where we can dive in. Show us that one.

>> A scale-related question. You showed the example of how you scaled up your services. In the current Service Fabric world, I have an auto-scale set where I say, depending on how many resources are being used — memory or CPU — I can use Service Fabric to scale up my nodes. What is the equivalent of that on the Mesh side?

>> One thing we built into the current release of Service Fabric, the 6.2 release — there are two kinds of scaling. There is using VM scale sets to auto-scale the infrastructure, and in the release we just did, 6.2, there is also auto-scaling of the application itself. Is that what you're asking about?

>> Right.

>> For the rules we have built in to auto-scale the actual service inside Service Fabric, look at the current 6.2 release. You can go into your manifest and set up thresholds for your service — upper and lower thresholds — and scale out based on them. They apply to your actual service itself. I think it is on CPU and on memory. You can say: if the amount of memory this particular service uses goes over and above this threshold, add this number of resources.
The 6.2 release has amazing scale-out capabilities built into the platform, and we take advantage of that now. On the Mesh side of things, we manage the cluster; you don't see it. Scaling of the application will be done as we saw here. The 6.2 release has full auto-scale capabilities, with upper and lower thresholds and how much you scale in or out. We have a whole document written about that.

>> But that is the difference. You were talking about how, when you need to scale out today, you write a script that tells the VM scale set underneath to add more VMs. In this world you wouldn't do that, because you don't scale out VMs. All you have to do is say: I want the application to scale out. One of the resource types we can then introduce is a scale-out rule, so the same way you deploy a network resource or a routing-rule resource, you can deploy a scale-out rule resource that instructs the system: when you see these thresholds hit, scale out this application. But you don't have to do anything with infrastructure in this case, because we manage that. It should be simpler — another resource you deploy that says: if you see CPU go above 60, for example, add more instances until it goes —

>> So rather than managing the VM, now I'm managing at the service level, and each service knows how much — like, the CPU versus memory.

>> You are just managing the application, none of the VMs underneath. You are telling the system: scale this system according to my rules, and don't think about the VMs.

>> All right, thank you.

>> We are going to take one more question.

>> My question is more about the data. Are we going, in the future, to have an explorer to see reliable collections?

>> A data explorer.

>> We are still in the process of building a data explorer for reliable collections, and you will see one coming, yes. We found we have to build it in a more generic way.
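The upper/lower-threshold auto-scale rule described above boils down to a simple decision evaluated periodically. This is an illustrative Python sketch of that decision, not the actual 6.2 manifest schema or the Mesh scale-out rule resource; all parameter names and numbers here are made up.

```python
# Illustrative sketch of an upper/lower-threshold auto-scale rule, like the
# one described in the talk. Field names and thresholds are invented; this is
# NOT the Service Fabric 6.2 scaling-policy schema.
def scale_decision(current_instances, cpu_percent,
                   lower=20, upper=60, step=1,
                   min_instances=1, max_instances=1000):
    """Return the new instance count for one evaluation of the rule."""
    if cpu_percent > upper:
        # Above the upper threshold: scale out, capped at the maximum.
        return min(current_instances + step, max_instances)
    if cpu_percent < lower:
        # Below the lower threshold: scale in, floored at the minimum.
        return max(current_instances - step, min_instances)
    return current_instances  # inside the band: no change

print(scale_decision(4, 75))  # 5  (above upper threshold: scale out)
print(scale_decision(4, 10))  # 3  (below lower threshold: scale in)
print(scale_decision(1, 10))  # 1  (never drops below the minimum)
```

The band between the two thresholds is what keeps the rule from flapping: the instance count only changes when load clearly leaves the acceptable range, and the min/max bounds keep a runaway metric from scaling without limit.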
We are trying to make it work with a wider variety of applications, which is why it is taking longer. You want to be able to see the reliable collections — that is on our roadmap. Good question. We expect to see that around the summertime.

>> That comes with APIs, right?

>> Yes. Okay, we are going to call it done at this point — thank you.
