Qudos .NET Meetup Newcastle - May 2024
Real World Cost Management Journey - Mike Stephenson
Mike has been working with Azure since it was in preview. He is a Microsoft MVP and a product owner for Turbo360.
Transformation
A long-term integration programme spanning multiple projects, with around 500 Logic Apps per environment plus other resource types such as APIM and Service Bus, integrating with applications such as SAP HANA and Dynamics CRM. Think about what you spend on Azure and what has been done to save money. Customers have been implementing Azure for a long time; digital transformation is about empowering users to solve problems, but you also need to think about whether the cost is sustainable and whether you are getting good value for money.
How do you manage cost?
Who is accountable or responsible for cost management? What are the best cost savings you have found? How often do your teams review costs? Where do you feel your biggest risks are in your Azure spend? Around 30% waste in cloud spend is typical for the companies they have seen. People with access to billing data often don't understand the solutions, while those who build the solutions don't have access to the billing data; people focus on their own part and don't think about the costs. As a startup you may only have spent a little so far, but it is worth asking how much you would have to spend to bankrupt the company, because that is something you could spin up on Azure.
Democratise cost data
Change the way you work to be more cost focused, spread this beyond specific teams, and let people look closely at costs so they have evidence to back up any savings. It is important that the people who build the systems, such as the integration team, can see the costs. One company might have one subscription per team, another might give each team multiple subscriptions, another just a few subscriptions with many resource groups, so without a standard it can be hard to model that information. Dig deeper into your own costs, such as production vs development, and aim to get to the point where you can say that a specific integration for a specific customer costs a certain amount of money. Can you create simple graphs that show what the team cares about, show all resources across all environments per month, and then do lower-level analysis on that information?
Cost monitoring
If you breach a cost threshold you want an alert, so set up a budget. Alerts can take time to set up, but you can have an integration team budget per day and get an alert when it is broken, then set up similar budgets for development and production. You can also use cost anomaly detection to flag when something is costing a lot more than it usually does. It could be that someone has misconfigured something that now costs more money, or it may identify when you need to change your cost plans. If something isn't right you can get a notification in Microsoft Teams and manage it like a bug or an SLA-driven support incident so it is addressed as soon as possible; you don't want it going on for days and days, because that just costs even more money.
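As an illustration only (not Azure Cost Management's or Turbo360's actual anomaly detection), a minimal sketch of the idea: compare today's spend for a team against a rolling baseline of recent days and raise an alert when it exceeds the baseline by a chosen factor. The figures and threshold below are made up.

```csharp
// Minimal sketch of a daily cost anomaly check: today's spend vs a rolling average.
using System;
using System.Linq;

class CostAnomalyCheck
{
    // Illustrative daily costs for one team, newest value last (made-up numbers).
    static readonly double[] DailyCost = { 92, 88, 95, 90, 91, 89, 240 };

    static void Main()
    {
        var history = DailyCost[..^1];           // all days except today
        double baseline = history.Average();
        double today = DailyCost[^1];
        const double threshold = 1.5;            // alert if 50% above the baseline

        if (today > baseline * threshold)
            Console.WriteLine($"Anomaly: today ${today:F0} vs baseline ${baseline:F0} - raise a Teams alert");
        else
            Console.WriteLine("Spend looks normal");
    }
}
```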
Cost Reviews
You may have hundreds of lines of billing data, and you want to know where to look for spend, so mark the good items green and the problem items red, look at what has changed the most from a cost perspective, and run a cost review to provide governance to the teams. You can see why things have changed and ask the teams, who can explain it using the data. You can easily see the cost differences, and you can also get re-architecture ideas out of cost reviews.
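A small hedged sketch of the "what changed the most" step: group two months of billing rows (made-up data, not a real billing export) by resource and rank them by the size of the month-over-month change.

```csharp
// Rank resources by month-over-month cost change to decide where to look first.
using System;
using System.Linq;

record BillingRow(string Resource, string Month, double Cost);

class CostReview
{
    static void Main()
    {
        var rows = new[]
        {
            new BillingRow("logicapp-orders", "2024-04", 310), new BillingRow("logicapp-orders", "2024-05", 980),
            new BillingRow("sql-reporting",   "2024-04", 450), new BillingRow("sql-reporting",   "2024-05", 430),
            new BillingRow("apim-gateway",    "2024-04", 120), new BillingRow("apim-gateway",    "2024-05", 125),
        };

        var deltas = rows.GroupBy(r => r.Resource)
            .Select(g => new
            {
                Resource = g.Key,
                Previous = g.Single(r => r.Month == "2024-04").Cost,
                Current  = g.Single(r => r.Month == "2024-05").Cost,
            })
            .OrderByDescending(x => Math.Abs(x.Current - x.Previous));

        foreach (var d in deltas)
            Console.WriteLine($"{d.Resource}: {d.Previous:C0} -> {d.Current:C0} ({d.Current - d.Previous:+0;-0})");
    }
}
```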
Workload Optimisation
Can you take a resource and change its plan or size based on a schedule, where green means turned on and red means turned off? Turn things off when they are not being used, such as out of hours; you can turn off things like Logic Apps. You could have dev, test and UAT environments and save around $1,000 per month just by turning them off when they are not needed. Costs in production can still increase when you turn things on, but taking things offline in the other environments reduces those costs.
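A minimal sketch of the scheduling idea (illustrative only, not Turbo360's scheduler): keep non-production environments running during a weekday working window and treat everything else as time to turn them off.

```csharp
// Schedule-based on/off decision for dev/test/UAT resources.
using System;

class EnvironmentSchedule
{
    static bool ShouldBeRunning(DateTime localTime) =>
        localTime.DayOfWeek is not (DayOfWeek.Saturday or DayOfWeek.Sunday)
        && localTime.Hour is >= 8 and < 18;   // 08:00-18:00 working window (assumed)

    static void Main()
    {
        var now = DateTime.Now;
        Console.WriteLine(ShouldBeRunning(now)
            ? "Business hours: dev/test/UAT stay on"
            : "Out of hours: stop dev/test/UAT resources to save cost");
    }
}
```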
Rightsizing
Can you take a VM and see if it is sized appropriately for what it does? You may find something that was provisioned as a large instance when it could be smaller, saving thousands on a single VM. There are so many things in Azure that this can be difficult, so scope the rightsizing to the teams, look at the biggest instances and see if they can be tuned to save money; App Service plans can be scaled down, SQL databases can be scaled down, and generally scale down where possible. You still need to understand what a resource is used for and whether it needs to be a certain size for a specific reason, so you need the whole context to make the decision, which is where Turbo360 can help.
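A hedged sketch of a rightsizing heuristic (not Azure Advisor's or Turbo360's logic, and the usage figures are invented): if average and peak CPU sit well below capacity over the review window, flag the VM as a candidate to scale down for a human to review in context.

```csharp
// Crude rightsizing heuristic: lots of idle headroom suggests a candidate, not a decision.
using System;

record VmUsage(string Name, string Sku, double AvgCpuPercent, double PeakCpuPercent);

class Rightsizing
{
    static void Main()
    {
        var vms = new[]
        {
            new VmUsage("build-agent-01", "D8s_v5", 6,  22),   // made-up usage figures
            new VmUsage("sql-etl-01",     "D4s_v5", 55, 85),
        };

        foreach (var vm in vms)
        {
            bool candidate = vm.AvgCpuPercent < 15 && vm.PeakCpuPercent < 40;
            Console.WriteLine(candidate
                ? $"{vm.Name} ({vm.Sku}): candidate to scale down - confirm with the owning team"
                : $"{vm.Name} ({vm.Sku}): leave as-is");
        }
    }
}
```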
Reservations
You make a commitment to a VM size for a long period, typically one or three years, and Microsoft gives you a better price because it gives them predictable demand. Lots of customers don't know that reservations exist because they are a bit hidden away; in Turbo360 you can see which reservations apply to the integration team. You are better off rightsizing before you reserve an instance, or you could stay on pay-as-you-go and turn the VM off when you are not using it.
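A small worked sketch of why rightsizing and schedules matter before reserving, using illustrative prices rather than real Azure rates: a reservation is effectively a discounted amount you pay every month, so it only beats pay-as-you-go if the VM actually runs enough hours.

```csharp
// Reservation vs pay-as-you-go break-even check with made-up prices.
using System;

class ReservationBreakEven
{
    static void Main()
    {
        const double paygPerHour = 0.40;        // illustrative pay-as-you-go rate
        const double reservedPerMonth = 175.0;  // illustrative effective reserved cost
        const double hoursPerMonth = 730;

        // Hours the VM must run each month before the reservation is cheaper.
        double breakEvenHours = reservedPerMonth / paygPerHour;
        Console.WriteLine($"Reservation wins above {breakEvenHours:F0} hours/month " +
                          $"({breakEvenHours / hoursPerMonth:P0} utilisation).");
        // Below that, pay-as-you-go plus turning the VM off out of hours is cheaper.
    }
}
```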
Storage Defender Costs
Functions and Logic Apps have storage accounts behind them, and when the team identified what they were paying for they found Microsoft Defender charges on that storage. Defender may be turned on by default or someone may have turned it on, and on the per-transaction plan you pay per transaction; these apps were doing 1.6 million transactions per day, so $6,912 per year was suddenly being spent without anyone noticing. If you don't need Defender you can turn it off, or you can switch the pricing plan from per-transaction to a flat cost per month, which reduced the cost to around $100.
Continuous Improvement & Results
Sit, crawl, walk, run. People will look at an Azure bill but won't do anything official about it; the cost optimisation cycle is analyse, monitor and optimise. There was over $53,000 of cost savings from everything combined. At the end of the year there may be a review that says switch to another cloud provider, but the same thing happens there. Talk to the integration team about what they did and replicate it in other teams, and look at costs on a departmental basis. There are many different ways to handle cost, so build relationships between the people who care about costs and the people who manage the resources, and implement a FinOps strategy.
SQL Savings
Optimise low-use SQL. The problem was that the environment was built and the data team worked on it, but they didn't know what they were spending or whether they were getting value for money. Used in the right way a serverless database won't cost much, because you are billed on app CPU; this database mostly held parameter data so it should hardly have been used, but something was checking it all the time. If you ping it constantly it never goes to sleep, but if you leave it alone it will pause. Switching it back to a provisioned DTU model lowered the costs, the database performed better, and it saved over $15,000 per year.
Overprovisioned SQL: when the databases were deployed, the UAT databases were set up following the same best practice as the highest-SLA databases, namely Business Critical, but UAT doesn't need to be Business Critical. Setting it to Standard halves the cost immediately, and it doesn't need as many cores either, so you can scale down and back up if needed, or use the scheduler to reduce the number of cores and scale up as required. That takes the database from $4,000 to $800 per month, and in total these changes save $1,000,000 per year.
OMG moment - cost anomaly on a massive database: there was a massive spike in database costs when a $35,000-per-month database was created. A third-party company working with the customer was doing a migration and wanted it done as quickly as possible, so they ramped the database up as big as it would go. You want to stop people doing that unless there is an approval in place; catching this one saved $35,000 per month.
Circuit Breaker - think about operations with an event-driven system where files arrive every day on FTP, which triggers a Logic App to load the data into a SQL database. It only loads a few files a day and is a low-cost solution, but there was a problem when a new archiving process was added that started putting files on the FTP system and triggering the process; those files were invalid and more were being added each time. The cost scaled larger and larger as these files were loaded and was soon costing thousands. Cost data is usually about 24 hours behind, so if you have a consumption-based solution whose costs can scale when it is hit by mistake or deliberately, you may want a circuit breaker. With Turbo360 you can have it check how many runs an app has had and disable the app if it goes over a certain number of executions; if it is turned back on and the problem is still happening, it turns off again.
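A minimal sketch of that circuit-breaker idea, not Turbo360's actual feature or API: count recent runs of a consumption-based app and disable it when it exceeds a safety threshold, and re-arm the same check when it is turned back on.

```csharp
// Run-count circuit breaker for a consumption-based integration.
using System;

class RunCountCircuitBreaker
{
    readonly int _maxRunsPerHour;
    public bool Enabled { get; private set; } = true;

    public RunCountCircuitBreaker(int maxRunsPerHour) => _maxRunsPerHour = maxRunsPerHour;

    // Called periodically with the run count observed over the last hour.
    public void Check(int runsInLastHour)
    {
        if (Enabled && runsInLastHour > _maxRunsPerHour)
        {
            Enabled = false;   // in a real setup this would disable the Logic App and alert the team
            Console.WriteLine($"Tripped: {runsInLastHour} runs > limit of {_maxRunsPerHour}");
        }
    }

    public void TurnBackOn() => Enabled = true;   // if the flood continues, the next Check trips again

    static void Main()
    {
        var breaker = new RunCountCircuitBreaker(maxRunsPerHour: 500);
        breaker.Check(runsInLastHour: 120);    // normal day: stays enabled
        breaker.Check(runsInLastHour: 4800);   // archiving bug floods the trigger: trips
        breaker.TurnBackOn();
        breaker.Check(runsInLastHour: 4700);   // problem persists: trips again
    }
}
```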
Tuning integrations - for example, the source system sends whole files instead of deltas of what has changed, so the Logic Apps process the whole file and data that hasn't changed is processed again even though only a few records have actually changed. Can you make a development change to get those savings? Use a function to compare the two files, remove the records that haven't changed, and then only the few records that actually need processing go through, which saves money.
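A small hedged sketch of that "strip unchanged records before processing" idea, with made-up record shapes: compare today's full file with yesterday's by key and keep only the rows that are new or different, so the downstream processing only sees a handful of records.

```csharp
// Keep only new or changed rows so downstream Logic Apps don't reprocess everything.
using System;
using System.Collections.Generic;
using System.Linq;

record CustomerRow(string Id, string Name, string Status);

class DeltaFilter
{
    static IEnumerable<CustomerRow> ChangedRows(IEnumerable<CustomerRow> previous, IEnumerable<CustomerRow> current)
    {
        var previousById = previous.ToDictionary(r => r.Id);
        // Records compare by value, so "unchanged" means every field matches.
        return current.Where(row => !previousById.TryGetValue(row.Id, out var old) || old != row);
    }

    static void Main()
    {
        var yesterday = new[] { new CustomerRow("1", "Acme", "Open"), new CustomerRow("2", "Globex", "Open") };
        var today     = new[] { new CustomerRow("1", "Acme", "Open"), new CustomerRow("2", "Globex", "Frozen"),
                                new CustomerRow("3", "Initech", "Open") };

        foreach (var row in ChangedRows(yesterday, today))
            Console.WriteLine($"Process: {row.Id} {row.Name} ({row.Status})");
        // Only Globex (changed) and Initech (new) are processed; Acme is skipped.
    }
}
```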
Conclusion
Top-down vs bottom-up: be a cost champion. The team with the best savings gets a treat to celebrate their success, and information is shared about who is saving the company money. Integration people are excellently placed; spend half an hour per month reviewing your costs. Sustainable cost will be one of your top priorities if it is not already. Look into FinOps and cloud unit economics, comparing the costs with how much money the system makes; if you drop the costs and get rid of waste you make more money and can build more features to make even more money. Understanding finance and business concepts is something developers may need to understand and measure better, and the product owner should already know this information about costs. 25%-40% of cloud spend is waste, so how much could you save? There are whitepapers at turbo360.com/whitepapers such as "The Importance of Accountability in FinOps" and "Unlocking Azure Cost Efficiency: A Comprehensive Comparison". There is also a podcast, "FinOps on Azure", hosted by Mike, where he digs deeper into the more technical aspects and has spoken to the Cosmos DB team, as Cosmos DB can be hard to use in a cost-effective manner. FinOps helps keep DevOps honest, but having senior stakeholder buy-in is essential or it won't happen.
Dotnet Interop - Adam Parker
Adam is a software developer who has been using .NET for most of their career. Notable achievements include creating this talk and being here to talk about .NET interop.
What is Interop?
Interoperability allows you to take advantage of other languages inside and outside the CLR, including managed and unmanaged code, and to use libraries from other languages in your project.
Visual Basic and C#
VB and C# are both object-oriented languages and both compile down to IL, so they work together with no issues; you can use them together in console applications and other projects, and instantiate types from one language in the other flawlessly.
F#
F# is a functional programming language and supports discriminated unions, options, higher-order functions, modules, records and sequences.
Where things go well and work:
Namespaces and modules. You can use using statements just as with F# namespaces, and modules are exposed to VB or C# as static classes.
Values and records. Values are exposed as public static members on the module's class. Records generate a sealed read-only class with a single constructor that takes every field; records have been in F# since the start and you use the class much like a record in C#, but even today it does not get converted to a C# record, it remains a class.
Sequences, arrays and lists. Sequences are lazily evaluated lists, arrays work as expected, and sequences are exposed as IEnumerable.
Functions. These work as long as they don't accept other functions as parameters or return a function as the return type.
Discriminated unions. A discriminated union is a type with a fixed set of cases, and you are guaranteed to handle every case. If the cases only have numbers it compiles to an enum for use in C#; if the cases carry types, an abstract class is generated with methods for creating each case, and each case becomes a class that inherits from the abstract type.
Unit. F# doesn't have the concept of null, so a function returning unit is compiled down to void, and where unit is used as a parameter null is passed.
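A minimal sketch of how these shapes surface in C#, assuming a hypothetical F# project is referenced from the C# project (the Accounts module, Account record and AccountState union below are invented purely for illustration):

```csharp
// The F# side is hypothetical - imagine a referenced F# project containing:
//
//   module Accounts
//   type AccountState = Open | Frozen | Closed
//   type Account = { Id: int; Name: string; State: AccountState }
//   let defaultName = "New account"
//   let freeze account = { account with State = Frozen }
//
// From C#, the module appears as a static class, the record as a sealed class with
// a single all-arguments constructor, and the union as an abstract class with a
// member per case.
using System;

class Program
{
    static void Main()
    {
        // Module value -> public static member on the Accounts static class.
        Console.WriteLine(Accounts.defaultName);

        // Record -> sealed class, constructed through its single full constructor.
        var account = new Accounts.Account(1, "Adam", Accounts.AccountState.Open);

        // Module function -> static method; the result is a new record instance.
        var frozen = Accounts.freeze(account);
        Console.WriteLine(frozen.State.IsFrozen);   // union cases expose Is* checks
    }
}
```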
Things that work a bit. Lists in F# are different from C# lists, which can cause confusion, but you can import FSharpList and work with them from C#.
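A short self-contained example of using an F# list from C#, assuming the FSharp.Core NuGet package is referenced; FSharpList<T> is the immutable F# list type and ListModule exposes F#'s List functions:

```csharp
// Working with FSharpList<T> from C#.
using System;
using Microsoft.FSharp.Collections;

class Program
{
    static void Main()
    {
        // Build an F# list from a C# array.
        FSharpList<int> numbers = ListModule.OfArray(new[] { 1, 2, 3 });

        // F# lists are immutable: Cons returns a new list with 0 prepended.
        FSharpList<int> extended = FSharpList<int>.Cons(0, numbers);

        // FSharpList<T> implements IEnumerable<T>, so foreach and LINQ work as normal.
        foreach (int n in extended)
            Console.WriteLine(n);   // 0 1 2 3
    }
}
```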
Things that should work but need care: functions. If a function takes another function as a parameter then we have problems; you need to pass the function in as an FSharpFunc, converting the C# delegate to the F# function type, because F# functions have not been mapped onto C# delegates.
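A small sketch of that bridge, again assuming FSharp.Core is referenced: F# higher-order functions expect FSharpFunc<T, TResult>, and FuncConvert.FromFunc converts an ordinary C# delegate into that type.

```csharp
// Passing a C# lambda into an F# higher-order function via FSharpFunc.
using System;
using Microsoft.FSharp.Collections;
using Microsoft.FSharp.Core;

class Program
{
    static void Main()
    {
        // Convert a C# lambda into the F# function type.
        FSharpFunc<int, int> doubler = FuncConvert.FromFunc((int x) => x * 2);

        // ListModule.Map is F#'s List.map; it expects an FSharpFunc, not a Func.
        FSharpList<int> input = ListModule.OfArray(new[] { 1, 2, 3 });
        FSharpList<int> doubled = ListModule.Map(doubler, input);

        Console.WriteLine(string.Join(", ", doubled));   // 2, 4, 6
    }
}
```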
What's the point?
With VB, C# and F# you can use libraries from one language in another, such as using Newtonsoft.Json from F#, which allows the broader ecosystem to exist. Functional core and imperative shell: make invalid states unrepresentable. You may have a domain you know very concretely, such as a bank account being Frozen, Open and so on, and have that state guaranteed with functional programming; functional code doesn't like things being changed because everything is immutable, so you can keep the core domain logic that must be certain and secure in F#, and have the outer layer, such as ASP.NET, in C# passing calls on to F#. The concept of functional core and imperative shell is quite interesting. You could also use a facade to make a library written in another language easier to use.
Resources