Welcome to Create Impact, a new series from Aviture focused on the topics that inspire our engineers to innovate. In each article, an Aviture team member will take you on a deep dive into a subject they’re passionate about, showing you the thinking behind cutting-edge engineering advances, the latest UX trends, development theories, and other unique topics that enable Aviturians to embrace the Art of the Possible for our clients.
In this post, Chief Engineer Brandon Suponchick takes us through the biggest highlights of AWS re:Invent, the annual event from the world’s leading cloud provider. Brandon explores how things like private 5G networks and serverless data warehouses will transform cloud computing in 2022 and beyond.
This December, AWS held its annual re:Invent expo, and many Aviturians, myself included, were among the first to catch a glimpse of the exciting tech innovations we’ll soon be able to deploy for our customers.
It was hard to keep the list of eye-opening announcements to just 3 standouts, but I feel the following highlights really showcase the incredible work being done in the cloud computing space. Our team already has ideas for how we can implement these here at Aviture.
These might not be the biggest announcements, but I think they could wind up being some of the most impactful.
Here, then, are the top 3 highlights of AWS re:Invent 2021.
This one made my head spin.
AWS will begin enabling companies to set up their own private 5G networks on their existing infrastructure. They're providing everything you need to essentially arm all your employees with their own company phones and tablets that can tap into a 5G-capable private network that’s yours to manage and maintain. AWS will provide the equipment needed to tie into the existing 5G global infrastructure, and you'll have your own corporate 5G network.
If you have a salesman who needs access to your on-premises infrastructure, they can have a phone or an iPad with a SIM card that ties directly to that private infrastructure, utilizing all the speed and performance of 5G out in the field. No more trying to find WiFi hotspots, because you’ve set up an enclave of your own.
I think there's a lot of potential here for customers with technicians, sales teams, or anybody else out in the field who needs quick access to a network. Rather than forcing those workers to establish a VPN from a phone on their own self-paid plan, you can hand out corporate devices that are fast and secure thanks to AWS’s 5G technology.
Now, this is a very fresh announcement and there's not a lot of information out there, so we’ll file this one under “Stay tuned.” Nevertheless, I’m excited about the possibilities.
The announcement of a serverless option for Redshift is really impactful for our customers (I can already think of one in particular I can’t wait to share this with).
Redshift is AWS’s native data warehouse solution. Typically, you're ingesting data, cleansing it, transforming it, and prepping it for business intelligence and machine learning processes.
Data warehouses deal with gigantic amounts of data. You process and store it on what are called clusters, which you can think of as large groups of servers working in concert. Usually, you pay AWS for a cluster sized to handle the largest amount of traffic you could ever imagine hitting it.
It's a very on-prem-style problem for a cloud-hosted offering. You have to understand your data usage precisely enough to figure out the right size for that cluster, and that usually translates into a much larger solution than you actually need. You might use 15% of your cluster’s capacity most of the time, with occasional bursts of activity where you need the rest.
With the serverless offering, you push all your data into the warehouse and pay the normal transfer fees just as you always would. But when you're not querying that data or ingesting new data, you pay no more than your specified minimum capacity, and Redshift automatically scales up and down based on the incoming request load. You pay for the 15%, not the other 85%, until you actually need the full 100% of capacity.
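To put rough numbers on that, here’s a back-of-the-envelope comparison. Every rate below is a made-up placeholder for illustration; real Redshift pricing has its own billing dimensions (per-node rates for provisioned clusters, a usage-based meter for serverless) and varies by region.

```typescript
// Back-of-the-envelope comparison: provisioned cluster vs. serverless.
// All rates here are hypothetical placeholders, not real AWS pricing.

const HOURS_PER_MONTH = 730;

// Provisioned: you pay for the peak-sized cluster around the clock,
// even while it idles at 15% utilization.
const clusterRatePerHour = 10; // hypothetical $/hour for a peak-sized cluster
const provisionedMonthly = clusterRatePerHour * HOURS_PER_MONTH; // $7,300

// Serverless: you pay roughly in proportion to actual usage.
// Suppose the warehouse runs at full capacity 15% of the time.
const busyFraction = 0.15;
const serverlessMonthly = clusterRatePerHour * HOURS_PER_MONTH * busyFraction; // $1,095

console.log(`Provisioned: ~$${provisionedMonthly}/month`);
console.log(`Serverless:  ~$${serverlessMonthly}/month`);
```

The exact savings depend entirely on how bursty your workload is, but the shape of the math is the point: the more idle time you have, the more the serverless model wins.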
This will be great for customers who, for instance, do a nightly data ingest from a variety of data sources. Many companies don’t need to query their data warehouse around the clock, or they have expensive ingest processes that run overnight with little other activity. For those organizations, this offers a significant cost reduction: they only pay for the extra capacity during that short overnight window rather than paying for it 24/7.
I'm a big serverless guy. Whenever I see an elastic offering like this one, I get really excited, so I’m looking forward to showing our customers what it could do for them.
The Cloud Development Kit, or CDK, is something Amazon released several years ago as their next-generation infrastructure-as-code solution.
For many years, the tool they provided was CloudFormation. You describe the infrastructure you want to run in the cloud using a markup language, and CloudFormation interprets that markup file and spins up the resources you need in an automated fashion.
This all changed with the introduction of the CDK. AWS saw a rise in third-party libraries addressing some of CloudFormation's downfalls, so they decided to offer their own next-gen infrastructure-as-code solution natively on their platform. With the CDK, you can create reusable components (called constructs) in the same languages a lot of developers use every day, be it JavaScript, C#, or one of several other supported options. These constructs often replace thousands of lines of YAML with tens of lines of code. It’s an amazing time saver, and it takes a lot of the frustration out of an industry best practice.
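To give a feel for the scale, here’s a minimal sketch of a CDK stack in TypeScript using v1-era imports. The stack and resource names are mine, purely for illustration:

```typescript
// CDK v1 sketch: a versioned, encrypted S3 bucket plus a Lambda that can
// read from it. The equivalent raw CloudFormation YAML runs many times
// this length once you count the IAM policy.
import * as cdk from '@aws-cdk/core';
import * as s3 from '@aws-cdk/aws-s3';
import * as lambda from '@aws-cdk/aws-lambda';

export class DemoStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const bucket = new s3.Bucket(this, 'DataBucket', {
      versioned: true,
      encryption: s3.BucketEncryption.S3_MANAGED,
    });

    const handler = new lambda.Function(this, 'Processor', {
      runtime: lambda.Runtime.NODEJS_14_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'), // assumes a local ./lambda folder
    });

    // This one line generates the IAM policy you'd otherwise hand-write.
    bucket.grantRead(handler);
  }
}
```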
Despite these benefits, the CDK still had its problems. One of the biggest barriers to entry for our teams was simply pulling together all the parts of the CDK you needed to define your infrastructure. Every time you wanted to use a new type of resource, say a Lambda function or a Security Group, you had to import a new CDK dependency (you can see the start of that in the three separate @aws-cdk/* imports in the sketch above). For anything reasonably complex, remembering which package each type of resource lived in became a huge time sink.
With version two, they've made this problem go away. There's now a single dependency to pull in, aws-cdk-lib, that contains all the AWS-provided CDK constructs you might want to use in one go. They also introduced semantic versioning, which is something we already do in our codebases to signal what kinds of changes we're introducing into our products. If a release introduces a breaking change, there's now a way to define that change semantically through the Cloud Development Kit, just as we're used to doing for the rest of our codebases.
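Here’s what the same stack skeleton looks like under v2, with everything coming from that single aws-cdk-lib dependency (again a sketch, with illustrative names):

```typescript
// CDK v2: one package ('aws-cdk-lib') plus the small 'constructs' library.
import { Stack, StackProps, aws_s3 as s3, aws_lambda as lambda } from 'aws-cdk-lib';
import { Construct } from 'constructs';

export class DemoStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Same constructs as the v1 sketch, no per-service dependency hunting.
    const bucket = new s3.Bucket(this, 'DataBucket', { versioned: true });

    const handler = new lambda.Function(this, 'Processor', {
      runtime: lambda.Runtime.NODEJS_14_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'), // assumes a local ./lambda folder
    });

    bucket.grantRead(handler);
  }
}
```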
At Aviture, we're getting really heavy into CDK right now, so it's exciting to see them address a couple of the concerns that we've had.
These are the top three takeaways I had from AWS re:Invent, but they’re only the beginning. AWS also announced many other technologies and services that will improve cloud development in the coming years, including new additions to their popular CloudWatch monitoring suite and a newfound dedication to sustainability.
If you have ideas about how your own infrastructure could benefit from AWS, and you’d like an AWS-certified engineering team that can steer you toward a successful implementation, don’t hesitate to reach out. I can’t wait to bring these ideas to life for our customers, and I’d love to do the same for you.