Items tagged with: AWS
They give the examples of mapfre, securitas direct and even openbank, which are migrating everything to the cloud… 😱
Article word count: 2235
HN Discussion: https://news.ycombinator.com/item?id=19489916
Posted by puzza007 (karma: 228)
Post stats: Points: 159 - Comments: 94 - 2019-03-26T07:45:55Z
#HackerNews #aws #k8s #linux #new #the #windows
If, like me, you’re over 40 and work in IT, you’ll probably remember a time when everyone used Windows, and a small but growing proportion of people were wasting their lives compiling Linux in their spare time.
The Windows users would look on, baffled: ‘Why would you do that, when Windows has everything you need, is supported, and is so easy to use?!’
Answers to this question varied. Some liked to tinker, some wanted an OS to be ‘free’, some wanted more control over their software, some wanted a faster system, but all had some niche reason to justify the effort.
As I stayed up for another late night trying to get some new Kubernetes add-on to work as documented, it struck me that I’m in a similar place to those days. Until a couple of years ago, Kubernetes itself was a messy horror-show for the uninitiated, with regularly-changing APIs, poor documentation if you tried to build yourself, and all the characteristics you might expect of an immature large-scale software project.
That said, Kubernetes’ governance was and is far and away ahead of most open source software projects, but the feeling then was similar to compiling Linux at the turn of the century, or dealing with your laptop crashing 50% of the time you unplugged a USB cable (yes, kids, this used to happen).
It’s not as though confusion and the rate of change have come down to a low level. Even those motivated to keep up struggle with the rate of change in the ecosystem, and new well-funded technologies pop up every few months that are hard to explain to others.
Take knative for example:
The first rule of the knative club is you cannot explain what knative is — Ivan Pedrazas (@ipedrazas) March 18, 2019
So my AWS-using comrades see me breaking sweat on the regular and ask ‘why would you do that, when AWS has everything you need, is supported and used by everyone, and is so easy to use!?’
AWS is Windows
Like Windows, AWS is a product. It’s not flexible, but its behaviour is reliable. The APIs are well defined, and the KPIs are good enough to be useful for most ‘real’ workloads. There are limits on all sorts of resources that help define what you can and can’t achieve.
Most people want this, like most people want a car that runs and doesn’t need to be fixed often. Some people like to maintain cars. Some companies retain mechanics to maintain a fleet of cars, because it’s cheaper at scale. In the same way, some orgs get to the point where they could see benefits from building their own data centres again. Think Facebook, or for a full switcher, Dropbox. (We’ll get back to this).
Like Microsoft (and now Google), AWS embraces and extends, throwing more and more products out there as soon as they are perceived as profitable.
AWS and Kubernetes
Which brings us to AWS’s relationship with Kubernetes. It’s no secret that AWS doesn’t see the point of it. They already have ECS, which is an ugly hulking brute of a product that makes perfect sense if you are heavily bought into AWS in the first place.
But there’s EKS, I hear you say. Yes, there is. I haven’t looked at it lately, but it took a long time to come, and when it did come it was not exactly feature rich. It felt like one cloud framework (AWS) had mated with another (K8s) and a difficult adolescent dropped out. Complaints continue of deployment ‘taking too long’, for example.
Finally taking AWSʼs EKS for a spin. While Iʼm bias for sure, this is not what I expect from a managed Kubernetes offering. Itʼs been 10 minutes and Iʼm still waiting for the control plane to come up before I can create nodes through a separate workflow. pic.twitter.com/NIJSZGp2Hc — Kelsey Hightower (@kelseyhightower) January 30, 2019
Like Microsoft and Linux, AWS ignored Kubernetes for as long as it could, and like Microsoft, AWS has been forced to ’embrace and extend’ its rival to protect its market share. I’ve been in meetings with AWS folk who express mystification at why we’d want to use EKS when ECS is available.
EKS and Lock-in
Which brings us to one of the big reasons AWS was able to deliver EKS, thereby ’embracing’ Kubernetes: IAM.
EKS (like all AWS services) is heavily integrated with AWS IAM. As most people know, IAM is the true source of AWS lock-in (and Lambda is the lock-in technology par excellence. You can’t move a server if there are none you can see).
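To make the IAM coupling concrete: authenticating kubectl to an EKS cluster is, under the hood, just a presigned AWS STS call, so cluster identity lives entirely in IAM. Here is a minimal Python sketch of that token minting, mirroring what aws eks get-token does; treat it as illustrative rather than production code:

import base64

import boto3
from botocore.signers import RequestSigner

def eks_bearer_token(cluster_name, region):
    # Mint a Kubernetes bearer token for EKS from IAM credentials.
    # The token is a presigned STS GetCallerIdentity URL, bound to the
    # cluster via the x-k8s-aws-id header. No Kubernetes-side secret is
    # involved: identity is pure IAM, which is exactly the lock-in point.
    session = boto3.Session(region_name=region)
    sts = session.client("sts")
    signer = RequestSigner(
        sts.meta.service_model.service_id,
        region, "sts", "v4",
        session.get_credentials(), session.events,
    )
    url = signer.generate_presigned_url(
        {
            "method": "GET",
            "url": f"https://sts.{region}.amazonaws.com/"
                   "?Action=GetCallerIdentity&Version=2011-06-15",
            "body": {},
            "headers": {"x-k8s-aws-id": cluster_name},
            "context": {},
        },
        region_name=region, expires_in=60, operation_name="",
    )
    encoded = base64.urlsafe_b64encode(url.encode()).decode().rstrip("=")
    return "k8s-aws-v1." + encoded

Swapping that identity layer out for something else is exactly the ‘fundamental change to a core security system’ discussed next.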
Shifting your identity management is pretty much the last thing any organisation wants to do. Asking your CTO to argue for a fundamental change to a core security system with less than zero benefit to the business in the near term and lots of risk is not a career-enhancing move.
On the other hand, similar arguments were put forward for why Linux would never threaten Windows, and while that’s true on the desktop, the advent of the phone and the Mac has reduced Windows to a secondary player in the consumer computing market. Just look at their failure to force their browsers onto people in the last 10 years.
So it only takes a few unexpected turns in the market for something else to gain momentum and knife the king of the hill. Microsoft know this, and AWS know this. It’s why Microsoft and AWS kept adding new products and features to their offering, and it’s why EKS had to come.
Microsoft eventually turned their oil tanker towards the cloud, going big on open source, and Linux and Docker, and all the things that would drag IT to their services. Oh, and you can use the same AD as your corporate network, and shift your Microsoft Windows licenses to the cloud. And the first one’s free. Microsoft don’t care about the OS anymore. Nobody does, not even RedHat, a business built around supporting a rival OS to Windows. The OS is dead, a commodity providing less and less surplus value.
Will Kubernetes force AWS to move their oil tanker towards Kubernetes? Can we expect to see them fully embrace Istio and Knative and whichever frameworks come after into their offering? (I don’t count how-to guides in their blogs).
AWS’ Competition and Cost
I don’t know. But here are some more reasons why it might.
Like Microsoft in the heyday of Windows OS, AWS has only one competitor: the private data centre. And like Microsoft’s competitor then (Linux), that competitor is painful, expensive and risky to adopt.
But what is the OS of that data centre? Before Kubernetes the answer would have been OpenStack. OpenStack is widely regarded as a failure, but in my experience it’s alive (if not kicking) in larger organisations. I’m not an OpenStack expert, but as far as I can tell, it couldn’t cover all the ground required to become a stable product across all the infra it needed to run on and be a commodity product. Again, this is something Microsoft ruled at back in the day: you could run it on ‘any’ PC and ‘any’ hardware and it would ‘just work’. Apple fought this by limiting and controlling the hardware (and making a tidy profit in the process). Linux had such community support that it eventually covered the ground it needed to in order to be useful enough for its use case.
OpenStack hasn’t got there, and tried to do too much, but it’s embedded enough that it has become the default base of a Kubernetes installation for those organisations that don’t want to tie into a cloud provider.
Interestingly, the reasons AWS put forward for why private clouds fail will be just as true for themselves: enterprises can’t manage elastic demand properly, whether it’s in their own data centre or when they’re paying someone else. Command and control financial governance structures just aren’t changing overnight to suit an agile provisioning model. (As an aside, if you want to transform IT in an enterprise, start with finance. If you can crack that, you’ve a chance to succeed with sec and controls functions. If you don’t know why it’s important to start with finance, you’ll definitely fail).
But enterprises have other reasons not to go all in on AWS: lock-in (see above) and economies of scale. We’ve already referenced Dropbox’s move from AWS to their own DC’s.
There’s an interesting parallel here with my experience of cloud services. Personally, I have found that cloud storage, despite its obvious benefits, still works out more expensive than storing my own data by quite some margin (yes, even if I include my own labour and redundancy requirements). Why is this? Well, for several reasons:
* I have the expertise and ability to design a solution that reduces labour cost
* Depreciation on spinning disks is very low (especially if you buy >2), and access speed is high
* I have enough data to store that the linear cloud cost starts to look expensive
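To put rough numbers on that last point, here is a toy model in Python. Every figure in it is invented for illustration; the point is only the shape of the curves: cloud cost scales linearly with data, while self-hosting is a step function of drive purchases.

import math

# Toy storage-cost model -- every number below is made up for illustration.
CLOUD_PER_GB_MONTH = 0.023   # assumed linear cloud price, $/GB-month
DISK_COST = 200.0            # assumed price of one 8 TB drive
DISK_TB = 8
YEARS = 3

def cloud_cost(tb):
    # Linear: you pay for every GB, every month.
    return tb * 1000 * CLOUD_PER_GB_MONTH * 12 * YEARS

def self_host_cost(tb):
    # Step function: buy drives (x2 for redundancy); labour assumed
    # near-zero given in-house expertise.
    return math.ceil(tb / DISK_TB) * 2 * DISK_COST

for tb in (0.1, 1, 4, 16):
    print(f"{tb:>5} TB  cloud ${cloud_cost(tb):>9,.2f}  self-hosted ${self_host_cost(tb):>8,.2f}")

With these made-up numbers the cloud wins below about half a terabyte; beyond that the lines cross and the gap only widens.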
These reasons (expertise, asset value, and economies of data scale) are some of the reasons why large orgs would do the same thing. Here’s an unscientific graph that expresses this:
[Graph: red line = cost of running Kubernetes]
The zero-day cost of running Kubernetes is very high (red line on the left), but the value increases exponentially as you scale up the service. This is why AWS makes so much money: the value to you as the user is massively greater than the cost for as long as its non-linear nature isn’t revealed to you. Put bluntly: if you get big enough, then AWS starts screwing you, but you might not care, since your business is scaling. You’re a frog, boiling in the kettle. If and when you realise where you are, it’s too late – getting out is going to be very very hard.
AWS and the ‘What if Bezos Loses His Mind?’ Factor
Linux only really got going when large companies got behind it. Similarly, Kubernetes has had significant funding from the start from two big players: Google and RedHat.
What’s going to really move the needle is if organisations take seriously AWS’s monopoly problem. Some have to take it seriously, because there are regulatory requirements to have plans to move somehow within a reasonable timeframe should Bezos lose his mind, or Amazon becomes riddled with Russian spies. Other reasons are that different cloud providers have different strengths, and large orgs are more likely to straddle providers as time goes on.
If enough organisations do that, then there’s little that AWS can do to counter the threat.
With Microsoft there was no alternative but to pay their tax if you wanted the software, but with Linux you really aren’t truly locked in to one provider. I’ve seen large orgs play chicken with RedHat during negotiations and put serious money into investigating using CentOS instead.
The same thing is happening with Kubernetes as happened with Linux. We’re already seeing Kubernetes adopt the ‘distro’ model of Linux, where a curated version of the platform is created as an easier-to-consume ‘flavour’. Early on there was RedHat’s OpenShift, which has since renamed itself ‘OKD’ (OpenShift Kubernetes Distribution, I assume).
Some orgs will pay the tax of having a large monopolistic supporter of Kubernetes run the show, but (as with Linux) there will always be the option of switching to in-house support, or another provider, because the core system isn’t owned by anyone.
Kubernetes is big enough and independent enough to survive on its own.
Look at OpenShift, and how it avoided accusations of being a Kubernetes fork. Whatever the legal arguments, RedHat’s protestations were not disingenuous – they know not only that money can be made on top of Open Source infrastructure, but that they benefit from its success too. They don’t need to fork Kubernetes. Interestingly, they did fork Docker, even before the OCI fork, and with good reason, as Docker were making decisions clearly designed for their own survival (hard-coded default registry being Docker’s own for reasons of ‘consistency’, for example).
Kubernetes doesn’t have this problem. I’ve not heard of any vendor pushing their own interests over others at the cost of anyone else into the codebase.
What does worry me (and others) is this:
[Image: Cloud Native Computing Foundation ‘Landscape’: there will be a test….]
Like Linux, there are a bewildering array of technologies sitting in ‘userland’ in various states of maturity and community acceptance, most of which likely will be out of date in a couple of years. I can barely remember what the various tools in logging do, let alone span the whole graph like an architect is supposed to.
If I’m using AWS I’m looking at that, thinking: what a headache! You may as well try and get to the bottom of sound in Linux, or consider all the options when deciding on a Linux desktop (45!).
My original thesis was that AWS is the new Windows to Kubernetes’ Linux. If that’s the case, the industry better hurry up with its distro management if it’s not going to go the way of OpenStack.
Or to put it another way: where is the data centre’s Debian? Ubuntu?
If you like this, you might like one of my books:
Learn Bash the Hard Way
Learn Git the Hard Way
Learn Terraform the Hard Way
Article word count: 414
HN Discussion: https://news.ycombinator.com/item?id=19447923
Posted by jeffbarr (karma: 13109)
Post stats: Points: 108 - Comments: 24 - 2019-03-20T23:11:51Z
#HackerNews #aws #deepracer #league
[Image: DeepRacer League logo]
Welcome to the world’s first global autonomous racing league, open to anyone. It’s time to race for prizes, glory, and a chance to advance to the AWS DeepRacer Championship Cup at re:Invent 2019 to win the coveted AWS DeepRacer Cup. Get on the track to compete in the live events at 20 AWS Summits worldwide or enter the monthly virtual races.
Summit Circuit: Find a Race
Virtual Circuit: Preview Sign Up
Join us at any of the 20 AWS Summits, globally, where we will help you build and train a model at a workshop, or you can bring one you have trained at home. You can then put your model to the test and compete on the track in the AWS Summit Expo.
Developers can also build models and compete online in the virtual league through the AWS DeepRacer console. The virtual races will take place monthly on new, increasingly challenging tracks and are open to all levels of expertise. Sign up for the preview to get on the list for early access.
Get started with machine learning
Whether you are new to machine learning or ready to build on your existing skills, we can help you get ready to race. Developers with no prior machine learning experience can get started by watching this Tech Talk to get familiar with the basics of reinforcement learning (a branch of machine learning thatʼs ideal for training autonomous vehicles) and AWS DeepRacer. If you are already comfortable with these concepts and ready to get hands-on today, you can dive in and build an AWS DeepRacer model using the Amazon SageMaker RL notebook.
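To give a concrete feel for what ‘building a model’ means here: the heart of a DeepRacer model is a small Python reward function that scores each step of the simulation. A minimal sketch (the params keys used below come from the documented DeepRacer input dictionary; the weighting itself is an arbitrary starting point):

def reward_function(params):
    # Toy DeepRacer reward: stay on the track and hug the centre line.
    # params is supplied by the simulator; all_wheels_on_track,
    # distance_from_center and track_width are documented keys.
    if not params["all_wheels_on_track"]:
        return 1e-3  # near-zero reward for leaving the track

    # Reward shrinks linearly as the car drifts from the centre line.
    half_width = params["track_width"] / 2.0
    return float(max(1e-3, 1.0 - params["distance_from_center"] / half_width))

Training then tunes the driving policy to maximise the cumulative reward this function hands out.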
Top the leaderboard at one of the 20 summit races or virtual monthly circuits and you’ll be heading on an expenses-paid trip to AWS re:Invent in Las Vegas. You’ll compete in the 2019 AWS DeepRacer Championship Cup, where the racer with the fastest time will become the overall 2019 AWS DeepRacer League Champion. Developers, the race is on!
Check out the AWS DeepRacer League action so far
Developers, start your engines for the 2019 AWS DeepRacer League
AWS DeepRacer Highlights from AWS re:Invent 2018
Learn About Winning with ML From The 2018 AWS DeepRacer Cup Champion!
2018 AWS DeepRacer Championship Cup - Final Race
Information on AWS DeepRacer pricing and integration with other AWS services.
AWS customers can sign up for the AWS DeepRacer preview.
Pre-order your AWS DeepRacer
Get hands-on with RL, experiment, and learn through autonomous driving.
#AdaCore AWS github repository is here: https://github.com/AdaCore/aws
Usually, the steps for compiling #Ada #AWS are like these:
- Clone the AWS repository.
- Edit the makefile.conf
- Do make setup.
- Do make.
- Install with make install.
Here are the important things to consider for compiling AWS successfully on #Manjaro.
For CPATH, make sure to include the GCC header directories that are not under the /usr/include folder. In my case, I have GCC version 8.2.1 and an x86_64 GNU/Linux installation, so the set command (for the fish shell) is:
set -U CPATH /usr/lib/gcc/x86_64-pc-linux-gnu/8.2.1/include/ /usr/lib/gcc/x86_64-pc-linux-gnu/8.2.1/include-fixed/ /usr/lib/gcc/x86_64-pc-linux-gnu/8.2.1/install-tools/include/
Cloning the Repository
Clone with recursive submodules!
git clone --depth 1 --recurse-submodules https://github.com/AdaCore/aws.git
The makefile.conf file
To start, try to compile with the minimum. Set everything to false, except ZLIB and DEBUG. Set SOCKET to std, NETLIB to ipv4 and DEFAULT_LIBRARY_TYPE to static, and set the prefix variable to your Ada directory (if not the /usr path).
Run make setup each time you edit this file.
To sum up, my makefile.conf looks like this:
TARGET = $(shell gcc -dumpmachine)
prefix = YOUR PREFIX PATH HERE
ENABLE_SHARED = $(shell $(GNAT) make -c -q -p \
  -XTARGET=$(TARGET) -XPRJ_TARGET=$(PRJ_TARGET) \
  -Pconfig/setup/test_shared 2>/dev/null && echo "true")
DEFAULT_LIBRARY_TYPE = static
XMLADA = false
ASIS = false
ZLIB = true
NETLIB = ipv4
SOCKET = std
LDAP = false
DEBUG = true
PROCESSORS = THE NUMBER OF PROCESSORS YOU HAVE HERE
CP = cp -p
GNAT = gnat
GPRBUILD = gprbuild
GPRCLEAN = gprclean
GPRINSTALL = gprinstall
GPS = gps
MKDIR = mkdir -p
PYTHON = python
RM = rm
SED = sed
PRJ_TARGET = UNIX
OTHER_LIBRARY_TYPE = \
  $(if $(filter-out static,$(DEFAULT_LIBRARY_TYPE)),static,relocatable)
ifeq ($(TARGET), $(shell gcc -dumpmachine))
IS_CROSS = false
GCC = gcc
else
IS_CROSS = true
GCC = $(TARGET)-gcc
endif
I modified some variables that supposedly would be assigned the correct value automatically. Also, the zlib library was installed on the system with pacman.
Just continue as usual:
make setup
make
make install
HN Discussion: https://news.ycombinator.com/item?id=19363961
Posted by dy (karma: 1086)
Post stats: Points: 99 - Comments: 82 - 2019-03-11T23:34:44Z
#HackerNews #amazon #aws #distro #due #elastic #for #issues #licensing #maintain #open #search
At AWS, we focus on solving problems for customers. Over the years, customer usage and dependencies on open source technologies have been steadily increasing; this is why we’ve long been committed to open source, and our pace of contributions to open source projects – both our own and others’ – continues to accelerate.
When AWS launches a service based on an open source project, we are making a long-term commitment to support our customers. We contribute bug fixes, security, scalability, performance, and feature enhancements back to the community. For example, we have been a significant contributor to Apache Lucene, which powers Amazon Elasticsearch Service. The Amazon EMR team has been making contributions to the Hadoop ecosystem for many years, and the Amazon Elastic Container Service for Kubernetes (EKS) team has been contributing to Kubernetes. We also invest in open source communities, training developers and operators, and sponsor open source events and conferences such as ApacheCon and KubeCon, and recently increased our support of the Apache Software Foundation. Marketing support helps communities by growing the number of end users and contributors, and accelerates the adoption of open source projects.
Many reasons drive our active participation in open source communities: First, it’s important to support healthy communities so that projects continue to develop and stay relevant. Second, maintaining an internal forked version of a project causes extra wasted effort, and can delay releasing updates to services as merges are made. Third, releasing new ideas as open source gathers others around the ideas to help move them into the mainstream. Fourth, open source collaboration across companies and academic institutions has produced some of the most significant breakthroughs in areas like Artificial Intelligence.
To get these benefits, customers must be able to trust that open source projects stay open. The maintainers of open source projects have the responsibility of keeping the source distribution open to everyone and not changing the rules midstream. When important open source projects that AWS and our customers depend on begin restricting access, changing licensing terms, or intermingling open source and proprietary software, we will invest to sustain the open source project and community. For example, recently there was increased concern from our customers that Oracle would stop supporting the version of Java that customers relied upon, or change the licensing terms, and customers had good reason to be concerned. We responded by offering the Corretto project, a no-cost, multi-platform, production-ready distribution of OpenJDK from Amazon. We invested to provide long-term consistency and confidence by committing that Amazon will distribute security updates to Corretto 8 at no cost until at least June, 2023, and to Corretto 11 until at least August, 2024. Corretto is a free, supported distribution that the community can now depend on while in parallel we continue to support and make contributions directly to OpenJDK.
Unfortunately, we are seeing other examples where open source maintainers are muddying the waters between the open source community and the proprietary code they create to monetize the open source. At AWS, we believe that maintainers of an open source project have a responsibility to ensure that the primary open source distribution remains open and free of proprietary code so that the community can build on the project freely, and the distribution does not advantage any one company over another. This was part of the promise the maintainer made when they gained developers’ trust to adopt the software. When the core open source software is completely open for anyone to use and contribute to, the maintainer (and anyone else) can and should be able to build proprietary software to generate revenue. However, it should be kept separate from the open source distribution in order to not confuse downstream users, to maintain the ability for anyone to innovate on top of the open source project, and to not create ambiguity in the licensing of the software or restrict access to specific classes of users.
If we look closely at many successful open source projects, they have all benefited from access to unfettered open source software. In fact, arguably those projects would not exist today without an ability to quickly assemble and innovate on top of pre-existing open source software. For example, a significant enabler to Elasticsearch is the Apache Lucene project, an Apache Software Foundation project which predates Elasticsearch by 11 years. Elasticsearch also leverages many additional permissively licensed open source projects such as the Jackson project for JSON parsing, Netty as the web container, and many more. The point being that open source software enables individuals and businesses to innovate faster, and downstream consumers depend on that ability. When maintainers insert confusion regarding the long-term viability of the open source, it impacts all downstream consumers.
Elasticsearch has played a key role in democratizing analytics of machine-generated data. It has become increasingly central to the day-to-day productivity of developers, security analysts, and operations engineers worldwide. Its permissive Apache 2.0 license enabled it to gain adoption quickly and allowed unrestricted use of the software. Unfortunately, since June 2018, we have witnessed significant intermingling of proprietary code into the code base. While an Apache 2.0 licensed download is still available, there is an extreme lack of clarity as to what customers who care about open source are getting and what they can depend on. For example, neither release notes nor documentation make it clear what is open source and what is proprietary. Enterprise developers may inadvertently apply a fix or enhancement to the proprietary source code. This is hard to track and govern, could lead to breach of license, and could lead to immediate termination of rights (for both proprietary free and paid). Individual code commits also increasingly contain both open source and proprietary code, making it very difficult for developers who want to only work on open source to contribute and participate. In addition, the innovation focus has shifted from furthering the open source distribution to making the proprietary distribution popular. This means that the majority of new Elasticsearch users are now, in fact, running proprietary software. We have discussed our concerns with Elastic, the maintainers of Elasticsearch, including offering to dedicate significant resources to help support a community-driven, non-intermingled version of Elasticsearch. They have made it clear that they intend to continue on their current path.
Meanwhile, we have gotten feedback from customers and partners that these changes are concerning to them as well. It has created uncertainty about the longevity of the open source project as it is getting less innovation focus. Customers also want the freedom to run the software anywhere and self-support at any point in time if they need to. We have therefore decided to partner with others such as Expedia Group and Netflix to create a new open source distribution of Elasticsearch named “Open Distro for Elasticsearch.” Open Distro for Elasticsearch is a value-added distribution that is 100% open source, which will be focused on driving innovation with value-added features to ensure users have a feature-rich option that is fully open source.
“Open source software and the freedoms it provides are important to Expedia Group,” said Subbu Allamaraju, VP Cloud Architecture at Expedia Group. “We are excited about the Open Distro for Elasticsearch initiative, which aims to accelerate the feature set available to open source Elasticsearch users like us. This initiative also helps in reassuring our continued investment in the technology.”
“At Netflix, we are committed to open source. We are both major users and contributors to open source,” said Christian Kaiser, VP Platform Engineering at Netflix. “Open Distro for Elasticsearch will allow us to freely contribute to an Elasticsearch distribution, that we can be confident will remain open source and community-driven.”
As was the case with Java and OpenJDK, our intention is not to fork Elasticsearch, and we will be making contributions back to the Apache 2.0-licensed Elasticsearch upstream project as we develop add-on enhancements to the base open source software. In the first release, we will include many new advanced but completely open source features including encryption-in-transit, user authentication, detailed auditing, granular roles-based access control, event monitoring and alerting, deep performance analysis, and SQL support.
The new advanced features of Open Distro for Elasticsearch are all Apache 2.0 licensed. With the first release, our goal is to address many critical features missing from open source Elasticsearch, such as security, event monitoring and alerting, and SQL support. We think these features will be exciting and valuable to developers and will encourage them to download, collaborate, and ultimately, contribute to the community. Many of these features are ones that we have been working on for inclusion in Amazon Elasticsearch Service. Open Distro for Elasticsearch enables users to run the same feature-rich distribution anywhere they wish, such as on-premises, on laptops, or in the cloud.
Our aim for Open Distro for Elasticsearch is to provide developers with the freedom to contribute to open source value-added features on top of the Apache 2.0-licensed Elasticsearch upstream project. We plan to contribute patches to the open source Elasticsearch base back upstream for the benefit of all. Open Distro for Elasticsearch will welcome developers and contributors from across the industry to invest in these important technologies with the confidence that they will always remain open source and permissively licensed. The whole idea of open source is that multiple users and companies can put it to work and everyone can contribute to its improvement. Open Distro for Elasticsearch is consistent with our commitment to make the necessary investments to keep open source truly open and enable anyone to benefit from our contributions.
You can download, begin using, and contribute to Open Distro for Elasticsearch today. The security features available in this initial release include encryption-in-transit, native Active Directory, LDAP, and OpenID authentication, roles-based and granular access control, and audit logging. Other key features include integrated event monitoring and alerting that opens up the full flexibility of the Elasticsearch query language to notify you of changes in your data, SQL support including REST and JDBC support, and an advanced performance analyzer. To download and learn more about Open Distro for Elasticsearch, visit https://opendistro.github.io/for-elasticsearch/.
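As a quick taste of the SQL support, the plugin exposes a REST endpoint on the cluster itself. A minimal sketch against a local demo cluster (the _opendistro/_sql path and the admin/admin demo credentials are from the Open Distro docs; the web-logs index is an invented example):

import json

import requests  # third-party: pip install requests

# Assumes a local Open Distro for Elasticsearch demo cluster on :9200.
resp = requests.post(
    "https://localhost:9200/_opendistro/_sql",
    auth=("admin", "admin"),   # default demo credentials -- change in production
    verify=False,              # the demo TLS certificate is self-signed
    headers={"Content-Type": "application/json"},
    data=json.dumps({"query": "SELECT status, COUNT(*) FROM web-logs GROUP BY status"}),
)
print(resp.json())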
For more details, see Jeff Barr’s post New – Open Distro for Elasticsearch.
photo credit: taken by Adrian Cockcroft at Petra, March 10, 2019
A performance comparison between three different methods of deploying an API on AWS
Article word count: 1882
HN Discussion: https://news.ycombinator.com/item?id=19233466
Posted by abd12 (karma: 700)
Post stats: Points: 108 - Comments: 50 - 2019-02-23T14:38:32Z
#HackerNews #api #aws #comparison #containers #performance #serverless
In my last post, I showed how to connect AWS API Gateway directly to SNS using a service integration.
A few people asked me about the performance implications of this architecture.
Is it significantly faster than using a Lambda-based approach?
How does it compare to EC2 or ECS?
My answer: I don’t know! But I know how to find out (sort of).
In this post, we do a performance bake-off of three ways to deploy the same HTTP endpoint in AWS:
* Using an API Gateway service proxy
* With the new hotness, AWS Lambda
* With the old hotness, Docker containers on AWS Fargate
We’ll deploy our three services and throw 15,000 requests at each of them. Who will win?
If you’re impatient, skip ahead to the full results.
Before we review the results, let’s set up the problem.
I wanted to keep our example as simple as possible so that the comparison is limited to the architecture itself rather than the application code. Further, I wanted an example that would work with the API Gateway service proxy so we could use it as a comparison as well.
I decided to set up a simple endpoint that receives an HTTP POST request and forwards the request payload into an AWS SNS topic.
Let’s take a look at the architecture and deployment methods for each of our three approaches.
Go Serverless with AWS Lambda
The first approach is to use AWS API Gateway and AWS Lambda. Our architecture will look like this:
[Diagram: SNS publish with Lambda]
A user will make an HTTP POST request to our endpoint, which will be handled by API Gateway. API Gateway will forward the request to our AWS Lambda function for processing. The Lambda function will send our request payload to the SNS topic before returning a response.
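The Lambda function itself is only a few lines. A minimal sketch of such a handler (I’m assuming the topic ARN arrives via an environment variable; the linked repo may wire it differently):

import json
import os

import boto3

sns = boto3.client("sns")  # created once so warm invocations reuse the client

def handler(event, context):
    # Forward the API Gateway proxy event's body to an SNS topic.
    sns.publish(
        TopicArn=os.environ["TOPIC_ARN"],  # assumed to be set at deploy time
        Message=event["body"],
    )
    return {"statusCode": 200, "body": json.dumps({"ok": True})}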
If you want to deploy this example, the code is available here. I use the Serverless Framework for deploying the architecture because I think it’s the easiest way to do it.*
*Full disclosure: I work for Serverless, Inc., creators of the Serverless Framework. Want to come work with me on awesome stuff? We’re hiring engineers. Please reach out if you have any interest.
Skipping the middleman with API Gateway service proxy
The second approach is similar to the first, but we remove Lambda from the equation. We use an API Gateway service proxy integration to publish directly to our SNS topic from API Gateway:
[Diagram: API Gateway service proxy]
Before doing any testing, my hunch is that this will be faster than the previous method since we’re cutting out a network hop in the middle. Check below for full results. Note that API Gateway service proxies won’t work for all parts of your infrastructure, even if the performance is faster.
If you want additional details on how, when, and why to use this, check out my earlier post on using an API Gateway service proxy integration. It does a step-by-step walkthrough of setting up your first service proxy.
To deploy this example, there is a CloudFormation template here. This will let you quickly spin up the stack for testing.
Containerizing your workload with Docker and AWS Fargate
The final approach is to run our compute in Docker containers. There are a few different approaches for doing this on AWS, but I chose to use AWS Fargate.
The architecture will look as follows:
[Diagram: Fargate to SNS]
Users will make HTTP POST requests to an HTTP endpoint, which will be handled by an Application Load Balancer (ALB). This ALB will forward requests to our Fargate container instances. The application on our Fargate container instances will forward the request payload to SNS.
With Fargate, you can run tasks or services. A task is a one-off container that runs until it dies or finishes execution. A service is a declared number of instances of a task; Fargate ensures that the correct number of instances of your service is running.
We’ll use a service so that we can run a sufficient number of instances. Further, you can easily set up a load balancer for managing HTTP traffic across your service instances.
You can find code and instructions for deploying this architecture to Fargate here. I use the incredible fargate CLI tool, which makes it dead simple to go from Dockerfile to running container.
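Under the hood, ‘a service with N copies of a task’ is a single ECS API call, which tools like the fargate CLI wrap for you. A hedged boto3 sketch (every resource name below is a placeholder and must already exist):

import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="perf-test",               # placeholder cluster name
    serviceName="sns-forwarder",
    taskDefinition="sns-forwarder:1",  # placeholder task definition
    desiredCount=50,                   # Fargate keeps this many tasks alive
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0example"],
            "securityGroups": ["sg-0example"],
            "assignPublicIp": "ENABLED",
        }
    },
)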
Now that we know our architecture, let’s jump into the bakeoff!
After I deployed all three of the architectures, I wanted to do testing in two phases.
First, I ran a small sample of 2000 requests to check the performance of new deploys. This was running at around 40 requests per second.
Then, I ran a larger test of 15,000 requests to see how each architecture performed once warmed up. For this larger test, I was sending around 100 requests per second.
Let’s check the results in order.
When I ran my initial Fargate warmup, I got the following results:
[Screenshot: initial Fargate load-test results]
Around 10% of my requests were failing altogether!
When I dug in, it looked like I was overwhelming my container instances, causing them to die.
I’m not a Docker or Flask performance expert, and that’s not the goal of this exercise. To remedy this, I decided to bump the specs on my deployments.
The general goal for this bakeoff is to get a best-case outcome for each of these architectures, rather than an apples-to-apples comparison of cost vs performance.
For Fargate, this meant deploying 50 instances of my container with pretty beefy settings — 8 GB of memory and 4 full CPU units per container instance.
For the Lambda service, I set memory to the maximum of 3GB.
For APIG service proxy, there are no knobs to tune. 🎉
With that out of the way, let’s check the initial results.
Initial warmup results
For the first 2000 requests to each type of endpoint, the performance results are as follows:
[Chart: API performance results -- warmup]
Note: Chart using a log scale
The raw data for the results (latencies in milliseconds by percentile) are:
Endpoint type      | # requests | 50% | 66% | 75% | 80% | 90% | 95% | 98% | 99% | 100%
APIG Service Proxy | 2051       | 80  | 90  | 110 | 120 | 150 | 190 | 220 | 250 | 520
AWS Lambda         | 2084       | 94  | 100 | 110 | 120 | 150 | 180 | 210 | 290 | 5100
Fargate            | 2047       | 68  | 73  | 76  | 80  | 110 | 110 | 130 | 140 | 550
Takeaways from the warmup test
1. Fargate was consistently the fastest across all percentiles.
2. AWS Lambda had the longest tail on all of them. This is due to the cold start problem.
3. API Gateway service proxy outperformed AWS Lambda at the median, but performance in the upper-middle of the range (75% - 99%) was pretty similar between the two.
Now that we’ve done our warmup test, let’s check out the results from the full performance test.
Full performance test results
For the main part of the performance test, I ran 15,000 requests at each of the three architectures. I planned to use 500 ‘users’ in Locust to accomplish this, though, as noted below, I had to make some modifications for Fargate.
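For reference, a Locust test of this shape is tiny. A minimal sketch of an equivalent locustfile, written against the current Locust API rather than the 2019-era classes (the /entries path and payload are placeholders for whichever endpoint is under test):

# locustfile.py -- run with: locust -f locustfile.py --host https://YOUR-ENDPOINT
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    wait_time = between(1, 3)  # seconds each simulated user waits between requests

    @task
    def post_payload(self):
        # POST a small JSON body, as in the bake-off.
        self.client.post("/entries", json={"message": "hello"})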
First, let’s check the results:
[Chart: API performance results -- full test]
Note: Chart using a log scale
The raw data for the results (latencies in milliseconds by percentile) are:
Endpoint type      | # requests | 50% | 66% | 75% | 80% | 90% | 95% | 98% | 99% | 100%
APIG Service Proxy | 15185      | 73  | 79  | 84  | 90  | 130 | 180 | 250 | 290 | 670
AWS Lambda         | 15249      | 86  | 92  | 98  | 110 | 140 | 160 | 180 | 220 | 920
Fargate            | 15057      | 69  | 72  | 75  | 77  | 91  | 110 | 130 | 170 | 800
Takeaways from the full performance test
1. Fargate was still the fastest across the board, though the gap narrowed. API Gateway service proxy was nearly as fast as Fargate at the median, and AWS Lambda wasn’t far behind.
2. The real differences show up between the 80th and 99th percentile. Fargate had a lot more consistent performance as it moved up the percentiles. The 98th percentile request for Fargate is less than double the median (130ms vs 69ms, respectively). In contrast, the 98th percentile for API Gateway service proxy was more than triple the median (250ms vs 73ms, respectively).
3. AWS Lambda outperformed the API Gateway service proxy at some higher percentiles. Between the 95th and 99th percentiles, AWS Lambda was actually faster than the API Gateway service proxy. This was surprising to me.
I mentioned above that I wanted to use 500 Locust ‘users’ when testing the application. Both AWS Lambda and API Gateway service proxy handled 15000+ requests without a single error.
With Fargate, I consistently had failed requests:
[Screenshot: Locust output showing failed Fargate requests]
I finally throttled it down to 200 Locust users when testing Fargate, which got my error rate down to around 3% of overall requests. Still, this was infinitely higher than the error rate with AWS Lambda.
I’m not saying you can’t deploy a Fargate service without tolerating a certain percentage of failures. Rather, performance tuning Docker containers was more time than I wanted to spend on a quick performance test.
UPDATED NOTES ON FARGATE ERRORS
I’ve gotten some pushback saying that the test is worthless due to the Fargate errors, or that I was way over-provisioned on Fargate.
A few notes on that:
First, Nathan Peck, an awesome and helpful container advocate at AWS, reached out to say the failures were likely around some system settings like the ‘nofile’ ulimit.
That sounds pretty reasonable to me, but I haven’t taken the time to test it out. I don’t have huge interest in digging deep into container performance tuning for this. If that’s something you’re into, let me know and I’ll link to your results if they’re interesting!
The key points on Fargate are:
1. You can get much lower failure rates than I got. You’ll just need to tune it.
2. I didn’t use 50 instances with a ton of CPU and memory because I thought Fargate needed it. I used it because I didn’t want to think about resource exhaustion at all (even though I did end up hitting the open file limits). I was going for a best-case scenario — if the load balancer, container, and SNS are all humming, what kind of latency can we get?
3. I don’t think this invalidates the general results of what a basic ‘optimistic-case’ could look like with Fargate within these general constraints (multiple instances + Python + calling SNS).
If you’re making a million dollar decision on this, you should run your own tests.
If you want a quick, fun read, these results should be directionally correct.
This was a fun and enlightening experience for me, and I hope it was helpful for you. There’s not a clear right answer on which architecture you should use based on these performance results.
Here’s how I think about it:
* Do you need high performance? Using dedicated instances with Fargate (or ECS/EKS/EC2) is your best bet. This will require more setup and infrastructure management, but that may be necessary for your use case.
* Is your business logic limited? If so, use API Gateway service proxy. It’s a performant, low-maintenance way to stand up endpoints and forward data to another AWS service.
* In the vast number of other situations, use AWS Lambda. Lambda is dead-simple to deploy (if you’re using a deployment tool). It’s reliable and scalable. You don’t have to worry about tuning a bunch of knobs to get solid performance. And it’s code, so you can do anything you want. I use it for almost everything.
A TCO comparison between the Lambda Hyperplane 8 x V100 Server and the AWS p3dn.24xlarge instance. The Hyperplane cost comparison is very similar to that of the DGX-1.
Article word count: 973
HN Discussion: https://news.ycombinator.com/item?id=19196328
Posted by rbranson (karma: 3955)
Post stats: Points: 104 - Comments: 96 - 2019-02-19T03:09:53Z
#HackerNews #aws #comparison #cost #instance #on-prem #server #v100
Deep Learning requires GPUs, which are very expensive to rent in the cloud. In this post, we compare the cost of buying vs. renting a GPU server. We use AWSʼs p3dn.24xlarge as the cloud point of comparison. Hereʼs what weʼll do:
* Select a server with similar hardware to AWSʼs p3dn.24xlarge * Compare Deep Learning performance of the selected server vs. the p3dn.24xlarge * Compare Total Cost of Ownership (TCO) of the selected server vs. the p3dn.24xlarge
Selecting a server similar to a p3dn.24xlarge
We use the Lambda Hyperplane - Tesla V100 Server, which is similar to NVIDIA’s DGX-1. Here’s a side-by-side hardware comparison with the p3dn.24xlarge:
Component | p3dn.24xlarge | Lambda Hyperplane
GPU | 8x NVIDIA Tesla V100 (32 GB) | 8x NVIDIA Tesla V100 (32 GB)
NVLink | Hybrid Cube Mesh Topology | Hybrid Cube Mesh Topology
CPU | Intel Xeon P-8175M (24 cores) | Intel Xeon Platinum 8168 (24 cores)
Storage | NVMe SSD | NVMe SSD
Software | Deep Learning AMI (Ubuntu) Version 20.0 | Lambda Stack
The purchased Tesla V100 Server...
* Is 2.6% faster than AWS’s p3dn.24xlarge for FP32 training
* Is 3.2% faster than AWS’s p3dn.24xlarge for FP16 training
* Has a Total Cost of Ownership (TCO) that’s $69,441 less than a p3dn.24xlarge 3-year contract with partial upfront payment. Our TCO includes energy, hiring a part-time system administrator, and co-location costs. In addition, you still get value from the system after three years, unlike the AWS instance.
TCO (Total Cost of Ownership) Comparison
Letʼs break out the comparison based on the three choices you have from AWS for a 3-year contract: 0% upfront, partial upfront, and 100% upfront. The Hyperplane is 100% upfront because youʼre purchasing the hardware.
Cost item | AWS (0%) | AWS (Partial) | AWS (100%) | Hyperplane
Upfront | $0 | $126,729 | $238,250 | $109,008
Annual rental | $91,244 | $42,241 | $0 | $0
Annual co-location cost | $0 | $0 | $0 | $15,000
Annual admin cost | $0 | $0 | $0 | $10,000
Total cost over 3 years | $273,732 | $253,444 | $238,250 | $184,008
In all cases, including co-lo and admin costs, the Hyperplane on-prem server beats AWS. Our co-location cost was based on quotation averages from Equinix and Hurricane Electric. Our annual administration cost was based on quotes for data center co-location administration from IT service providers. We encourage you to calculate your own co-location and administration costs and input your own numbers into this model.
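The table’s totals are easy to sanity-check in a few lines of Python (figures straight from the table; small rounding differences aside, upfront plus three years of recurring costs reproduces each column):

YEARS = 3

def tco(upfront, annual_rental=0, colo=0, admin=0):
    # Total cost of ownership: upfront plus recurring costs over the term.
    return upfront + YEARS * (annual_rental + colo + admin)

aws_zero    = tco(0, annual_rental=91_244)             # 273,732
aws_partial = tco(126_729, annual_rental=42_241)       # ~253,452 (table: 253,444)
aws_full    = tco(238_250)                             # 238,250
hyperplane  = tco(109_008, colo=15_000, admin=10_000)  # 184,008

print("AWS 0% upfront:      ", aws_zero)
print("AWS partial upfront: ", aws_partial)
print("AWS 100% upfront:    ", aws_full)
print("Hyperplane:          ", hyperplane)
print("Partial - Hyperplane:", aws_partial - hyperplane)  # ~69,444 vs the ~$69k quoted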
If you purchase your own hardware, you can use the Lambda Stack Deep Learning environment to manage your systemʼs drivers, libraries, and frameworks.
* Pre-installed GPU-enabled TensorFlow, Keras, PyTorch, Caffe, Caffe 2, Theano, CUDA, cuDNN, and NVIDIA GPU drivers.
* If a new version of any framework is released, Lambda Stack manages the upgrade. Using Lambda Stack greatly reduces package management & Linux system administration overhead.
* Dockerfiles available for creating a Lambda Stack container.
The equivalent software is the AWS Deep Learning AMI with Ubuntu. For the DGX-1 youʼll need to purchase a license to the NVIDIA GPU Cloud container registry.
Benchmark I: Synthetic Data
To confirm that youʼre getting your moneyʼs worth in terms of compute, letʼs do some benchmarks. We first use the official TensorFlow benchmark suite to compare the raw training throughput of these two servers. Synthetic data is used to isolate GPU performance from CPU pre-processing performance and reduce spurious I/O bottlenecks. We use replicated training with NCCL to maximize the benefit of NVLink in both machines. Similarly, batch sizes are set to maximize the utilization of the V100ʼs 32GB memory.
Our benchmark results showed that Lambda Hyperplane (red) outperformed p3dn.24xlarge (blue) in ALL tasks:
Training with synthetic data reveals GPU horsepower in terms of throughput and device-to-device bandwidth. However, real-world performance can be impacted by other factors such as I/O bottleneck, CPU speed, etc. At the end of the day, what matters is how the entire system, including hardware and software, works together. Next, we will compare these two servers using a real-world training task.
Benchmark II: Real Data Convergence Speed (ResNet50 on ImageNet)
For our real data benchmark, we use Stanford DAWNBench -- the de-facto benchmark suite for end-to-end deep learning training and inference. The task is to finish the 26 training epochs as described here: Now anyone can train Imagenet in 18 minutes. At the time of writing, this represents the fastest way to train a state of the art image classification network on ImageNet. The original blog post reported 18 minutes for a cluster of 16 AWS p3.16xlarge instances to finish the task ($117.5 on demand cost, or $76.37 with a one-year subscription plan). We are interested to know how costly it is to finish the same task with a single beefy server. This training task completes when it reaches 93% in Top-5 classification accuracy.
We first reproduced the training procedure on an AWS p3dn.24xlarge instance. It took 1.63 hours to finish. This translates to $50.98 with on-demand instance pricing or $15.75 with a 3-year partially reserved instance contract.
We then reproduced the same training procedure on a Lambda Hyperplane; it took 1.45 hours. That time translates to just $10.15 based on our 3-year TCO.
Metric | AWS p3dn.24xlarge | Lambda Hyperplane
Epochs | 26 | 26
Training duration | 1 hour, 38 minutes | 1 hour, 27 minutes
Training cost | $15.75 | $10.15
In this blog, we benchmarked the Lambda Hyperplane and compared it with the performance of the AWS p3dn.24xlarge – the fastest AWS instance for training deep neural networks. We observed that the Lambda Hyperplane is not only faster but also significantly more cost-effective. With larger on-prem or co-located cluster deployments, there are even further cost savings and performance benefits. System administration costs donʼt rise as quickly as the number of machines in the cluster, and the machines can benefit from 100Gbps InfiniBand connections under the same rack. However, actual multi-node performance is outside of the scope of this TCO analysis.
Reproducing these results
You can use our GitHub repos to reproduce these benchmarks.
Benchmark I: Synthetic Data
git clone https://github.com/lambdal/lambda-tensorflow-benchmark.git --recursive
Benchmark II: Real Data (ResNet50 on ImageNet)
git clone https://github.com/lambdal/imagenet18.git
If you have any questions, please comment below and weʼll happily answer them for you. If you have any direct questions you can always email email@example.com.
Article word count: 1117
HN Discussion: https://news.ycombinator.com/item?id=19187456
Posted by ingve (karma: 97710)
Post stats: Points: 135 - Comments: 7 - 2019-02-17T23:02:13Z
#HackerNews #aws #nitro #system
At Tuesday Night Live with James Hamilton at the 2016 AWS re:Invent conference, I introduced the first Amazon Web Services custom silicon. The ASIC I showed formed the foundational core of our second generation custom network interface controllers and, even back in 2016, there was at least one of these ASICs going into every new server in the AWS fleet. This work has continued for many years now, and this part and subsequent generations form the hardware basis of the AWS Nitro System. The Nitro System is used to deliver these features for Amazon Elastic Compute Cloud (EC2) instance types:
1. High speed networking with hardware offload
2. High speed EBS storage with hardware offload
3. NVMe local storage
4. Remote Direct Memory Access (RDMA) for MPI and Libfabric
5. Hardware protection/firmware verification for bare metal instances
6. All business logic needed to control EC2 instances
We continue to consume millions of the Nitro ASICs every year so, even though it’s only used by AWS, it’s actually a fairly high volume server component. This and follow-on technology has been supporting much of the innovation going on in EC2, but I haven’t had a chance to get into much detail on how Nitro actually works.
At re:Invent 2018 Anthony Liguori, one of the lead engineers on the AWS Nitro System project gave what was, at least for me, one of the best talks at re:Invent outside of the keynotes. It’s worth watching the video (URL below) but I’ll cover some of what Anthony went through in his talk here.
The Nitro System has powered all new EC2 instance types over the last couple of years. There are three major components:
1. Nitro Card I/O Acceleration
2. Nitro Security Chip
3. Nitro Hypervisor
Different EC2 server instance types include different Nitro System features and some server types have many Nitro System cards that implement the five main features of the AWS Nitro System:
These features formed the backbone for Anthony Liguori’s 2018 re:Invent talk and he went through some of the characteristics of each.
Nitro Card for VPC
The Nitro card for VPC is essentially a PCIe attached Network Interface Card (NIC), often called a network adapter or, in some parts of the industry, a network controller. This is the card that implements the hardware interface between EC2 servers and the network connection or connections implemented on that server type. And, like all NICs, interfacing with it requires a specific device driver to communicate with the network adapter. In the case of AWS NICs, the Elastic Network Adapter (ENA) provides that device driver support. This driver is now included in all major operating systems and distributions.
The Nitro Card for VPC supports network packet encapsulation/decapsulation, implements EC2 security groups, enforces limits, and is responsible for routing. Having these features implemented off the server hardware rather than in the hypervisor lets customers fully use the underlying server without impacting network performance or impacting other users, and AWS doesn’t have to keep some server cores unavailable to customers to handle networking tasks. It also allows secure networking support without requiring server resources to be reserved for AWS use. The largest instance types get access to all server cores.
It wasn’t covered in the talk but the Nitro Card for VPC also supports Remote Direct Memory Access (RDMA) networking. The Elastic Fabric Adapter (EFA) supports both the OpenFabrics Alliance Libfabric API or the popular Message Passing Interface (MPI). These APIs both provide network access with operating system bypass when used with EFA. MPI is in common use in high performance computing applications and, to a lesser extent, in latency sensitive data intensive applications and some distributed databases.
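For a sense of what MPI looks like to application code: the application just speaks MPI, and EFA/Libfabric slot in underneath. A tiny hedged sketch using mpi4py (a generic MPI program, nothing EFA-specific; run it under mpirun on an EFA-enabled cluster to get the operating system bypass):

from mpi4py import MPI  # third-party: pip install mpi4py

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# A toy ring exchange; the MPI layer (and EFA beneath it) handles transport.
right = (rank + 1) % size
left = (rank - 1) % size
token = comm.sendrecv(f"hello from rank {rank}", dest=right, source=left)
print(f"rank {rank} received: {token}")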
Nitro Card for EBS
The Nitro Card for EBS supports storage acceleration for EBS. All instance local storage is implemented as NVMe devices and the Nitro Card for EBS supports transparent encryption, limits to protect the performance characteristics of the system for other users, drive monitoring to monitor SSD wear, and it also supports bare metal instance types.
Remote storage is again implemented as NVMe devices, but this time as NVMe over Fabrics, supporting access to EBS volumes, again with encryption, without impacting other EC2 users, and with security even in a bare metal environment.
The Nitro card for EBS was first launched in the EC2 C4 instance family.
Nitro Card for Instance Storage
The Nitro Card for Instance Storage also implements NVMe (Non-Volatile Memory Express) for local EC2 instance storage.
Nitro Card Controller
The Nitro Card Controller coordinates all other Nitro cards, the server hypervisor, and the Nitro Security Chip. It implements the hardware root of trust using the Nitro Security Chip and supports instance monitoring functions. It also implements the NVMe controller functionality for one or more Nitro Cards for EBS.
Nitro Security Chip
The Nitro Security Chip traps all I/O to non-volatile storage, including the BIOS and all I/O device firmware and any other controller firmware on the server. This is a simple approach to security where the general purpose processor is simply unable to change any firmware or device configuration. Rather than accept the error-prone and complex task of ensuring access is approved and correct, no access is allowed: EC2 servers can’t update their own firmware. This is GREAT from a security perspective, but the obvious question is how the firmware gets updated. It’s updated by AWS, and AWS only, through the Nitro System.
The Nitro Security Chip also implements the hardware root of trust. This system replaces tens of millions of lines of code that form the Unified Extensible Firmware Interface (UEFI) and supports secure boot. It starts the server up untrusted, then measures every firmware system on the server to ensure that none has been modified or changed in any unauthorized way. Each checksum (device measure) is checked against the verified correct checksum stored in the Nitro Security Chip.
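The measurement step is easy to illustrate in a generic way (this is a conceptual sketch of measured boot, not AWS’s actual Nitro implementation; file names and digests below are placeholders): hash each firmware image and trust the machine only if every digest matches a known-good value held in tamper-resistant storage.

import hashlib

# Conceptual sketch only -- not AWS's Nitro code. Known-good digests would
# live in tamper-resistant storage such as the security chip.
KNOWN_GOOD = {
    "bios.bin":   "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
    "nic-fw.bin": "fcde2b2edba56bf408601fb721fe9b5c338d10ee429ea04fae5511b68fbf8fb9",
}

def measure(path):
    # A 'measurement' is just a cryptographic hash of the firmware image.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def system_trusted(images):
    # images maps a firmware name to the path of the blob read from the device.
    return all(measure(path) == KNOWN_GOOD[name] for name, path in images.items())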
The Nitro System supports key network, server, security, firmware patching, and monitoring functions, freeing up the entire underlying server for customer use. This allows EC2 instances to have access to all cores – none need to be reserved for storage or network I/O. This gives more resources over to our largest instance types for customer use – we don’t need to reserve resources for housekeeping, monitoring, security, network I/O, or storage. The Nitro System also makes possible the use of a very simple, lightweight hypervisor that is just about always quiescent, and it allows us to securely support bare metal instance types.
More data on the AWS Nitro System from Anthony Liguori, one of the lead engineers behind the software systems that make up the AWS Nitro System:
Three keynotes for a fast-paced view of what’s new across all of AWS:
Article word count: 525
HN Discussion: https://news.ycombinator.com/item?id=19058702
Posted by petercooper (karma: 39213)
Post stats: Points: 154 - Comments: 56 - 2019-02-01T21:15:19Z
#HackerNews #amazons #aws #drives #half #income #more #operating #than
For Amazon, the cloud is the little engine that could. Amazon Web Services comprised just 11% of the companyʼs overall sales in 2018, but delivered more operating income than all other business units combined.
In financial results delivered Thursday, Amazon Web Services Inc. drove $7.43 billion in sales for the quarter ending December 31, up 45% from $5.11 billion year-over-year. (See Amazon Reports Q4 Sales Up 20% to $72.4B.)
Thatʼs the same year-over-year growth rate (Q4 2018 over Q4 2017) for AWS that it saw in the year-ago quarter (Q4 2017 over Q4 2016).
Operating income for AWS was $2.17 billion, up from $1.35 billion in the year-ago quarter.
For calendar 2018, AWS sales were $25.65 billion, up 47% from $17.45 billion year-over-year. That means sales growth accelerated: 47% in 2018 over 2017, compared with 43% in 2017 over 2016.
Amazon is cleaning up in the cloud, much as the author’s cat cleans itself in this Amazon box.
For AWSʼs sales growth to accelerate in that fashion is noteworthy because AWS is by far the leader in cloud market share, ahead of second-ranked Microsoft Corp. (Nasdaq: MSFT) by a wide margin; accelerating growth speaks to overall strength for AWS and the cloud market in general.
Operating income for AWS was $7.29 billion in 2018, up from $4.31 billion in 2017.
AWS represents a growing share of Amazon.com Inc. (Nasdaq: AMZN)ʼs overall revenue (by a smidge) -- 11% in 2018 compared with 10% in the previous year, and 10% in the fourth quarter of 2018, compared with 8% in the year-ago quarter.
And AWS is the lionʼs share of Amazonʼs operating income: Nearly 58% in the fourth quarter of 2018, and nearly 59% for the full year.
For Amazon overall, including its mainstay retail business, quarterly sales were up 20% to $72.4 billion after a strong holiday season. Net income increased to $3 billion in the fourth quarter, or $6.04 per diluted share, compared with $1.9 billion or $3.75 per diluted share in the year-ago quarter.
For the full year, net sales were up 31% to $232.9 billion. Net income increased to $10.1 billion, or $20.14 per diluted share, compared with $3 billion or $6.15 per diluted share in 2017.
The Echo Dot was the best-selling item across all products on Amazon globally, Amazon CEO Jeff Bezos said in its earnings press release. "Alexa was very busy during her holiday season," he said.
For the first quarter of 2019, Amazon anticipates net sales of $56 billion to $60 billion, up 10-18% year over year. Operating income is expected to be $2.3 billion to $3.3 billion, compared with $1.9 billion in the first quarter of 2018.
Amazon stock traded at $1,634 after hours, down 4.93%.
Amazon is the second major US cloud provider to report quarterly earnings this month. Microsoft reported that Azure revenues rose 76% Wednesday -- not an apples-to-apples comparison with AWS, as Microsoft breaks out its cloud finances in multiple pieces and reports those pieces mixed in with other business units. (See Microsoft Azure Revenues Climb 76%.)
— Mitch Wagner, Executive Editor, Light Reading