A Blog by Jonathan Low

 

Nov 11, 2013

Amazon Feels Pressure to Lower Cost of Cloud Big Data Processing

Maybe we should call it the 'never good enough' economy. Records are made to be broken, there is no rest for the weary, and the relentless pressure to recalibrate what is acceptable to customers, and to the enterprises that supply them, never seems to wane.

Even institutions with dominant positions in hard-to-scale services, as Amazon has in cloud computing, are feeling the pressure, as the following article explains.

Why this segment of the technology industry should be any different from the others seems a reasonable question. The answer is that it shouldn't, or, perhaps more to the point, couldn't. There are too many smart, skilled and capable people working for would-be competitors, looking for an edge in every corner of the services segment, for anyone to rest on their laurels.

What may be new is the speed with which this fall from comfort to threat occurs. The larger point is that no position is unassailable, and no status sustainable, in a world where the nexus of competition is intelligence, skill and ambition. JL

Jordan Novet reports in VentureBeat:

Maybe existing big data services on Amazon’s cloud — such as Elastic MapReduce for Hadoop, DynamoDB for hosted NoSQL database needs, and the Redshift data-warehouse service — aren’t enough.
Amazon Web Services, the currently undisputed heavyweight champion of cloud infrastructure providers, is looking at adding big-data services to its already wide spectrum of capabilities, with new programs that will simplify and lower the cost of crunching torrents of information coming down hard and fast.
Andy Jassy, the Amazon cloud’s senior vice president, alluded to the plans in an interview AllThingsD published today:
And I think you’ll see ius [sic] adding capabilities for companies with large data sets that want to do compute and processing, and then make that data useful. That’s the whole big data thing that everyone is talking about. I think you’ll see us add services there that make it easier and less expensive for customers to do that.
Let’s break it down. In the NoSQL area, MongoDB, for one, has serious mindshare and a large user base — surely you recall that $150 million funding round from last month.
Amazon cloud competitors have been advancing their own Hadoop offerings, with news coming last week from Rackspace, Windows Azure, IBM’s SoftLayer, Verizon Terremark, and CenturyLink’s Savvis.
Meanwhile, data-warehouse territory got rocked two weeks ago with Teradata announcing it was bringing its well-used data warehouse into the cloud as a service. That matters because Teradata has loads of big customers that might be enticed by the idea of analyzing data in the cloud while sticking with a trusted vendor.
What might Amazon want to roll out? One worthy idea would be something along the lines of Joyent’s Manta service, which computes data where it is stored. Then again, maybe Amazon will try its hand at a “bare-metal” offering that does not virtualize servers. That configuration has been popular on SoftLayer’s infrastructure, which is increasingly important now that SoftLayer is part of IBM.
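The compute-where-stored idea behind Joyent's Manta can be illustrated with a toy sketch. This is not Manta's actual API — the functions and the dict-backed "object store" below are hypothetical stand-ins — but it contrasts the traditional model (pull the whole object over the network, then process it) with shipping a small function to run next to the data, so only the result travels.

```python
# Illustrative sketch only (not Joyent's Manta API): a dict stands in for an
# object store, and the two functions contrast where the computation happens.

def pull_then_compute(storage, key, fn):
    """Traditional model: fetch the entire object, then process it locally."""
    data = storage[key]  # simulates transferring the whole object over the network
    return fn(data)

def compute_in_place(storage, key, fn):
    """Compute-where-stored model: ship the (small) function to the data,
    so only the result crosses the network."""
    return fn(storage[key])  # runs "next to" the stored object

# Toy object store holding a small access log.
store = {"logs/2013-11-11": "GET /a 200\nGET /b 404\nGET /c 200"}

def count_errors(text):
    """Count log lines with a 404 status."""
    return sum(1 for line in text.splitlines() if line.endswith(" 404"))

print(pull_then_compute(store, "logs/2013-11-11", count_errors))  # 1
print(compute_in_place(store, "logs/2013-11-11", count_errors))   # 1
```

Both calls return the same answer; the difference in a real system is what moves over the wire — gigabytes of log data in the first case, a few bytes of code and result in the second.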

Of course, some companies might be hesitant to store valuable data on external infrastructure. Revelations on the National Security Agency’s snooping this year could divert revenue that otherwise would have gone to cloud companies. But hey, we’ve seen that the CIA is willing to use the cloud, particularly Amazon’s. Perhaps sentiment on cloud security this year will turn out to be a wash. What could change, though, is adoption of big data services among startups and larger organizations, if Amazon does follow through on Jassy’s latest statements about making big data in the cloud easier and cheaper. And given Amazon’s scale and track record of popularizing cloud tools, it’s hard to believe the company couldn’t pull it off.
