Demystifying Cloud Latency

In the days before the ubiquitous Internet, understanding latency was relatively straightforward. You simply counted the number of router hops between you and your application. Network latency was essentially the sum of the delays that data packets experienced as they travelled from the source, through those hops, to your application.
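This hop-counting intuition can still be made concrete today. As a minimal sketch (not a production tool), the snippet below times how long a TCP connection takes to be established with a host, which serves as a rough proxy for round-trip network latency; the host, port, and sample count are illustrative assumptions.

```python
import socket
import time

def tcp_connect_latency_ms(host: str, port: int, timeout: float = 3.0) -> float:
    """Time one TCP connection setup (a rough round-trip latency proxy)."""
    start = time.perf_counter()
    # create_connection performs the full TCP handshake before returning
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

def median_latency_ms(host: str, port: int, samples: int = 5) -> float:
    """Take several samples and return the median, to smooth out jitter."""
    results = sorted(tcp_connect_latency_ms(host, port) for _ in range(samples))
    return results[len(results) // 2]
```

A single sample is noisy, which is why the sketch reports a median over several connections; tools like `ping` and `traceroute` apply the same idea per hop.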

Large enterprises had this largely under their control. You would own most, if not all, of the routers. There would be network delays, but these were measurable and predictable, allowing you to improve on them while setting expectations.

The internet changed this. In a shared, off-premise infrastructure, calculating network latency is now complex. The subtleties, especially those involving the cloud service provider's infrastructure and your link to the data center, play a huge role. And they can impact latency in ways we do not readily appreciate. At the same time, managing latency is becoming crucial. As more users live and breathe technology, they take fast connectivity as a given. With consumers enjoying easy access to high-speed wired and wireless broadband, they expect the same of enterprise networks.

Cloud has made the subject even more pressing. As many enterprises look to benefit from public shared infrastructures for cost-efficiency, scalability and agility, they are shifting their in-house, server-oriented IT infrastructure to a network-oriented one that is often managed and hosted by a service provider. With the rise of machine-to-machine decision making, automation, cognitive computing and high-speed businesses such as high-frequency trading, network latency is in the spotlight, with adoption, reputation, revenues and customer satisfaction now tied to it.

As applications become latency-sensitive, with end users showing near-zero tolerance for lag and delays, network latency is now shaping application development as well.

