What is the difference between scale-out and scale-up (architecture, applications, etc.)?
The terms "scale up" and "scale out" are commonly used in discussing different strategies for adding capacity to hardware systems. They are fundamentally different ways of addressing the need for more processing power, memory and other resources.
Scaling up generally refers to purchasing and installing a single, more capable piece of hardware in place of an existing one. For example, when a project's input/output demands start to push against the limits of an individual server, a scale-up approach would be to buy a more powerful server with greater processing capacity and more RAM.
By contrast, scaling out means linking together multiple lower-performance machines to collectively do the work of a much more capable one. In these kinds of distributed setups, a larger workload is handled by partitioning the data and routing each partition to a different node, as in the sketch below.
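To make the scale-out idea concrete, here is a minimal Python sketch in which worker "nodes" are simulated with local processes. The node count and the hash-based sharding scheme are illustrative assumptions, not a fixed recipe; a real cluster would route partitions over the network instead.

```python
# Scale-out in miniature: split the workload across several worker
# "nodes" (simulated with processes) instead of one bigger machine.
from multiprocessing import Pool

NUM_NODES = 4  # hypothetical cluster size

def partition(records, num_nodes):
    """Route each record to a node by hashing its key (simple sharding)."""
    shards = [[] for _ in range(num_nodes)]
    for key, value in records:
        shards[hash(key) % num_nodes].append((key, value))
    return shards

def process_shard(shard):
    """Each node independently aggregates only its own shard."""
    totals = {}
    for key, value in shard:
        totals[key] = totals.get(key, 0) + value
    return totals

if __name__ == "__main__":
    records = [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("b", 5)]
    shards = partition(records, NUM_NODES)
    with Pool(NUM_NODES) as pool:
        partials = pool.map(process_shard, shards)
    # Merging is cheap: each key hashes to exactly one node, so the
    # partial results never overlap.
    merged = {k: v for part in partials for k, v in part.items()}
    print(merged)  # {'a': 4, 'b': 7, 'c': 4}
```

Adding capacity in this model means raising the node count rather than buying a bigger machine, which is exactly the trade the scale-out approach makes.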
Each approach has benefits and disadvantages. Scaling up can be expensive, and some experts argue it is ultimately not viable because a single machine can only be made so powerful with the hardware available on the market. However, a single large machine is easier to administer, and it sidesteps some of the data consistency issues that arise when data is spread across many nodes.
One of the main reasons for the popularity of scaling out is that this approach underlies many of today's big data initiatives, built on tools like Apache Hadoop. Here, central coordination software administers huge clusters of commodity hardware, yielding systems that are versatile and highly capable. Even so, experts continue to debate which approach, scaling up or scaling out, is the better fit for a given project.
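As a concrete example of this style of cluster computing, here is a minimal PySpark word-count sketch. The appeal of the scale-out model is that the same script runs unchanged whether the master is "local[*]" on one laptop or the URL of a large cluster; the input path below is hypothetical.

```python
from pyspark.sql import SparkSession

# Point .master() at a real cluster URL to scale out; "local[*]"
# runs the identical logic on a single machine.
spark = (SparkSession.builder
         .appName("wordcount-sketch")
         .master("local[*]")
         .getOrCreate())

lines = spark.sparkContext.textFile("hdfs:///data/input.txt")  # hypothetical path
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))  # aggregated per partition, across nodes

for word, count in counts.take(10):
    print(word, count)

spark.stop()
```

The framework, not the application code, decides how partitions are distributed across the cluster, which is what makes these systems so easy to grow by simply adding machines.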