Spark applications are easy to write and easy to understand when everything goes according to plan. However, it becomes very difficult when Spark applications start to slow down or fail. Sometimes a well-tuned application might fail due to a data change or a data layout change. Sometimes an application which was running well starts behaving badly due to resource starvation. It's not only important to understand a Spark application, but also its underlying runtime components like disk usage, network usage, and contention, so that we can make an informed decision when things go bad.

In this series of articles, I aim to capture some of the most common reasons why a Spark application fails or slows down. The first and most common is memory management. If we were to get all Spark developers to vote, out of memory (OOM) conditions would surely be the number one problem everyone has faced. This comes as no big surprise, as Spark's architecture is memory-centric.
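To ground the discussion, here is a minimal sketch (object name and row counts are illustrative, not from this series) of one of the most common driver-side OOM triggers, `collect()` on a large dataset, next to a safer alternative that keeps the work distributed:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative sketch of a classic driver-side OOM pattern.
object DriverOomSketch {
  def main(args: Array[String]): Unit = {
    // Heap sizes are fixed at launch, e.g.:
    //   spark-submit --driver-memory 2g --executor-memory 4g ...
    val spark = SparkSession.builder().appName("driver-oom-sketch").getOrCreate()

    // A large synthetic dataset, distributed across executors.
    val big = spark.range(500000000L) // 500M rows

    // Risky: collect() materializes every row in the driver JVM and is a
    // frequent cause of java.lang.OutOfMemoryError on the driver.
    // val all = big.collect()

    // Safer: keep the computation distributed; return only small, bounded results.
    println(s"count = ${big.count()}")
    println(s"first rows = ${big.take(5).mkString(", ")}")

    spark.stop()
  }
}
```

The same principle applies throughout the series: failures rarely come from Spark itself, but from asking one JVM (driver or executor) to hold more than its configured memory allows.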