Key Features
Contains recipes on how to use Apache Spark as a unified compute engine
Covers how to connect various source systems to Apache Spark
Covers various aspects of machine learning, including supervised and unsupervised learning and recommendation engines
Book Description
While Apache Spark 1.x gained significant traction and adoption in its early years, Spark 2.x delivers notable improvements in the areas of APIs, schema awareness, performance, and Structured Streaming, with simpler building blocks for better, faster, smarter, and more accessible big data applications. This book uncovers all these features in the form of structured recipes for analyzing and maturing large, complex datasets.
Starting with installing and configuring Apache Spark with various cluster managers, you will learn how to set up development environments. You will then be introduced to working with RDDs, DataFrames, and Datasets to operate on schema-aware data, and to real-time streaming with sources such as Twitter Stream and Apache Kafka. You will also work through recipes on machine learning, including supervised learning, unsupervised learning, and recommendation engines in Spark.
Last but not least, the final chapters delve deeper into graph processing with GraphX, securing your implementations, cluster optimization, and troubleshooting.
What you will learn
Install and configure Apache Spark with various cluster managers and on AWS
Set up a development environment for Apache Spark, including Databricks Cloud notebooks
Find out how to operate on data in Spark with schemas
Get to grips with real-time streaming analytics using Spark Streaming and Structured Streaming
Master supervised learning and unsupervised learning using MLlib
Build a recommendation engine using MLlib
Process graphs with the GraphX and GraphFrames libraries
Develop common applications and project types, along with solutions that solve complex big data problems