Data Analysis with Python and PySpark

Seller: Simon and Schuster
Ebook · 456 pages

About this ebook

Think big about your data! PySpark brings the powerful Spark big data processing engine to the Python ecosystem, letting you seamlessly scale up your data tasks and create lightning-fast pipelines.

In Data Analysis with Python and PySpark you will learn how to:

Manage your data as it scales across multiple machines
Scale up your data programs with full confidence
Read and write data to and from a variety of sources and formats (as in the short sketch after this list)
Deal with messy data with PySpark’s data manipulation functionality
Discover new data sets and perform exploratory data analysis
Build automated data pipelines that transform, summarize, and get insights from data
Troubleshoot common PySpark errors
Create reliable long-running jobs
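
To give a feel for the kind of program these skills add up to, here is a minimal sketch of reading, cleaning, and writing data with PySpark. The file paths and column names are placeholders, not examples taken from the book.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start (or reuse) a Spark session, the entry point to the PySpark API.
spark = SparkSession.builder.appName("clean-and-write").getOrCreate()

# Read JSON records into a data frame (the path is a placeholder).
raw = spark.read.json("events.json")

# Tidy a messy string column and drop records missing a key field.
clean = (
    raw.withColumn("user_id", F.trim(F.col("user_id")))
       .dropna(subset=["user_id"])
)

# Write the result back out as Parquet, a common columnar format.
clean.write.mode("overwrite").parquet("events_clean.parquet")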

Data Analysis with Python and PySpark is your guide to delivering successful Python-driven data projects. Packed with relevant examples and essential techniques, this practical book teaches you to build pipelines for reporting, machine learning, and other data-centric tasks. Quick exercises in every chapter help you practice what you’ve learned and start putting PySpark to work in your data systems right away. No previous knowledge of Spark is required.

About the technology
The Spark data processing engine is an amazing analytics factory: raw data comes in, insight comes out. PySpark wraps Spark’s core engine with a Python-based API. It eases Spark’s steep learning curve and makes this powerful tool available to anyone working in the Python data ecosystem.
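
As a rough illustration of that Python-based API (a sketch, not an example from the book; the file and column names are made up), a small aggregation reads like ordinary Python while Spark distributes the work behind the scenes:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("analytics-factory").getOrCreate()

# Raw data comes in...
sales = spark.read.csv("sales.csv", header=True, inferSchema=True)

# ...insight comes out: total sales per region, largest first.
(
    sales.groupBy("region")
         .agg(F.sum("amount").alias("total_amount"))
         .orderBy(F.desc("total_amount"))
         .show(5)
)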

About the book
Data Analysis with Python and PySpark helps you solve the daily challenges of data science with PySpark. You’ll learn how to scale your processing capabilities across multiple machines while ingesting data from any source—whether that’s Hadoop clusters, cloud data storage, or local data files. Once you’ve covered the fundamentals, you’ll explore the full versatility of PySpark by building machine learning pipelines, and blending Python, pandas, and PySpark code.
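
For a taste of those later chapters, here is a minimal machine learning pipeline sketch. The tiny data set, feature columns, and label column are hypothetical stand-ins rather than material from the book.

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler

spark = SparkSession.builder.appName("ml-pipeline").getOrCreate()

# A tiny, made-up training set standing in for real data.
training_df = spark.createDataFrame(
    [(25, 40_000.0, 0.0), (38, 72_000.0, 1.0), (52, 95_000.0, 1.0)],
    ["age", "income", "label"],
)

# Assemble the feature columns into the single vector column Spark ML expects.
assembler = VectorAssembler(inputCols=["age", "income"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

# Chain both steps into one pipeline, fit it, and score the same data frame.
model = Pipeline(stages=[assembler, lr]).fit(training_df)
predictions = model.transform(training_df)

# Blend with pandas once a result is small enough to inspect locally.
predictions.select("label", "prediction").toPandas()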

What's inside

Organizing your PySpark code
Managing your data, no matter the size
Scaling up your data programs with full confidence
Troubleshooting common data pipeline problems
Creating reliable long-running jobs

About the reader
Written for data scientists and data engineers comfortable with Python.

About the author
As an ML director for a data-driven software company, Jonathan Rioux uses PySpark daily. He teaches PySpark to data scientists, engineers, and data-savvy business analysts.

Table of Contents

1 Introduction
PART 1 GET ACQUAINTED: FIRST STEPS IN PYSPARK
2 Your first data program in PySpark
3 Submitting and scaling your first PySpark program
4 Analyzing tabular data with pyspark.sql
5 Data frame gymnastics: Joining and grouping
PART 2 GET PROFICIENT: TRANSLATE YOUR IDEAS INTO CODE
6 Multidimensional data frames: Using PySpark with JSON data
7 Bilingual PySpark: Blending Python and SQL code
8 Extending PySpark with Python: RDD and UDFs
9 Big data is just a lot of small data: Using pandas UDFs
10 Your data under a different lens: Window functions
11 Faster PySpark: Understanding Spark’s query planning
PART 3 GET CONFIDENT: USING MACHINE LEARNING WITH PYSPARK
12 Setting the stage: Preparing features for machine learning
13 Robust machine learning with ML Pipelines
14 Building custom ML transformers and estimators
