Apache Hadoop YARN: Moving beyond MapReduce and Batch Processing with Apache Hadoop 2

Addison-Wesley Professional · Ebook

About this ebook

“This book is a critically needed resource for the newly released Apache Hadoop 2.0, highlighting YARN as the significant breakthrough that broadens Hadoop beyond the MapReduce paradigm.”
—From the Foreword by Raymie Stata, CEO of Altiscale


The Insider’s Guide to Building Distributed, Big Data Applications with Apache Hadoop™ YARN

Apache Hadoop is helping drive the Big Data revolution. Now, its data processing has been completely overhauled: Apache Hadoop YARN provides resource management at data center scale and easier ways to create distributed applications that process petabytes of data. And now in Apache Hadoop™ YARN, two Hadoop technical leaders show you how to develop new applications and adapt existing code to fully leverage these revolutionary advances.

YARN project founder Arun Murthy and project lead Vinod Kumar Vavilapalli demonstrate how YARN increases scalability and cluster utilization, enables new programming models and services, and opens new options beyond Java and batch processing. They walk you through the entire YARN project lifecycle, from installation through deployment.

You’ll find many examples drawn from the authors’ cutting-edge experience—first as Hadoop’s earliest developers and implementers at Yahoo! and now as Hortonworks developers moving the platform forward and helping customers succeed with it.

Coverage includes

  • YARN’s goals, design, architecture, and components—how it expands the Apache Hadoop ecosystem
  • Exploring YARN on a single node (illustrated with a brief sketch after this list)
  • Administering YARN clusters and Capacity Scheduler
  • Running existing MapReduce applications
  • Developing a large-scale clustered YARN application
  • Discovering new open source frameworks that run under YARN
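For a quick feel of the YARN client API that the book's development coverage builds on, here is a minimal, illustrative sketch (not taken from the book) that asks a running cluster for its NodeManagers; on a single-node installation it should report exactly one. It assumes the Hadoop 2 client libraries (hadoop-yarn-client and its dependencies) are on the classpath, along with a yarn-site.xml that points at the ResourceManager.

    import org.apache.hadoop.yarn.api.records.NodeReport;
    import org.apache.hadoop.yarn.api.records.NodeState;
    import org.apache.hadoop.yarn.client.api.YarnClient;
    import org.apache.hadoop.yarn.conf.YarnConfiguration;

    // Illustrative only: list the running NodeManagers of a YARN cluster.
    public class ListYarnNodes {
      public static void main(String[] args) throws Exception {
        // Reads yarn-site.xml from the classpath to locate the ResourceManager.
        YarnConfiguration conf = new YarnConfiguration();

        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(conf);
        yarnClient.start();
        try {
          // On a single-node setup this typically prints one NodeManager.
          for (NodeReport node : yarnClient.getNodeReports(NodeState.RUNNING)) {
            System.out.println(node.getNodeId()
                + "  containers=" + node.getNumContainers()
                + "  capability=" + node.getCapability());
          }
        } finally {
          yarnClient.stop();
        }
      }
    }

Compiled against the Hadoop 2 client jars and launched with the yarn jar command, the same create/init/start/query/stop lifecycle is also the client-side starting point when submitting applications to a cluster, which is the ground the book's application-development chapters cover in depth.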

Ratings and reviews

4.3 · 3 reviews

About the authors

Arun Murthy has contributed to Apache Hadoop full-time since the inception of the project in early 2006. He is a long-term Hadoop committer and a member of the Apache Hadoop Project Management Committee. Previously, he was the architect and lead of the Yahoo! Hadoop MapReduce development team and was ultimately responsible, technically, for providing Hadoop MapReduce as a service for all of Yahoo!, currently running on nearly 50,000 machines. Arun is the founder and architect of Hortonworks Inc., a software company that is helping to accelerate the development and adoption of Apache Hadoop. Hortonworks was formed in June 2011 by the key architects and core Hadoop committers from the Yahoo! Hadoop software engineering team. Funded by Yahoo! and Benchmark Capital, one of the preeminent technology investors, Hortonworks aims to ensure that Apache Hadoop becomes the standard platform for storing, processing, managing, and analyzing big data.

Vinod Kumar Vavilapalli has been contributing to the Apache Hadoop project full-time since mid-2007. At the Apache Software Foundation, he is a long-term Hadoop contributor, Hadoop committer, member of the Apache Hadoop Project Management Committee, and a Foundation member. Vinod is the MapReduce and YARN go-to guy at Hortonworks Inc. For more than five years he has worked on Hadoop: he was involved in HadoopOnDemand, Hadoop-0.20, the CapacityScheduler, Hadoop security, and MapReduce, and is now a lead developer and the project lead for Apache Hadoop YARN. Before Hortonworks, he was at Yahoo!, working in the Grid team that made Hadoop what it is today, running at large scale, up to tens of thousands of nodes. Vinod loves reading books of all kinds and is passionate about using computers to change the world for the better, bit by bit. He has a bachelor’s degree in computer science and engineering from the Indian Institute of Technology Roorkee. He can be reached at the Twitter handle @tshooter.

Douglas Eadline, Ph.D., began his career as a practitioner and a chronicler of the Linux Cluster HPC revolution and now documents big data analytics. Starting with the first Beowulf HOWTO document, Doug has written hundreds of articles, white papers, and instructional documents covering virtually all aspects of HPC. Prior to starting and editing the popular ClusterMonkey.net website in 2005, he served as editor-in-chief for ClusterWorld magazine and was senior HPC editor for Linux Magazine. Currently, he is a consultant to the HPC industry and writes a monthly column in HPC Admin magazine. Both clients and readers have recognized Doug’s ability to present a “technological value proposition” in a clear and accurate style. He has practical, hands-on experience in many aspects of HPC, including hardware and software design, benchmarking, storage, GPU, cloud, and parallel computing. He is the author of Hadoop Fundamentals LiveLessons (video) from Addison-Wesley.

Joseph Niemiec is a big data solutions engineer whose focus is on designing Hadoop solutions for many Fortune 1000 companies. In this position, Joseph has worked with customers to build multiple YARN applications, gaining a unique perspective on moving customers beyond batch processing, and has also worked directly on YARN development. An avid technologist, Joseph has been focused on technology innovations since 2001. His interest in data analytics originally started in game score optimization as a teenager and has since shifted to helping customers adopt new technology innovations such as Hadoop and, most recently, to building new data applications using YARN.

Jeff Markham is a solution engineer at Hortonworks Inc., the company promoting open source Hadoop. Previously, he was with VMware, Red Hat, and IBM, helping companies build distributed applications with distributed data. He has written articles on Java application development and has spoken at several conferences and to Hadoop User Groups. Jeff is a contributor to Apache Pig and Apache HDFS.
