- Milan Vojnovic, email, Department of Statistics.
*Office hours*: By appointment, COL 5.05

- Christine Yuen, email, Department of Statistics.
*Office hours*: Monday 13:00 - 14:00, COL 5.03 (from week 2)

- Lectures on Mondays 10:00–12:00 in TW2.2.04
- Classes on Thursdays 12:30–14:00 in TW2.4.01

No lectures or classes will take place during School Reading Week 6.

Week | Topic
---|---
1 | Introduction
2 | Distributed file systems and key-value stores
3 | Distributed computation models
4 | Structured queries over large datasets
5 | Graph data processing (guest lecturer: Mark Needham, Software Engineer, Neo4j)
6 | Reading Week
7 | Stream data processing (guest lecturer: Eno Thereska, Principal Engineer, Amazon)
8 | Scalable machine learning I (guest lecturer: Ulrich Paquet, Research Scientist, Google DeepMind)
9 | Scalable machine learning II (guest lecturer: Ryota Tomioka, Researcher, Microsoft Research)
10 | AI applications (guest lecturer: Marc Cohen, Software Engineer, Google)
11 | Numerical computations using data flow graphs

This course covers the main principles and application programming interfaces of distributed systems for storing and processing big data. This includes the principles of distributed file systems, storage systems, and data models that are in common use in modern on-premise data analytics platforms and cloud computing services. The course covers the principles of computing over large datasets in distributed computing systems involving multi-core processors and cluster computing systems. Students will learn how to perform canonical distributed computing tasks in batch, interactive and stream processing settings and how to run scalable machine learning algorithms for regression, classification, clustering and collaborative filtering tasks.

This course uses a project-based learning approach where students gain hands-on experience in using computing tools and services and writing software through computer workshop exercises and project assignments. This equips students with key skills and knowledge about modern computation platforms for processing big data. In particular, students gain hands-on experience in working with Apache Hadoop, an open-source software framework for distributed storage and processing of big data using the MapReduce programming model, and related services such as Apache Hive. They also gain hands-on experience in working with Apache Spark, a widely used general-purpose engine for processing big data, and in connecting Spark with various data sources and other systems. Students learn how to run big data analytics tasks locally on their laptops as well as on distributed clusters of machines in the cloud. Students work on weekly exercises and project assignments using GitHub, a popular revision-control and collaboration tool. Each student develops code for solving one or more computation tasks and uses GitHub to access and submit course materials and assignments.

On the theory side, we introduce the main principles of distributed systems for big data analytics, their design objectives, querying paradigms by using MapReduce and other computation models, general numerical computations using dataflow graphs, and querying data by using SQL-like application programming interfaces. We consider graph processing algorithms for querying graph properties and iterative computations on input graph data. We introduce the principles of stream processing, how to perform computations and execute queries over a sliding-window of input data stream elements. We study the principles of scalable machine learning algorithms, based on parallel implementations of gradient-descent style algorithms for minimizing a loss function, used for training regression and classification models. We also consider distributed MapReduce computations for training clustering models such as k-means and collaborative filtering models based on matrix factorization. We consider numerical computations using dataflow graphs, with a focus on learning deep neural networks for image classification and other classification tasks. Students are encouraged to work on computations and data relevant to their own interests.

On the practical side, we cover a variety of tools that are part of a modern data scientist’s toolkit, including distributed computing using Apache Hadoop and MapReduce-style processing of big datasets, Apache Spark, application programming interfaces for querying structured and unstructured datasets such as Apache Hive, Spark SQL, and Google BigQuery, stream data processing, and deploying large-scale machine learning models. Students learn how to write programs that define Spark jobs using the Python API and how to deploy a Spark job in a production environment. Students learn how to connect Spark data structures with a variety of external data sources, including key-value data stores, relational databases, and publish-subscribe messaging systems.

For the final project, students are asked to conduct a big data analytics task using the principles and technologies learned in class, as well as to learn related technologies not covered at length in the course (e.g. working with Apache Cassandra or the Microsoft Cognitive Toolkit). The project deliverable is typically a Jupyter notebook containing a working solution.

This course is an introduction to the fundamental concepts of distributed computing for big data and assumes no prior knowledge of these concepts.

The course involves 20 hours of lectures and 15 hours of computer workshops in the LT.

Some basic prior programming experience is expected. Prior experience with Python programming is desirable; for example, acquired through the compulsory courses of the MSc in Data Science program.

We use a wide range of tools, including Jupyter notebooks, Apache Hadoop, Google Bigtable, Apache Hive, Apache Spark / PySpark (the Python API for Spark), SQL APIs for querying datasets, the TensorFlow library for dataflow programs, Docker, and various cloud computing services, e.g. those provided by the Google Cloud Platform. Lectures and assignments are posted on GitHub. Students are also expected to use GitHub to submit problem sets and the final exam.

Where appropriate, we use Jupyter notebooks for lab assignments, demonstrations, and the course notes themselves.

Project assignment (80%) and continuous assessment in weeks 4 and 7 (10% each). Students are expected to produce 10 problem sets in the LT.

In the first week, we provide an overview of basic concepts, starting with a definition of big data, followed by an overview of structured, semi-structured and unstructured data types, including relational database tables, delimiter-separated file formats such as csv and tsv, and the JSON and XML file formats. We then consider the main properties of traditional relational database management systems and their support of transactions and the ACID properties. This leads us to the need for scalable system designs, the concepts of horizontal and vertical scaling, and various computer system bottlenecks. We then consider modern big data analytics systems and how they have evolved over time. We introduce computation paradigms such as batch processing, interactive processing, stream processing, and the lambda architecture. We discuss key developments such as the MapReduce computation model and NoSQL databases. The rest of the lecture discusses the computation tasks that led to the development of modern big data analytics systems, which are studied throughout the course.
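For a concrete sense of the difference between these formats, the Python standard library can parse both flat delimiter-separated records and nested JSON; the records below are purely illustrative:

```python
import csv
import io
import json

# A delimiter-separated (structured) record set and a JSON (semi-structured)
# record describing the same kind of entity, parsed with the standard library.
csv_text = "id,name,city\n1,Ada,London\n2,Alan,Manchester\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))

json_text = '{"id": 1, "name": "Ada", "address": {"city": "London"}}'
record = json.loads(json_text)

# CSV rows are flat; JSON allows nesting, which is why it is called
# semi-structured: the schema travels with the data.
print(rows[0]["city"])             # London
print(record["address"]["city"])   # London
```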

*Readings*:

- Vogels, W., A Decade of Dynamo, Blog, All Things Distributed, 02 October 2017
- Hamilton, J., One Size Doesn’t Fit All, Blog, Perspectives, 2012

*Lab*: **Getting started**

- Command line interface and commands
- Cluster and bucket creation in a cloud
- Submitting a simple “Hello World” job to a cluster
- Running a Jupyter notebook on a cluster
- Use of Docker containers

In this week, we first consider the main design principles of distributed file systems, explaining the original Google File System and its refinements, as well as other distributed file systems such as Hadoop Distributed File System (HDFS). We then consider the main design principles of distributed key-value stores such as Amazon Dynamo and columnar data storage systems such as BigTable and Apache Cassandra.
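As a flavour of how Dynamo-style key-value stores decide which node owns which key, here is a minimal consistent-hashing sketch in plain Python; node and key names are illustrative, and real systems add virtual nodes and replication on top of this idea:

```python
import hashlib
from bisect import bisect_right

# Place each node at a position on a hash ring.
def ring_position(name):
    return int(hashlib.md5(name.encode()).hexdigest(), 16)

nodes = ["node-a", "node-b", "node-c"]
ring = sorted((ring_position(n), n) for n in nodes)

def node_for(key):
    """Walk clockwise from the key's hash to the first node on the ring."""
    pos = ring_position(key)
    idx = bisect_right(ring, (pos, "")) % len(ring)
    return ring[idx][1]

# Keys are spread across nodes; adding or removing a node moves only the
# keys in one arc of the ring, not the whole assignment.
assignment = {k: node_for(k) for k in ["user:1", "user:2", "user:3"]}
print(assignment)
```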

*Readings*:

- White, T., Hadoop: The Definitive Guide, O’Reilly, 4th Edition, 2015
- Carpenter, J. and Hewitt, E., Cassandra: The Definitive Guide, 2nd Edition, O’Reilly, 2016
- Ghemawat, S., Gobioff, H. and Leung S.-T., The Google file system, SOSP 2003
- Shvachko, K. et al, The Hadoop Distributed File System, IEEE MSST 2010; see also html
- DeCandia, G. et al, Dynamo: Amazon’s Highly Available Key-value Store, SOSP 2007
- Chang, F. et al, Bigtable: A distributed storage system for structured data, OSDI 2006

*Further Readings*:

- GFS: Evolution on Fast-Forward, ACM Queue, Vol 7, No 7, August, 2009
- Apache Hadoop docs: HDFS Architecture
- Vogels, W., Amazon’s Dynamo, Blog, 2007
- Nishtala, R. et al, Scaling Memcache at Facebook, NSDI 2013
- Fitzpatrick, B., Distributed Caching with Memcached, 2004
- Lakshman, A. and Malik, P., Cassandra: A Decentralized Structured Storage System, LADIS 2009
- Calder, B. et al, Windows Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency, SOSP 2011
- Huang, C. et al, Erasure Coding in Windows Azure Storage, USENIX 2012

*Lab*: **System installation and API practice**

- Go through Hadoop installation
- Basic file manipulation commands working with HDFS
- Reading and writing data from BigTable

In this lecture we explain the basic principles of distributed computation models in common use in big data analytics systems. We start by explaining the MapReduce computation model, which is in widespread use for distributed processing of large datasets. We then consider the Pregel computation model, developed for iterative computations such as computing the PageRank vector for a large-scale input graph. Finally, we consider the concept of a Resilient Distributed Dataset (RDD), a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner. This involves considering the types of operations performed on RDDs and their execution on a distributed cluster of machines.
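A local, single-machine sketch of the three MapReduce phases on the classic word-count task; this is plain Python simulating what Hadoop distributes across workers, not the distributed implementation itself:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit (word, 1) pairs, as a MapReduce mapper would."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: group emitted values by key (done by the framework)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big systems", "data systems"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'big': 2, 'data': 2, 'systems': 2}
```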

*Readings*:

- Karau, H., Konwinski, A., Wendell, P. and Zaharia, M., Learning Spark: Lightning-Fast Big Data Analysis, O’Reilly, 2015
- Karau, H. and Warren, R., High Performance Spark: Best Practices for Scaling & Optimizing Apache Spark, O’Reilly, 2017
- Drabas, T. and Lee D., Learning PySpark, Packt, 2016
- Dean, J. and Ghemawat, S., MapReduce: Simplified Data Processing on Large Clusters, Communications of the ACM, Vol 51, No 1, January 2008; original OSDI 2004 paper
- Zaharia, M. et al, Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing, NSDI 2012

*Further Readings*:

- Apache Hadoop documentation
- Apache Hadoop documentation: MapReduce Tutorial
- Malewicz, G. et al, Pregel: A System for Large-Scale Graph Processing, SIGMOD 2010
- Spark programming guide
- Chambers, B. and Zaharia, M., Spark: The Definitive Guide, databricks, 2017
- Spark documentation: PySpark package

*Lab*: **MapReduce and resilient distributed datasets**

- Run a MapReduce job on Hadoop for word count on dblp data
- Hands-on experience running operations on resilient distributed datasets in PySpark: transformations such as map, flatMap, filter, distinct, sample, leftOuterJoin and repartition, and actions such as take, collect, reduce, count, saveAsTextFile and foreach
- Run the word count example using resilient distributed datasets in PySpark

In this week we consider systems for big data analytics using structured query languages. We start by considering the main architectural principles of Apache Hive, a data warehouse solution running on top of Apache Hadoop, along with the data types and query language used by Hive. We then consider the main design principles of Dremel (BigQuery) and Spark SQL for querying data using structured query languages.
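The flavour of such queries can be previewed locally. The sketch below runs a standard SQL aggregation against an in-memory SQLite table (illustrative data); Hive, Spark SQL and BigQuery accept essentially the same query, but execute it over distributed storage:

```python
import sqlite3

# A local SQLite table standing in for a Hive / Spark SQL table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (user TEXT, topic TEXT, score INTEGER)")
conn.executemany(
    "INSERT INTO posts VALUES (?, ?, ?)",
    [("ada", "spark", 5), ("ada", "hive", 3), ("alan", "spark", 4)],
)

# An aggregation query of the kind run against data warehouses.
rows = conn.execute(
    "SELECT topic, SUM(score) FROM posts GROUP BY topic ORDER BY topic"
).fetchall()
print(rows)  # [('hive', 3), ('spark', 9)]
```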

*Readings*:

- White, T., Hadoop: The Definitive Guide, O’Reilly, 4th Edition, 2015
- Karau, H., Konwinski, A., Wendell, P. and Zaharia, M., Learning Spark: Lightning-Fast Big Data Analysis, Chapter 9: Spark SQL, O’Reilly, 2015
- Karau, H. and Warren, R., High Performance Spark: Best Practices for Scaling & Optimizing Apache Spark, O’Reilly, 2017
- Drabas, T. and Lee D., Learning PySpark, Chapter 3 DataFrames, Packt, 2016
- Rutherglen, J., Wampler, D., Capriolo, E., Programming Hive, 2nd Edition, O’Reilly, 2017
- Prokopp, C., The Free Hive Book

*Further Readings*:

- Apache Hive Tutorial
- Apache Hive Language Manual
- Thusoo, A. et al, Hive: A Petabyte Scale Data Warehouse Using Hadoop, ICDE 2010
- Spark SQL programming guide: Spark SQL, DataFrames and Datasets Guide, Apache Spark 2.2.0, 2017
- Armbrust, M., et al, Spark SQL: Relational Data Processing in Spark, ACM SIGMOD 2015
- Armbrust, M., Dataframes: Simple and Fast Analysis of Structured Data, Webinar, 2017
- Big Query SQL Reference: Standard and Legacy

*Lab*: **Hive and Spark SQL queries**

- Run Hive queries: basic standard SQL and Hive-specific queries such as TRANSFORM, MAP and REDUCE
- Load data from sources including JSON, XML, and weblogs parsed using regular expressions
- Run queries in Spark SQL using dataframe API and Spark Session SQL API
- Data management using BigQuery via web interface and connector with Python

Note: assignment for grading to be given in this week

In this week we consider principles and systems for scalable processing of large-scale graph data. This includes queries such as evaluating node centralities (e.g. degree centrality), graph traversal or motif queries for finding structures in graph data (e.g. identifying friends-of-friends of a person who were born in London), and iterative algorithms on graph input data (e.g. computing PageRank node centralities). We discuss different data models for representing graph data, such as RDF, as well as query languages used in graph databases, including SPARQL, Gremlin, Cypher and openCypher. We introduce the Bulk Synchronous Parallel computation model, which underlies the design of modern platforms for iterative computation on graph data.
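A minimal sketch of iterative PageRank on a toy graph in plain Python; each loop iteration plays the role of one BSP superstep in which nodes send rank shares along their out-edges. The graph and damping factor are illustrative:

```python
# Power-iteration PageRank on a tiny adjacency list: the same computation
# that Pregel/GraphX distribute across a cluster of workers.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
damping = 0.85
ranks = {node: 1.0 / len(graph) for node in graph}

for _ in range(50):  # iterate until (approximately) converged
    contribs = {node: 0.0 for node in graph}
    for node, neighbours in graph.items():
        share = ranks[node] / len(neighbours)
        for nb in neighbours:
            contribs[nb] += share  # message passing, as in a BSP superstep
    ranks = {node: (1 - damping) / len(graph) + damping * c
             for node, c in contribs.items()}

# Ranks form a probability distribution over nodes.
print({n: round(r, 3) for n, r in sorted(ranks.items())})
```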

*Readings*:

- Drabas, T. and Lee, D., Learning PySpark, Chapter 7: GraphFrames, Packt, 2017
- Spark GraphX programming guide
- Gonzalez, J. E. et al, GraphX: Graph Processing in a Distributed Dataflow Framework, OSDI 2014

*Further Resources*:

- Malewicz, G. et al, Pregel: A System for Large-Scale Graph Processing, ACM SIGMOD 2010; open source cousin: Apache Giraph
- Low, Y. et al, Distributed GraphLab: A Framework for Machine Learning and Data Mining in the Cloud, VLDB 2012
- Valiant, L. G., A Bridging Model for Parallel Computation, Communications of the ACM, Vol 33, No 8, August 1990

*Lab*: **Analysis of StackExchange user-posts-topics relations**

- Importing StackExchange relations into graphframes
- Computing degree and PageRank node centralities
- Answering graph motif queries
- Breadth-first search and connected components

In this week we consider basic concepts of data stream processing systems. We explain global aggregators and cumulative and sliding-window stream processing tasks. We introduce the concept of publish-subscribe systems, taking Apache Kafka as an example. We discuss the importance of fault tolerance in stream processing systems and fault-tolerance models such as *exactly-once* execution. In this context, we discuss the guarantees provided by Apache ZooKeeper, an open-source server that enables highly reliable distributed coordination.
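A local sketch of a sliding-window count over a stream, the basic operation that streaming engines maintain continuously; the window size and stream contents below are illustrative:

```python
from collections import Counter, deque

WINDOW = 4
window = deque(maxlen=WINDOW)   # old elements fall out automatically
counts = Counter()

def observe(topic):
    """Ingest one stream element, keeping counts over the last WINDOW items."""
    if len(window) == WINDOW:
        counts[window[0]] -= 1  # the element about to be evicted
    window.append(topic)
    counts[topic] += 1

for topic in ["spark", "kafka", "spark", "hive", "spark", "kafka"]:
    observe(topic)

# Counts reflect only the last WINDOW = 4 elements of the stream.
print(counts.most_common(1))  # [('spark', 2)]
```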

*Further Resources*:

- Zaharia, M. et al, Discretized Streams: Fault-Tolerant Streaming Computation at Scale, SOSP 2013
- Apache Kafka documentation

*Lab*: **Twitter feed processing**

- Import Twitter feed into Kafka as a topic
- Integrate Spark with Kafka
- Track heavy-hitter topics
- Visualize topic trends

In this week we introduce basic concepts of distributed machine learning algorithms for regression and classification tasks. We discuss batch optimization methods for model parameter estimation using gradient descent and its variations, such as BFGS and L-BFGS. We also cover online optimization methods such as stochastic gradient descent (SGD), parallel SGD and mini-batch SGD. We discuss model and data parallelization approaches.
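The mini-batch SGD update can be sketched for least-squares linear regression in plain Python; a distributed implementation runs the same update in parallel on shards of the data. The data are synthetic and the hyperparameters illustrative:

```python
import random

# Fit y ≈ w*x + b by mini-batch SGD on noiseless synthetic data
# generated from the true parameters w = 2.0, b = 1.0.
random.seed(0)
data = [(x, 2.0 * x + 1.0) for x in [i / 100 for i in range(100)]]

w, b, lr, batch_size = 0.0, 0.0, 0.1, 10
for _ in range(500):
    batch = random.sample(data, batch_size)
    # Gradient of the mean squared error over the mini-batch.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in batch) / batch_size
    grad_b = sum(2 * (w * x + b - y) for x, y in batch) / batch_size
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 1), round(b, 1))  # close to the true parameters 2.0 and 1.0
```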

*Readings*:

- Bottou, L. and Le Cun, Y., Large Scale Online Learning, NIPS 2003
- Drabas, T. and Lee, D., Learning PySpark, Chapter 5: Introducing MLlib, Packt, 2017
- Spark programming guide, MLlib: RDD-based API

*Further Resources*:

- Li, M., Scaling Distributed Machine Learning with the Parameter Server, OSDI 2014
- Numerical optimization: understanding L-BFGS, blog, December 2, 2014
- Machine Learning Glossary

*Lab*: **Spark ML package and churn prediction task**

- Basic features of the Spark MLlib package
- Churn prediction:
- Import the Orange Telecoms Churn dataset
- Train a logistic regression model, compute predictions
- Train a decision tree model, compute predictions
- Evaluate and compare the two models

Note: assignment for grading to be given in this week

In this week we continue by considering distributed machine learning algorithms for clustering and collaborative filtering tasks. We discuss a MapReduce algorithm for the k-means clustering problem, as well as an iterative algorithm for collaborative filtering tasks. We consider the Spark API approaches provided by the MLlib and ML packages. In the latter context, we introduce the concept of a pipeline consisting of a dataflow passing through transformer and estimator operators.
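The k-means MapReduce formulation can be sketched locally: the map step assigns each point to its nearest centroid, and the reduce step averages the points assigned to each centroid. The data and initial centroids below are toy values:

```python
from collections import defaultdict

points = [(0.0, 0.0), (0.5, 0.2), (9.0, 9.5), (10.0, 9.0)]
centroids = [(0.0, 0.0), (10.0, 10.0)]

def dist2(p, c):
    """Squared Euclidean distance between a point and a centroid."""
    return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2

for _ in range(5):
    # Map: emit (nearest-centroid index, point).
    assignments = defaultdict(list)
    for p in points:
        idx = min(range(len(centroids)), key=lambda i: dist2(p, centroids[i]))
        assignments[idx].append(p)
    # Reduce: new centroid = mean of the points assigned to it.
    centroids = [
        (sum(p[0] for p in ps) / len(ps), sum(p[1] for p in ps) / len(ps))
        for idx, ps in sorted(assignments.items())
    ]

print(centroids)  # two cluster centres near (0.25, 0.1) and (9.5, 9.25)
```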

*Readings*:

- Murphy, K. P., Machine Learning: A Probabilistic Perspective, k-means, Section 11.4.2.5, The MIT Press, 2012
- Drabas, T. and Lee, D., Learning PySpark, Chapter 5: Introducing MLlib and Chapter 6: Introducing the ML Package, Packt, 2017
- Spark programming guide, MLlib: RDD-based API

*Further Resources*:

- Moore, A., K-means and Hierarchical Clustering Tutorial
- Wang, Q., Spark machine learning pipeline by example, August 31, 2016
- Zadeh, R. et al, Matrix Computations and Optimizations in Spark, KDD 2016

*Lab*: **Clustering and movies recommendation**

- k-means clustering using a MapReduce algorithm
- Movie recommendation using the MovieLens dataset: train a collaborative filtering model using the Alternating Least Squares (ALS) algorithm

Guest lecture: “Democratizing AI,” Marc Cohen, Software Engineer, Google.

The lecture will provide an overview of cloud services provided by Google including TensorFlow, vision API, translation API, video intelligence API, cloud ML engine, and managed TensorFlow at scale.

*Lab*: **Using APIs for solving AI tasks**

- g.co/codelabs assignment

In our last lecture, we introduce basic concepts of performing numerical computations using dataflow graphs. In such settings, graph nodes represent mathematical operations, while graph edges represent the multidimensional data arrays that flow between them. We explain the architecture of TensorFlow, an open-source library for numerical computation using dataflow graphs. We go over the use case of learning a deep neural network, taking the basic architecture of a feedforward deep neural network.
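A toy illustration of the dataflow-graph idea in plain Python: the graph is built first and evaluated later, with nodes as operations and edges as values. TensorFlow implements the same abstraction at scale, with tensors on the edges and automatic differentiation added on top:

```python
class Node:
    """One operation in a dataflow graph; inputs are edges from other nodes."""
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def evaluate(self):
        # Evaluate input edges first, then apply this node's operation.
        return self.op(*(n.evaluate() for n in self.inputs))

def const(value):
    """A source node that produces a fixed value."""
    return Node(lambda: value)

# Build the graph for (2 + 3) * 4 without computing anything yet.
add = Node(lambda a, b: a + b, const(2.0), const(3.0))
mul = Node(lambda a, b: a * b, add, const(4.0))

# Only now is the graph executed.
print(mul.evaluate())  # 20.0
```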

*Readings*:

- TensorFlow API docs
- Abadi, M. et al, TensorFlow: A System for Large-Scale Machine Learning, OSDI 2016
- Drabas, T. and Lee, D., Learning PySpark, Chapter 8: TensorFrames, Packt, 2017
- TensorFrames GitHub page

*Further Resources*:

- TensorFlow GitHub page
- Abadi et al, A computational model for TensorFlow: an introduction, MAPL 2017
- Dean, J., Large-Scale Deep Learning with TensorFlow, ScaledML 2016
- Yu, D. et al, An Introduction to Computational Networks and the Computational Network Toolkit, Microsoft Research Technical Report, 2014

*Lab*: **Distributed Tensorflow**

- Import a dataset of labeled images
- Specify a feedforward deep neural network model
- Train the model
- Evaluate the classification accuracy of the trained model