ST446 Distributed Computing for Big Data

Lent Term 2018

Instructors

Teaching Assistant

Course Information

No lectures or classes will take place during School Reading Week 6.

Week Topic
1 Introduction to basic concepts and system architectures
2 Databases and data storage systems
3 Querying unstructured datasets
4 Querying structured datasets
5 Graph data processing
6 Reading Week
7 Stream data processing
8 Scalable machine learning I
9 Scalable machine learning II
10 Numerical computations using data flow graphs
11 Deployment of computation jobs in production

Course Description

This course will cover the principles of distributed systems for storing and processing big data. This includes the principles of storage systems, databases and data models in common use by on-premise data analytics platforms and cloud computing services, as well as the principles of computing over large datasets in distributed systems built from multi-core processors and cluster computing systems. Students will learn how to perform canonical distributed computing tasks in the batch, streaming and graph processing computation models, and how to run scalable machine learning algorithms for regression, classification, clustering and collaborative filtering tasks.

The course uses a project-based learning approach in which students gain hands-on experience with writing and running computer programmes through computer workshop exercises and project assignments. This will equip students with key skills and knowledge about modern computation platforms for processing big data. In particular, students will get hands-on experience with Apache Spark, the fastest-growing general engine for processing big data, used across different industries, and with connecting Spark programmes to various databases and other systems.

Students will work on weekly exercises and project assignments using revision-control and group collaboration tools such as GitHub. Each student will develop code for solving one or more computation tasks on an input dataset, and will use GitHub to access and submit course materials and assignments.

On the theory side, we will introduce the principles of distributed databases, their design objectives, querying paradigms using the MapReduce style of computation, general numerical computations using dataflow graphs, and querying using SQL application programming interfaces. We will consider graph processing algorithms for querying graph properties and for iterative computations on input graph data. We will also introduce the principles of stream processing, including how to perform computations and execute queries over a sliding window of input data stream elements. We will study the principles of scalable machine learning algorithms based on parallel implementations of gradient descent style algorithms for minimizing a loss function, used for training regression and classification models. We will also consider distributed MapReduce-based computations for training clustering models such as k-means and collaborative filtering models based on matrix factorization. We will consider numerical computations using dataflow graphs, with a focus on the use case of learning a deep neural network for image classification and other classification tasks. Students will be encouraged to work with computations and data relevant to their own interests.

On the practical side, we will cover a variety of tools that are part of a modern data scientist’s toolkit, including distributed computing using Apache Spark, MapReduce-style processing of big datasets, application programming interfaces for querying structured and unstructured datasets, stream data processing, and deploying large-scale machine learning models. You will learn how to write programmes that define Spark jobs using the Python API and how to deploy a Spark job in a production environment. You will learn how to connect Spark data structures with a variety of external data sources, including key-value databases, relational databases, and publish-subscribe messaging systems.

For the final project, we will ask you to develop and run a distributed computation for a given dataset, which you will be expected to implement in a PySpark Jupyter notebook.

Organization

This course is an introduction to the fundamental concepts of distributed computing for big data and assumes no prior knowledge of these concepts.

The course will involve 20 hours of lectures and 15 hours of computer workshops in the LT.

Prerequisites

Some basic prior programming experience is expected. Prior experience with Python programming is desirable; for example, acquired through the compulsory courses of the MSc in Data Science programme.

Software

We will use several tools, notably Apache Spark (a general engine for computing over large distributed datasets), PySpark (the Python API for Spark), SQL APIs for querying datasets, and the TensorFlow library for dataflow programmes. Lectures and assignments will be posted on GitHub, and students are expected to use GitHub to submit problem sets and the final project assignment.

Where appropriate, we will use Jupyter notebooks for lab assignments, demonstrations, and the course notes themselves.

Assessment

Project assignment (80%) and continuous assessment in weeks 4 and 7 (10% each). Students will be expected to produce 10 problem sets in the LT.

Schedule


Week 1. Introduction to basic concepts and system architectures

In the first week, we will introduce the basic concepts and system architectures for big data processing. We will introduce the basic computing paradigms of batch, streaming, imperative, declarative, graph and machine learning data processing. We will discuss the main architectures of data storage systems based on key-value stores and other data models, along with their main design goals, such as consistency and optimisation for fast and reliable reads or writes. We will then introduce the basic concepts of multi-node computing: cluster computing systems consisting of multiple machines, multi-core processors, distributed file systems, partitioning of large data files into chunks or extents, distributed computing using master and worker nodes, and resource allocation through job scheduling using resource management systems such as YARN and Mesos.
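
As a first hands-on step, the sketch below creates a Spark entry point and inspects where it runs. It is a minimal sketch: the application name, the "local[4]" master URL and the memory setting are illustrative assumptions, with "yarn" being what one would pass on a cluster managed by YARN.

```python
from pyspark.sql import SparkSession

# Create the Spark entry point; "local[4]" runs on 4 local cores.
spark = (SparkSession.builder
         .appName("st446-week1")             # illustrative application name
         .master("local[4]")                 # replace with "yarn" on a cluster
         .config("spark.executor.memory", "2g")
         .getOrCreate())

sc = spark.sparkContext
print(sc.master)              # where the job runs
print(sc.defaultParallelism)  # default number of partitions

spark.stop()
```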

Readings:

Further Resources:

Lab: Hands-on system administration tools


Week 2. Databases and data storage systems

In this week we will introduce different data models, datasets, databases and data storage paradigms used for distributed computing for big data. We will discuss key-value databases such as Cassandra and more complex relational database models such as Hive. We will discuss different data formats for storing data, including CSV, TSV, JSON, XML, Parquet, Hive tables, RDF, and Azure blobs. We will introduce the basic data structures used in Spark, including the Resilient Distributed Dataset (RDD) and the DataFrame. We will discuss the design objectives of various large-scale distributed storage systems, such as consistency and fast reads or writes.
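
A minimal sketch of the two Spark data structures introduced this week, using illustrative toy rows; the commented read calls show how the same DataFrame API loads several of the formats listed above, with placeholder file paths.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("st446-week2").getOrCreate()
sc = spark.sparkContext

# RDD: an immutable, partitioned collection of Python objects.
rdd = sc.parallelize([("alice", 3), ("bob", 5)])

# DataFrame: distributed rows with a schema, created here from the RDD.
df = spark.createDataFrame(rdd, ["user", "posts"])
df.printSchema()

# The same DataFrame reader handles many of the formats discussed this week;
# the paths below are placeholders.
# df_json    = spark.read.json("data/records.json")
# df_csv     = spark.read.option("header", True).csv("data/records.csv")
# df_parquet = spark.read.parquet("data/records.parquet")
```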

Readings:

Further Resources:

Lab: Import data to RDD and DataFrame


Week 3. Querying unstructured datasets

In this week we will study how to query large unstructured datasets. We will introduce the parallel computing paradigm of MapReduce. We will discuss how to manage and query datasets using Spark RDDs. We will learn how to create an RDD, use transformations such as map, flatMap, filter, distinct, sample, leftOuterJoin and repartition, as well as actions such as take, collect, reduce, count, saveAsTextFile, and foreach. We will introduce the concept of lambda expressions and how to use regular expressions.
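
The classic MapReduce word count, sketched below on toy input, combines several of these operations; reduceByKey, used here for the per-word sum, is a further standard RDD transformation beyond those listed above.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("st446-week3").getOrCreate()
sc = spark.sparkContext

lines = sc.parallelize(["big data", "big compute", "data flows"])

# MapReduce-style word count: flatMap splits lines into words, map emits
# (word, 1) pairs, and reduceByKey sums the counts per word.
counts = (lines
          .flatMap(lambda line: line.split())   # lambda expression as mapper
          .map(lambda word: (word, 1))
          .reduceByKey(lambda a, b: a + b))

# collect() and count() are actions: they trigger the actual computation.
print(counts.collect())   # e.g. [('big', 2), ('data', 2), ('compute', 1), ('flows', 1)]
print(lines.filter(lambda line: "data" in line).count())
```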

Readings:

Further Resources:

Lab: Using RDDs and MapReduce tasks


Week 4. Querying structured datasets

In this week we will consider how to query datasets that have a schema. We will introduce the concept of a DataFrame and learn how to query data using the DataFrame query API and how to execute SQL queries. We will discuss the computational complexity of standard queries and query optimization techniques. We will consider how to compute fast approximate query answers using sampling techniques such as reservoir sampling, and data summarizations or sketches, such as the HyperLogLog sketch for approximating the number of distinct elements in a multiset and the count-min sketch for frequency estimation.
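
The sketch below runs the same toy aggregation through the DataFrame API and the SQL interface, then an approximate distinct count; Spark's approx_count_distinct is backed by a HyperLogLog++ sketch. The table contents are illustrative.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import approx_count_distinct

spark = SparkSession.builder.appName("st446-week4").getOrCreate()

df = spark.createDataFrame(
    [("alice", "london"), ("bob", "paris"), ("carol", "london")],
    ["user", "city"])

# DataFrame query API.
df.groupBy("city").count().show()

# The same query through the SQL interface.
df.createOrReplaceTempView("users")
spark.sql("SELECT city, COUNT(*) AS n FROM users GROUP BY city").show()

# Approximate distinct count via a HyperLogLog++ sketch.
df.select(approx_count_distinct("user")).show()
```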

Readings:

Further Resources:

Lab: SQL queries on table data


Week 5. Graph data processing

In this week we will consider principles and systems for scalable processing of large-scale graph data. These include queries such as evaluating node centralities (e.g. degree centrality), graph traversal or motif queries for finding structural patterns in graph data (e.g. identifying friends-of-friends of a person who were born in London), and iterative algorithms on graph input data (e.g. computing PageRank node centralities). We will discuss different data models for representing graph data, such as RDF, as well as query languages used in graph databases, including SPARQL, Gremlin, Cypher and openCypher. We will introduce the bulk synchronous parallel computation model that underlies the design of modern computation platforms for iterative computing on input graph data.
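
One way to run both query types from PySpark is the GraphFrames package; using it here is an assumption for illustration, not a course requirement. The sketch below expresses a friends-of-friends motif filtered on birthplace, plus PageRank, over a toy graph, and assumes a matching graphframes package is available to PySpark (e.g. via `pyspark --packages graphframes:graphframes:0.5.0-spark2.1-s_2.11`).

```python
from pyspark.sql import SparkSession
from graphframes import GraphFrame

spark = SparkSession.builder.appName("st446-week5").getOrCreate()

# Toy graph: vertices need an "id" column; edges need "src" and "dst".
vertices = spark.createDataFrame(
    [("a", "London"), ("b", "Paris"), ("c", "London")], ["id", "born"])
edges = spark.createDataFrame(
    [("a", "b"), ("b", "c")], ["src", "dst"])
g = GraphFrame(vertices, edges)

# Motif query: friends-of-friends, filtered to people born in London.
fof = g.find("(x)-[]->(y); (y)-[]->(z)").filter("z.born = 'London'")
fof.show()

# Iterative computation in the bulk synchronous parallel style: PageRank.
g.pageRank(resetProbability=0.15, maxIter=10).vertices.show()
```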

Readings:

Further Resources:

Lab: Analysis of StackExchange user-posts-topics relations


Week 6. Reading Week


Week 7. Stream data processing

In this week we will consider the basic concepts of data stream processing systems. We will explain global aggregation, cumulative, and sliding-window stream data processing tasks. We will introduce the concept of publish-subscribe systems, using Apache Kafka as an example. We will discuss the importance of fault tolerance in stream processing systems and fault tolerance models such as exactly-once execution. In this context, we will discuss the guarantees provided by ZooKeeper, an open-source server that enables highly reliable distributed coordination.
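
As a sketch of the sliding-window idea, the Structured Streaming job below counts Kafka events per 10-minute window sliding every 5 minutes. The broker address and topic name are placeholders, and it assumes the spark-sql-kafka connector is on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import window, col

spark = SparkSession.builder.appName("st446-week7").getOrCreate()

# Subscribe to a Kafka topic; "localhost:9092" and "events" are placeholders.
stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events")
          .load())

# Sliding-window aggregation: 10-minute windows, sliding every 5 minutes,
# keyed on the event timestamp that the Kafka source attaches to each record.
counts = (stream
          .groupBy(window(col("timestamp"), "10 minutes", "5 minutes"))
          .count())

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```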

Readings:

Further Resources:

Lab: Twitter feed processing


Week 8. Scalable machine learning I

In this week we will introduce the basic concepts of distributed machine learning algorithms for regression and classification tasks. We will discuss batch optimisation methods for model parameter estimation, including gradient descent and quasi-Newton variants such as BFGS and L-BFGS. We will also cover online optimisation methods such as stochastic gradient descent (SGD), parallel SGD and mini-batch SGD. We will also discuss model and data parallelisation approaches.
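
A minimal sketch of training a classifier with the spark.ml API; Spark's LogisticRegression is fitted with solvers from the L-BFGS family discussed above. The feature values and labels below are purely illustrative toy data.

```python
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("st446-week8").getOrCreate()

# Toy training set: (features, label) rows.
train = spark.createDataFrame(
    [(Vectors.dense([0.0, 1.1]), 0.0),
     (Vectors.dense([2.0, 1.0]), 1.0),
     (Vectors.dense([2.0, 1.3]), 1.0)],
    ["features", "label"])

lr = LogisticRegression(maxIter=10, regParam=0.01)
model = lr.fit(train)    # distributed, gradient-based optimisation
print(model.coefficients, model.intercept)
```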

Readings:

Further Resources:

Lab: Churn prediction using the MLlib package


Week 9. Scalable machine learning II

In this week we will continue with distributed machine learning algorithms for clustering and collaborative filtering tasks. We will discuss a MapReduce algorithm for the k-means clustering problem, as well as an iterative algorithm for a collaborative filtering problem. We will consider the Spark APIs provided by the MLlib and ML packages. For the latter, we will introduce the concept of a pipeline, which consists of a dataflow passing through transformer and estimator operators.
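
The sketch below illustrates the pipeline concept on toy data: a VectorAssembler transformer feeds a KMeans estimator. The column names, toy points and the number of clusters are illustrative choices.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("st446-week9").getOrCreate()

# Toy points forming two clear clusters.
df = spark.createDataFrame(
    [(0.0, 0.0), (1.0, 1.0), (9.0, 8.0), (8.0, 9.0)], ["x", "y"])

# Transformer: assembles raw columns into a feature vector.
assembler = VectorAssembler(inputCols=["x", "y"], outputCol="features")
# Estimator: fitted on the transformer's output.
kmeans = KMeans(k=2, seed=1)

pipeline = Pipeline(stages=[assembler, kmeans])
model = pipeline.fit(df)            # stages are fitted in order
model.transform(df).select("x", "y", "prediction").show()
```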

Readings:

Further Resources:

Lab: Clustering and movie recommendation


Week 10. Numerical computations using data flow graphs

We will introduce the basic concepts of performing numerical computations using data flow graphs. In such settings, the graph nodes represent mathematical operations, while the graph edges represent the multidimensional data arrays that flow between them. We will explain the architecture of TensorFlow, an open-source library for numerical computations using data flow graphs. We will go over the use case of learning a deep neural network, taking the basic architecture of a feedforward deep neural network.
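
A minimal sketch of a dataflow graph in the TensorFlow 1.x graph-and-session API (current when this course ran): operations are declared as graph nodes, and nothing is evaluated until a session runs them. The single sigmoid layer stands in for one layer of a feedforward network; shapes and initial values are illustrative.

```python
import tensorflow as tf

# Build the dataflow graph: nodes are operations, edges carry tensors.
graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=[None, 2], name="x")
    W = tf.Variable(tf.zeros([2, 1]), name="W")
    b = tf.Variable(tf.zeros([1]), name="b")
    y = tf.sigmoid(tf.matmul(x, W) + b)   # one feedforward layer
    init = tf.global_variables_initializer()

# Nothing has been computed yet; a session executes the graph.
with tf.Session(graph=graph) as sess:
    sess.run(init)
    print(sess.run(y, feed_dict={x: [[1.0, 2.0]]}))
```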

Readings:

Further Resources:

Lab: Deep neural network learning


Week 11. Deployment of computation jobs in production

In the last week, we will discuss how to deploy large-scale computations on a production cluster system. This will cover setting up a cluster, running jobs over a varying number of machines in the cluster, and tracking their progress. We will consider simple MapReduce jobs as well as machine learning algorithms for prediction tasks on large-scale data.
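
A sketch of a simple MapReduce job packaged for cluster deployment; the HDFS paths, the resource settings and the spark-submit invocation in the header comment are illustrative assumptions, not course-mandated values.

```python
# Saved as wordcount.py, this could be submitted to a YARN cluster with
# something like:
#   spark-submit --master yarn --deploy-mode cluster \
#     --num-executors 10 --executor-memory 4g wordcount.py
from pyspark.sql import SparkSession

if __name__ == "__main__":
    spark = SparkSession.builder.appName("st446-week11").getOrCreate()
    sc = spark.sparkContext

    # A simple MapReduce job; in production the input and output would be
    # paths on a distributed file system such as HDFS (placeholders here).
    counts = (sc.textFile("hdfs:///data/input.txt")
              .flatMap(lambda line: line.split())
              .map(lambda w: (w, 1))
              .reduceByKey(lambda a, b: a + b))
    counts.saveAsTextFile("hdfs:///data/output")

    spark.stop()
```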

Readings:

Lab: Click prediction using 1TB Criteo dataset