No lectures or classes will take place during School Reading Week 6.
This course covers the main principles of the architecture and application programming interfaces of distributed systems for storing and processing big data. The topics covered include distributed file systems, storage systems, and data models that are in common use in modern on-premise data analytics platforms and cloud computing services. The course covers the principles of computing over large datasets in distributed computing systems involving multi-core processors and cluster computing systems. Students will learn how to perform canonical distributed computing tasks in batch, interactive and stream processing settings, and how to run scalable machine learning algorithms for regression, classification, clustering and collaborative filtering tasks.
This course uses a project-based learning approach where students gain hands-on experience in using computing tools and services and in writing software code through computer workshop exercises and project assignments. This equips students with key skills and knowledge about modern computing platforms for processing big data. In particular, students gain hands-on experience in working with Apache Hadoop, an open-source software framework for distributed storage and processing of large datasets using the MapReduce programming model, and with related services such as Apache Hive. They also gain hands-on experience in working with Apache Spark, a general engine for processing big data used across different industries, and in connecting Spark with various data sources and other systems. Students learn how to run big data analytics tasks locally on their laptops as well as on distributed clusters of machines in the cloud. Students work on weekly exercises and project assignments using GitHub, a popular platform for version control and group collaboration. Each student develops code for solving one or more computation tasks and uses GitHub for accessing and submitting course materials and assignments.
On the theory side, we introduce the main principles of distributed systems for big data analytics, their design objectives, querying paradigms using MapReduce and other computation models, general numerical computations using dataflow graphs, and querying data using SQL-like application programming interfaces. We consider graph processing algorithms for querying graph properties and iterative computations on input graph data. We introduce the main principles of stream data processing, including how to perform computations and execute queries over a sliding window of input data stream elements. We study the principles of scalable machine learning algorithms, based on parallel implementations of gradient-descent-based algorithms for minimizing a loss function, used for training classification and regression prediction models. We also consider distributed MapReduce computations for training clustering models such as k-means and collaborative filtering models based on matrix factorization. We consider numerical computations using dataflow graphs, with a focus on learning deep neural networks for image classification and other classification tasks. Students are encouraged to work on computing tasks and datasets that are of their own interest.
On the practical side, we cover a variety of tools that are part of a modern data scientist’s toolkit, including distributed computing using Apache Hadoop and MapReduce-style processing of big datasets, Apache Spark, application programming interfaces for querying structured and unstructured datasets such as Apache Hive, Spark SQL, and Google BigQuery, stream data processing, and deploying large-scale machine learning models. Students learn how to write programs that define Spark jobs using the Python API and how to deploy a Spark job in a production environment. Students learn how to connect Spark data structures with a variety of external data sources, including key-value data stores, relational databases, and publish-subscribe messaging systems.
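To give a flavour of the kind of program written in the workshops, the following is a minimal sketch of a PySpark job (a word count over a text file). It assumes a local Spark installation; the input path is a hypothetical example.

```python
# Minimal PySpark job sketch: word count over a text file.
# Assumes a local Spark installation; the input path is hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("word_count").getOrCreate()

lines = spark.sparkContext.textFile("data/input.txt")  # hypothetical path
counts = (lines.flatMap(lambda line: line.split())     # split lines into words
               .map(lambda word: (word, 1))            # emit (word, 1) pairs
               .reduceByKey(lambda a, b: a + b))       # sum counts per word

for word, count in counts.take(10):
    print(word, count)

spark.stop()
```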
For the course project, students are asked to conduct a big data analytics task using the principles and technologies learned in class, and possibly principles and technologies that are not covered in the course in great depth. The project report is typically in the form of a Jupyter notebook containing a working solution.
This course is an introduction to the fundamental concepts of distributed computing for big data and assumes no prior knowledge of these concepts.
The course involves 20 hours of lectures and 15 hours of computer workshops in the LT.
Basic programming knowledge is expected. Prior experience with Python programming is desirable; for example, acquired through a course in the MSc in Data Science programme.
We use a wide range of tools, including Jupyter notebooks, Apache Hadoop, Google Bigtable, Apache Hive, Apache Spark / PySpark (the Python API for Spark), SQL APIs for querying datasets, the TensorFlow library for dataflow programs, Docker, and various cloud computing services, e.g. those provided by the Google Cloud Platform. Lectures and assignments are posted on GitHub. Students are required to use GitHub to submit problem set solutions and the final project report.
Where appropriate, we use Jupyter notebooks for lab assignments, demonstrations, and the course notes themselves.
Project assignment (80%) and continuous assessment in weeks 4 and 7 (10% each). Students are expected to complete 10 problem sets in the LT.
In the first week, we provide an overview of basic concepts, starting with a definition of big data, followed by an overview of structured, semi-structured and unstructured data types, including relational database tables, delimiter-separated file formats such as csv and tsv files, and JSON and XML file formats. We then consider some main properties of traditional relational database management systems, their support of transactions and ACID properties. This leads us to consider the need for the design of scalable systems, the concepts of horizontal and vertical scaling, and various computer system bottlenecks. We then go on to consider modern big data analytics systems and how they have evolved over time. We introduce various computation paradigms such as batch processing, interactive processing, stream processing, and the lambda architecture. We discuss key developments such as the MapReduce computation model and NoSQL databases. The rest of the lecture discusses various computation tasks that led to the development of modern big data analytics systems, which are studied throughout the course.
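To make the distinction between structured and semi-structured formats concrete, the following short sketch reads delimiter-separated and JSON records using only the Python standard library; the field names and values are illustrative examples.

```python
# Sketch: parsing a small CSV table and a JSON record with the Python
# standard library; the field names and values are hypothetical examples.
import csv
import io
import json

csv_text = "id,name,city\n1,Alice,London\n2,Bob,Paris\n"
for row in csv.DictReader(io.StringIO(csv_text)):   # structured, tabular data
    print(row["id"], row["name"], row["city"])

json_text = '{"id": 3, "name": "Carol", "tags": ["student", "msc"]}'
record = json.loads(json_text)                       # semi-structured, nested data
print(record["name"], record["tags"])
```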
Readings:
Lab: Getting started
In this week, we first consider the main design principles of distributed file systems, explaining the original Google File System and its refinements, as well as other distributed file systems such as Hadoop Distributed File System (HDFS). We then consider the main design principles of distributed key-value stores such as Amazon Dynamo and columnar data storage systems such as BigTable and Apache Cassandra.
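To give a flavour of how Dynamo-style key-value stores partition data across nodes, the following is a minimal sketch of consistent hashing in plain Python; the node names, hash function and number of virtual nodes are illustrative assumptions.

```python
# Sketch of consistent hashing, the partitioning scheme used by Dynamo-style
# key-value stores to assign keys to nodes; all names here are illustrative.
import bisect
import hashlib

def _hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes, replicas=3):
        # Place each node on the ring several times ("virtual nodes").
        self._ring = sorted((_hash(f"{node}#{i}"), node)
                            for node in nodes for i in range(replicas))
        self._keys = [h for h, _ in self._ring]

    def node_for(self, key: str) -> str:
        # A key is stored on the first node clockwise from its hash position.
        idx = bisect.bisect(self._keys, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("user:42"))
```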
Readings:
Further Readings:
Lab: System installation and API practice
In this lecture we explain the basic principles of the distributed computation models that are in common use in the context of big data analytics systems. We start by explaining the MapReduce computation model, which is in widespread use for distributed processing of large datasets. We then move on to consider the Pregel computation model, developed for iterative computations such as computing the PageRank vector of a large-scale input graph. Finally, we consider the concept of a Resilient Distributed Dataset (RDD), a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner. This involves considering the types of operations performed on RDDs and their execution on a distributed cluster of machines.
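The following is a minimal PySpark sketch of the distinction between lazy RDD transformations and actions that trigger execution, assuming a local Spark installation; the data is illustrative.

```python
# Sketch of RDD transformations (lazy) versus actions (trigger execution),
# assuming a local SparkContext; the numbers are illustrative only.
from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd_demo")

numbers = sc.parallelize(range(1, 11))          # create an RDD from local data
squares = numbers.map(lambda x: x * x)          # transformation: nothing runs yet
evens = squares.filter(lambda x: x % 2 == 0)    # another lazy transformation

print(evens.collect())   # action: triggers the distributed computation
print(evens.sum())       # another action; recomputed unless the RDD is cached

sc.stop()
```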
Readings:
Further Readings:
Lab: MapReduce and resilient distributed datasets
In this week we consider systems for big data analytics using structured query languages. We start by considering the main architectural principles of Apache Hive, a data warehouse solution running on top of Apache Hadoop. We consider the data types and the query language used by Hive. We then consider the main design principles of Dremel (BigQuery) and Spark SQL for querying data using structured query languages.
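As a small illustration of SQL-style querying in Spark, the following sketch registers a DataFrame as a temporary view and queries it with Spark SQL; the table contents and column names are hypothetical, and a local SparkSession is assumed.

```python
# Sketch of querying a DataFrame with Spark SQL; the table and columns are
# hypothetical examples, assuming a local SparkSession.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql_demo").getOrCreate()

df = spark.createDataFrame(
    [("London", 2019, 8.9), ("Paris", 2019, 2.1)],
    ["city", "year", "population_millions"])
df.createOrReplaceTempView("cities")            # expose the DataFrame to SQL

spark.sql("""
    SELECT city, population_millions
    FROM cities
    WHERE population_millions > 5
""").show()

spark.stop()
```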
Readings:
Further Readings:
Lab: Hive and Spark SQL queries
Note: assignment for grading to be given in this week
In this week we consider principles and systems for scalable processing of large-scale graph data. This includes queries such as evaluating node centralities (e.g. degree centrality), graph traversal or motif queries for finding structures in graph data (e.g. identifying friends-of-friends of a person who were born in London), and iterative algorithms on graph input data (e.g. computing PageRank node centralities). We discuss different data models for representing graph data, such as RDF, as well as query languages used in graph databases, including SPARQL, Gremlin, Cypher and openCypher. We introduce the Bulk Synchronous Parallel computation model that underlies the design of modern computation platforms for iterative computing on input graph data.
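To illustrate the kind of iterative computation on graph data discussed above, the following is a plain-Python sketch of the PageRank iteration on a tiny hypothetical directed graph; the damping factor and the number of iterations are illustrative choices.

```python
# Sketch of the iterative PageRank computation on a tiny hypothetical graph.
damping = 0.85
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}   # node -> outgoing links
ranks = {node: 1.0 / len(graph) for node in graph}

for _ in range(20):                                 # fixed number of iterations
    contributions = {node: 0.0 for node in graph}
    for node, links in graph.items():
        share = ranks[node] / len(links)            # spread rank over out-links
        for dest in links:
            contributions[dest] += share
    ranks = {node: (1 - damping) / len(graph) + damping * c
             for node, c in contributions.items()}

print(ranks)
```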
Readings:
Further Resources:
Lab: Analysis of StackExchange user-posts-topics relations
In this week we consider basic concepts of data stream processing systems. We explain global aggregation, cumulative, and sliding-window stream data processing tasks. We introduce the concept of publish-subscribe systems, taking Apache Kafka as an example. We discuss the importance of fault tolerance in stream processing systems and discuss fault tolerance models such as exactly-once execution. In this context, we discuss the guarantees provided by ZooKeeper, an open-source server which enables highly reliable distributed coordination.
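The following is a minimal plain-Python sketch of a sliding-window aggregate over a stream of events, using a fixed-size window of the most recent elements; the event values are illustrative.

```python
# Sketch of a sliding-window aggregate over a stream of numeric events;
# the event values are illustrative stand-ins for an unbounded stream.
from collections import deque

window_size = 5
window = deque(maxlen=window_size)   # keeps only the last `window_size` elements

stream = [3, 7, 1, 9, 4, 6, 2, 8]
for event in stream:
    window.append(event)
    avg = sum(window) / len(window)
    print(f"event={event} window_sum={sum(window)} window_avg={avg:.2f}")
```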
Readings:
Further Resources:
Lab: Twitter feed processing
In this week we introduce the basic concepts of scalable distributed machine learning algorithms for regression and classification tasks. We discuss optimization methods for model parameter estimation, including iterative methods such as gradient descent, batch gradient descent, stochastic gradient descent, and quasi-Newton methods such as BFGS and L-BFGS.
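As a concrete illustration of stochastic gradient descent, the following is a small NumPy sketch for least-squares linear regression; the synthetic data, learning rate and number of epochs are illustrative assumptions (the labs use the Spark MLlib implementations).

```python
# Sketch of stochastic gradient descent for least-squares linear regression;
# the synthetic data, learning rate and number of epochs are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
true_w = np.array([2.0, -3.0])
y = X @ true_w + 0.1 * rng.normal(size=200)

w = np.zeros(2)
learning_rate = 0.05
for epoch in range(20):
    for i in rng.permutation(len(X)):             # one example at a time
        gradient = (X[i] @ w - y[i]) * X[i]       # gradient of the squared loss
        w -= learning_rate * gradient

print(w)   # should be close to [2, -3]
```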
Readings:
Further Resources:
Lab: Logistic regression using Spark MLlib
Note: assignment for grading to be given in this week
In this week we continue considering the topic of distributed machine learning algorithms with a focus on matrix factorization methods used for collaborative filtering (recommender systems) and topic modeling in text data.
We will discuss the matrix completion problem and the alternating least squares algorithm, together with its distributed implementation. We will then continue with distributed algorithms for singular value decomposition and their application to latent semantic analysis of text data. Finally, we will discuss a Bayesian model, known as Latent Dirichlet Allocation, for topic modeling of text data sources.
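To make the alternating least squares idea concrete, the following is a small NumPy sketch of ALS for matrix completion on a toy rating matrix; the matrix, latent dimension and regularisation parameter are illustrative assumptions (the labs use the Spark MLlib implementation).

```python
# Sketch of alternating least squares (ALS) for matrix completion in NumPy;
# the toy rating matrix, latent dimension k and regularisation are illustrative.
import numpy as np

R = np.array([[5.0, 3.0, 0.0, 1.0],
              [4.0, 0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 5.0],
              [0.0, 1.0, 5.0, 4.0]])
observed = R > 0                        # treat zeros as missing ratings
k, lam = 2, 0.1                         # latent dimension, regularisation
rng = np.random.default_rng(0)
U = rng.normal(size=(R.shape[0], k))    # user factors
V = rng.normal(size=(R.shape[1], k))    # item factors

for _ in range(20):
    for i in range(R.shape[0]):         # fix V, solve a least-squares problem per user
        idx = observed[i]
        A = V[idx].T @ V[idx] + lam * np.eye(k)
        U[i] = np.linalg.solve(A, V[idx].T @ R[i, idx])
    for j in range(R.shape[1]):         # fix U, solve a least-squares problem per item
        idx = observed[:, j]
        A = U[idx].T @ U[idx] + lam * np.eye(k)
        V[j] = np.linalg.solve(A, U[idx].T @ R[idx, j])

print(np.round(U @ V.T, 1))             # reconstructed rating matrix
```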
Readings:
Further Resources:
Lab: Collaborative filtering and topic modelling using Spark MLlib
In this lecture, we will introduce the basic approaches used for training large-scale machine learning models, with the main application being the training of large-scale deep neural networks for image recognition, speech processing and other tasks. We will cover basic distributed computing models, including data-parallel and model-parallel computation models. We will discuss the scalability properties of different approaches, and explain their efficiency with respect to computation and communication complexity.
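The following is a plain-NumPy sketch of the data-parallel approach: each simulated worker computes a gradient on its own data shard and the gradients are averaged before the parameter update; the number of workers, learning rate and synthetic data are illustrative assumptions.

```python
# Sketch of data-parallel gradient descent: workers compute local gradients on
# their own data shards, which are then averaged (parameter-server style).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=400)

num_workers = 4
shards = list(zip(np.array_split(X, num_workers), np.array_split(y, num_workers)))

w = np.zeros(3)
for step in range(200):
    grads = []
    for X_shard, y_shard in shards:     # each worker's local least-squares gradient
        grads.append(X_shard.T @ (X_shard @ w - y_shard) / len(y_shard))
    w -= 0.1 * np.mean(grads, axis=0)   # average the gradients, then update

print(w)   # should be close to [1, -2, 0.5]
```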
Lab: Introduction to deep learning using TensorFlow
In our last lecture, we continue the topic of scaling up distributed machine learning algorithms, but from a systems perspective, focusing on data structures, application programming interfaces and how the theoretical principles are put into practice. We will discuss the concept of a dataflow graph, where nodes of the graph represent operations (e.g. a matrix multiplication, a non-linear activation function) and edges represent the flow of data between operations. We will primarily focus on one system that is based on the concept of dataflow graph computations, namely TensorFlow, used for learning and inference with deep neural networks. We will explain the key system architectural concepts that underlie the design of TensorFlow as well as its application programming interface.
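As a minimal illustration of a dataflow-style computation, the following sketch applies a single dense layer with a ReLU activation in TensorFlow; it assumes TensorFlow 2.x is installed, and the tensor shapes and values are illustrative.

```python
# Minimal sketch of a dataflow-style computation in TensorFlow: operations
# (matrix multiply, add, ReLU) are nodes, tensors flow between them.
# Assumes TensorFlow 2.x; the shapes and values are illustrative.
import tensorflow as tf

x = tf.constant([[1.0, 2.0]])                 # input tensor, shape (1, 2)
W = tf.Variable([[0.5, -1.0, 2.0],
                 [1.5,  0.0, -0.5]])          # weight matrix, shape (2, 3)
b = tf.Variable([0.1, 0.1, 0.1])              # bias vector, shape (3,)

hidden = tf.nn.relu(tf.matmul(x, W) + b)      # one dense layer with ReLU
print(hidden.numpy())
```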
Readings:
Further Resources:
Lab: Distributed TensorFlow