Mining of Massive Datasets

Anand Rajaraman
Jure Leskovec, Stanford Univ.
Jeffrey D. Ullman, Stanford Univ.

Copyright 2010, 2011, 2012 Anand Rajaraman, Jure Leskovec, and Jeffrey D. Ullman

Preface

This book evolved from material developed over several years by Anand Rajaraman and Jeff Ullman for a one-quarter course at Stanford. The course, CS345A, titled "Web Mining," was designed as an advanced graduate course, although it has become accessible and interesting to advanced undergraduates. When Jure Leskovec joined the Stanford faculty, we reorganized the material considerably. He introduced a new course, CS224W, on network analysis, and added material to CS345A, which was renumbered CS246. The three authors also introduced a large-scale data-mining project course, CS341. The book now contains material taught in all three courses.

What the Book Is About

At the highest level of description, this book is about data mining. However, it focuses on data mining of very large amounts of data, that is, data so large it does not fit in main memory. Because of the emphasis on size, many of our examples are about the Web or data derived from the Web. Further, the book takes an algorithmic point of view: data mining is about applying algorithms to data, rather than using data to train a machine-learning engine of some sort. The principal topics covered are:

1. Distributed file systems and map-reduce as a tool for creating parallel algorithms that succeed on very large amounts of data.

2. Similarity search, including the key techniques of minhashing and locality-sensitive hashing.

3. Data-stream processing and specialized algorithms for dealing with data that arrives so fast it must be processed immediately or lost.

4. The technology of search engines, including Google's PageRank, link-spam detection, and the hubs-and-authorities approach.

5. Frequent-itemset mining, including association rules, market-baskets, the A-Priori Algorithm, and its improvements.

6. Algorithms for clustering very large amounts of data.
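As a small illustration of topic 2 above, the core idea of minhashing can be sketched in a few lines of Python. This is an illustrative sketch, not code from the book: the function names and the choice of random linear hash functions of the form (a*x + b) mod p are assumptions made here for concreteness. The fraction of positions on which two signatures agree estimates the Jaccard similarity of the underlying sets.

```python
import random

def minhash_signature(sets, num_hashes=100, seed=0):
    """For each of num_hashes random hash functions, record the minimum
    hash value over a set's elements. Similar sets agree on many entries."""
    rng = random.Random(seed)
    p = 2**31 - 1  # a prime larger than any element we hash
    # Random linear hash functions h(x) = (a*x + b) mod p (an illustrative choice)
    funcs = [(rng.randrange(1, p), rng.randrange(p)) for _ in range(num_hashes)]
    sigs = {}
    for name, elements in sets.items():
        sigs[name] = [min((a * x + b) % p for x in elements) for a, b in funcs]
    return sigs

def estimated_jaccard(sig1, sig2):
    # Fraction of matching signature positions estimates |A ∩ B| / |A ∪ B|
    return sum(u == v for u, v in zip(sig1, sig2)) / len(sig1)
```

The point of the signature is compression: each set, however large, is replaced by a short vector of integers, and similarity can then be estimated from the vectors alone. Locality-sensitive hashing, the companion technique named in the same list item, goes further by banding these signatures so that only likely-similar pairs are ever compared.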