Modeling Events with Cascades of Poisson Processes
Aleksandr Simma
EECS Department
University of California, Berkeley
alex@

Michael I. Jordan
Depts. of EECS and Statistics
University of California, Berkeley
jordan@

Abstract

We present a probabilistic model of events in continuous time in which each event triggers a Poisson process of successor events. The ensemble of observed events is thereby modeled as a superposition of Poisson processes. Efficient inference is feasible under this model with an EM algorithm. Moreover, the EM algorithm can be implemented as a distributed algorithm, permitting the model to be applied to very large datasets. We apply these techniques to the modeling of Twitter messages and the revision history of Wikipedia.

1 Introduction

Real-life observations are often naturally represented by events: bundles of features that occur at a particular moment in time. Events are generally not independent; one event may cause others to occur. Given observations of events, we wish to produce a probabilistic model that can be used not only for prediction and parameter estimation but also for identifying structure and relationships in the data-generating process.

We present an approach to building probabilistic models for collections of events in which each event induces a Poisson process of triggered events. This approach lends itself to efficient inference with an EM algorithm that can be distributed across computing clusters and thereby applied to massive datasets. We present two case studies: the first involves a collection of Twitter messages on financial data, and the second focuses on the revision history of Wikipedia. The latter example is a particularly large-scale problem; the data consist of billions of potential interactions among events.

Our approach is based on a continuous-time formalism. There have been a relatively small number of machine learning papers focused on continuous-time graphical models; examples include the …
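To make the generative mechanism above concrete: in this family of models, the rate of new events at time t is a baseline rate plus a contribution from every earlier event, and simulation proceeds by drawing background events and then, recursively, each event's triggered successors. The following is a minimal sketch of the simplest such cascade; the exponential triggering kernel and the parameter names mu (background rate), alpha (expected offspring per event), omega (decay rate), and T (observation window) are illustrative assumptions, not the paper's specification.

    import numpy as np

    def simulate_cascade(mu=0.5, alpha=0.8, omega=1.0, T=100.0, seed=0):
        """Simulate a cascade of Poisson processes on the window [0, T].

        Background events arrive as a homogeneous Poisson process with
        rate mu. Each event at time t triggers its own Poisson process of
        successors with rate alpha * omega * exp(-omega * (s - t)) for
        s > t, so each event has Poisson(alpha) expected offspring; the
        observed point pattern is the superposition of all these processes.
        """
        rng = np.random.default_rng(seed)
        # Background events: Poisson(mu * T) points, uniform on [0, T].
        pending = list(rng.uniform(0.0, T, size=rng.poisson(mu * T)))
        events = []
        while pending:
            t = pending.pop()
            events.append(t)
            # Direct offspring count; alpha < 1 keeps the cascade subcritical.
            n_children = rng.poisson(alpha)
            # Offspring delays are i.i.d. Exponential with mean 1 / omega.
            delays = rng.exponential(1.0 / omega, size=n_children)
            pending.extend(t + d for d in delays if t + d <= T)
        return np.sort(np.array(events))

    times = simulate_cascade()
    print(f"simulated {len(times)} events on [0, 100]")

With alpha < 1 each cascade is finite with probability one, which is what makes the recursion terminate.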
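The abstract's claim that efficient EM inference is feasible rests on a latent-variable view: each event's unknown parent is either the background process or some earlier event, and conditioning on these parents decouples the likelihood. Below is a minimal sketch of the resulting updates for the exponential-kernel special case above; the closed-form M-step, the O(N^2) responsibility matrix, and the initialization are simplifications for illustration, not the paper's distributed algorithm, which is designed precisely to avoid this quadratic blow-up on large datasets.

    import numpy as np

    def em_fit(times, T, n_iter=100):
        """EM for (mu, alpha, omega) given event times observed on [0, T].

        E-step: for each event j, the probability that it is a background
        event versus triggered by each earlier event i. M-step: closed-form
        updates from the expected counts. Edge effects near T are ignored.
        """
        times = np.sort(np.asarray(times, dtype=float))
        N = len(times)
        dt = times[None, :] - times[:, None]    # dt[i, j] = t_j - t_i
        parent = dt > 0                         # i is a candidate parent of j
        dt_pos = np.where(parent, dt, np.inf)   # inf zeroes non-parent terms
        dt_zero = np.where(parent, dt, 0.0)
        mu, alpha, omega = 1.0, 0.5, 1.0        # crude initialization
        for _ in range(n_iter):
            # E-step: responsibilities for each candidate parent of each event.
            trig = alpha * omega * np.exp(-omega * dt_pos)
            total = mu + trig.sum(axis=0)       # normalizer for each event j
            r_bg = mu / total                   # P(j is a background event)
            r_trig = trig / total               # P(i triggered j)
            # M-step: closed-form maximizers of the expected log-likelihood.
            mu = r_bg.sum() / T
            alpha = r_trig.sum() / N
            omega = r_trig.sum() / (r_trig * dt_zero).sum()
        return mu, alpha, omega

    # Reusing simulate_cascade from the previous sketch:
    mu_hat, alpha_hat, omega_hat = em_fit(simulate_cascade(), T=100.0)
    print(f"mu={mu_hat:.2f} alpha={alpha_hat:.2f} omega={omega_hat:.2f}")

On data simulated with the parameters above, the estimates should land near the generating values (mu around 0.5, alpha around 0.8, omega around 1), up to boundary effects from truncating cascades at T.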