Browse Results

Showing 48,351 through 48,375 of 55,447 results

Statistical Language and Speech Processing: 4th International Conference, SLSP 2016, Pilsen, Czech Republic, October 11-12, 2016, Proceedings (Lecture Notes in Computer Science #9918)

by Pavel Král and Carlos Martín-Vide

This book constitutes the refereed proceedings of the 4th International Conference on Statistical Language and Speech Processing, SLSP 2016, held in Pilsen, Czech Republic, in October 2016. The 11 full papers presented, together with two invited talks, were carefully reviewed and selected from 38 submissions. The papers cover topics such as anaphora and coreference resolution; authorship identification, plagiarism and spam filtering; computer-aided translation; corpora and language resources; data mining and semantic web; information extraction; information retrieval; knowledge representation and ontologies; lexicons and dictionaries; machine translation; multimodal technologies; natural language understanding; neural representation of speech and language; opinion mining and sentiment analysis; parsing; part-of-speech tagging; question answering systems; semantic role labeling; speaker identification and verification; speech and language generation; speech recognition; speech synthesis; speech transcription; speech correction; spoken dialogue systems; term extraction; text categorization; text summarization; user modeling.

Statistical Language and Speech Processing: 5th International Conference, SLSP 2017, Le Mans, France, October 23–25, 2017, Proceedings (Lecture Notes in Computer Science #10583)

by Nathalie Camelin, Yannick Estève and Carlos Martín-Vide

This book constitutes the refereed proceedings of the 5th International Conference on Statistical Language and Speech Processing, SLSP 2017, held in Le Mans, France, in October 2017. The 21 full papers presented were carefully reviewed and selected from 39 submissions. The papers cover topics such as anaphora and coreference resolution; authorship identification, plagiarism and spam filtering; computer-aided translation; corpora and language resources; data mining and semantic web; information extraction; information retrieval; knowledge representation and ontologies; lexicons and dictionaries; machine translation; multimodal technologies; natural language understanding; neural representation of speech and language; opinion mining and sentiment analysis; parsing; part-of-speech tagging; question answering systems; semantic role labeling; speaker identification and verification; speech and language generation; speech recognition; speech synthesis; speech transcription; speech correction; spoken dialogue systems; term extraction; text categorization; text summarization; user modeling. They are organized in the following sections: language and information extraction; post-processing and applications of automatic transcriptions; speech paralinguistics and synthesis; speech recognition: modeling and resources.

Statistical Learning and Modeling in Data Analysis: Methods and Applications (Studies in Classification, Data Analysis, and Knowledge Organization)

by Simona Balzano, Giovanni C. Porzio, Renato Salvatore, Domenico Vistocco and Maurizio Vichi

The contributions gathered in this book focus on modern methods for statistical learning and modeling in data analysis and present a series of engaging real-world applications. The book covers numerous research topics, ranging from statistical inference and modeling to clustering and factorial methods, from directional data analysis to time series analysis and small area estimation. The applications reflect new analyses in a variety of fields, including medicine, finance, engineering, marketing and cyber risk. The book gathers selected and peer-reviewed contributions presented at the 12th Scientific Meeting of the Classification and Data Analysis Group of the Italian Statistical Society (CLADAG 2019), held in Cassino, Italy, on September 11–13, 2019. CLADAG promotes advanced methodological research in multivariate statistics with a special focus on data analysis and classification, and supports the exchange and dissemination of ideas, methodological concepts, numerical methods, algorithms, and computational and applied results. This book, true to CLADAG’s goals, is intended for researchers and practitioners who are interested in the latest developments and applications in the field of data analysis and classification.

Statistical Learning for Big Dependent Data (Wiley Series in Probability and Statistics)

by Daniel Peña and Ruey S. Tsay

Master advanced topics in the analysis of large, dynamically dependent datasets with this insightful resource. Statistical Learning for Big Dependent Data delivers a comprehensive presentation of the statistical and machine learning methods useful for analyzing and forecasting large and dynamically dependent data sets. The book presents automatic procedures for modelling and forecasting large sets of time series data. Beginning with some visualization tools, the book discusses procedures and methods for finding outliers, clusters, and other types of heterogeneity in big dependent data. It then introduces various dimension reduction methods, including regularization and factor models such as the regularized Lasso in the presence of dynamical dependence and dynamic factor models. The book also covers other forecasting procedures, including index models, partial least squares, boosting, and now-casting. It further presents machine-learning methods, including neural networks, deep learning, classification and regression trees, and random forests. Finally, procedures for modelling and forecasting spatio-temporal dependent data are also presented. Throughout the book, the advantages and disadvantages of the methods discussed are given. The book uses real-world examples to demonstrate applications, including the use of many R packages. Finally, an R package associated with the book is available to assist readers in reproducing the analyses of examples and to facilitate real applications. Statistical Learning for Big Dependent Data includes a wide variety of topics for modeling and understanding big dependent data, such as: new ways to plot large sets of time series; an automatic procedure to build univariate ARMA models for individual components of a large data set; powerful outlier detection procedures for large sets of related time series; new methods for finding the number of clusters of time series, and discrimination methods, including support vector machines, for time series; broad coverage of dynamic factor models, including new representations and estimation methods for generalized dynamic factor models; discussion of the usefulness of the lasso with time series and an evaluation of several machine learning procedures for forecasting large sets of time series; forecasting large sets of time series with exogenous variables, including discussions of index models, partial least squares, and boosting; and an introduction to modern procedures for modeling and forecasting spatio-temporal data. Perfect for PhD students and researchers in business, economics, engineering, and science, Statistical Learning for Big Dependent Data also belongs on the bookshelves of practitioners in these fields who hope to improve their understanding of statistical and machine learning methods for analyzing and forecasting big dependent data.

Statistical Learning from a Regression Perspective (Springer Texts in Statistics)

by Richard A. Berk

This textbook considers statistical learning applications when interest centers on the conditional distribution of the response variable, given a set of predictors, and when it is important to characterize how the predictors are related to the response. This fully revised new edition includes important developments over the past 8 years. Consistent with modern data analytics, it emphasizes that a proper statistical learning data analysis derives from sound data collection, intelligent data management, appropriate statistical procedures, and an accessible interpretation of results. As in the first edition, a unifying theme is supervised learning that can be treated as a form of regression analysis. Key concepts and procedures are illustrated with real applications, especially those with practical implications. The material is written for upper-level undergraduate and graduate students in the social and life sciences and for researchers who want to apply statistical learning procedures to scientific and policy problems. The author uses this book in a course on modern regression for the social, behavioral, and biological sciences. All of the analyses included are done in R with code routinely provided.

Statistical Learning from a Regression Perspective (Springer Texts in Statistics)

by Richard A. Berk

This textbook considers statistical learning applications when interest centers on the conditional distribution of a response variable, given a set of predictors, and in the absence of a credible model that can be specified before the data analysis begins. Consistent with modern data analytics, it emphasizes that a proper statistical learning data analysis depends in an integrated fashion on sound data collection, intelligent data management, appropriate statistical procedures, and an accessible interpretation of results. The unifying theme is that supervised learning can properly be seen as a form of regression analysis. Key concepts and procedures are illustrated with a large number of real applications and their associated code in R, with an eye toward practical implications. The growing integration of computer science and statistics is well represented, including the occasional, but salient, tensions that result. Throughout, there are links to the big picture. The third edition considers significant advances in recent years, among which are: the development of overarching conceptual frameworks for statistical learning; the impact of “big data” on statistical learning; the nature and consequences of post-model-selection statistical inference; deep learning in various forms; the special challenges to statistical inference posed by statistical learning; the fundamental connections between data collection and data analysis; and interdisciplinary ethical and political issues surrounding the application of algorithmic methods in a wide variety of fields, each linked to concerns about transparency, fairness, and accuracy. This edition features new sections on accuracy, transparency, and fairness, as well as a new chapter on deep learning. Precursors to deep learning get an expanded treatment. The connections between fitting and forecasting are considered in greater depth. Discussion of the estimation targets for algorithmic methods is revised and expanded throughout to reflect the latest research. Resampling procedures are emphasized. The material is written for upper undergraduate and graduate students in the social, psychological and life sciences and for researchers who want to apply statistical learning procedures to scientific and policy problems.

Statistical Learning from a Regression Perspective (Springer Series in Statistics)

by Richard A. Berk

Statistical Learning from a Regression Perspective considers statistical learning applications when interest centers on the conditional distribution of the response variable, given a set of predictors, and when it is important to characterize how the predictors are related to the response. As a first approximation, this can be seen as an extension of nonparametric regression. Among the statistical learning procedures examined are bagging, random forests, boosting, and support vector machines. Response variables may be quantitative or categorical. Real applications are emphasized, especially those with practical implications. One important theme is the need to explicitly take into account asymmetric costs in the fitting process. For example, in some situations false positives may be far less costly than false negatives. Another important theme is to not automatically cede modeling decisions to a fitting algorithm. In many settings, subject-matter knowledge should trump formal fitting criteria. Yet another important theme is to appreciate the limitations of one’s data and not apply statistical learning procedures that require more than the data can provide. The material is written for graduate students in the social and life sciences and for researchers who want to apply statistical learning procedures to scientific and policy problems. Intuitive explanations and visual representations are prominent. All of the analyses included are done in R.

Statistical Learning in Genetics: An Introduction Using R (Statistics for Biology and Health)

by Daniel Sorensen

This book provides an introduction to computer-based methods for the analysis of genomic data. Breakthroughs in molecular and computational biology have contributed to the emergence of vast data sets, where millions of genetic markers for each individual are coupled with medical records, generating an unparalleled resource for linking human genetic variation to human biology and disease. Similar developments have taken place in animal and plant breeding, where genetic marker information is combined with production traits. An important task for the statistical geneticist is to adapt, construct and implement models that can extract information from these large-scale data. An initial step is to understand the methodology that underlies the probability models and to learn the modern computer-intensive methods required for fitting these models. The objective of this book, suitable for readers who wish to develop analytic skills to perform genomic research, is to provide guidance to take this first step. This book is addressed to numerate biologists who typically lack the formal mathematical background of the professional statistician. For this reason, considerably more detail in explanations and derivations is offered. It is written in a concise style and examples are used profusely. A large proportion of the examples involve programming with the open-source package R. The R code needed to solve the exercises is provided. The Markdown interface allows the students to implement the code on their own computer, contributing to a better understanding of the underlying theory. Part I presents methods of inference based on likelihood and Bayesian methods, including computational techniques for fitting likelihood and Bayesian models. Part II discusses prediction for continuous and binary data using both frequentist and Bayesian approaches. Some of the models used for prediction are also used for gene discovery. The challenge is to find promising genes without incurring a large proportion of false positive results. Therefore, Part II includes a detour on False Discovery Rate assuming frequentist and Bayesian perspectives. The last chapter of Part II provides an overview of a selected number of non-parametric methods. Part III consists of exercises and their solutions. Daniel Sorensen holds PhD and DSc degrees from the University of Edinburgh and is an elected Fellow of the American Statistical Association. He was professor of Statistical Genetics at Aarhus University where, at present, he is professor emeritus.

Statistical Learning of Complex Data (Studies in Classification, Data Analysis, and Knowledge Organization)

by Francesca Greselin, Laura Deldossi, Luca Bagnato and Maurizio Vichi

This book of peer-reviewed contributions presents the latest findings in classification, statistical learning, data analysis and related areas, including supervised and unsupervised classification, clustering, statistical analysis of mixed-type data, big data analysis, statistical modeling, graphical models and social networks. It covers both methodological aspects and applications to a wide range of fields such as economics, architecture, medicine, data management, consumer behavior and the gender gap. In addition, it describes the basic features of the software behind the data analysis results, and provides links to the corresponding codes and data sets where necessary. This book is intended for researchers and practitioners who are interested in the latest developments and applications in the field of data analysis and classification. It gathers selected and peer-reviewed contributions presented at the 11th Scientific Meeting of the Classification and Data Analysis Group of the Italian Statistical Society (CLADAG 2017), held in Milan, Italy, on September 13–15, 2017.

Statistical Learning Theory and Stochastic Optimization: Ecole d'Eté de Probabilités de Saint-Flour XXXI - 2001 (Lecture Notes in Mathematics #1851)

by Olivier Catoni

Statistical learning theory is aimed at analyzing complex data with necessarily approximate models. This book is intended for an audience with a graduate background in probability theory and statistics. It will be useful to any reader wondering why it may be a good idea to use, as is often done in practice, a notoriously “wrong” (i.e., over-simplified) model to predict, estimate or classify. This point of view takes its roots in three fields: information theory, statistical mechanics, and PAC-Bayesian theorems. Results on the large deviations of trajectories of Markov chains with rare transitions are also included. They are meant to provide a better understanding of stochastic optimization algorithms of common use in computing estimators. The author focuses on non-asymptotic bounds of the statistical risk, allowing one to choose adaptively between rich and structured families of models and corresponding estimators. Two mathematical objects pervade the book: entropy and Gibbs measures. The goal is to show how to turn them into versatile and efficient technical tools that will stimulate further studies and results.

Statistical Learning Tools for Electricity Load Forecasting (Statistics for Industry, Technology, and Engineering)

by Jean-Michel Poggi, Anestis Antoniadis, Jairo Cugliari, Matteo Fasiolo and Yannig Goude

This monograph explores a set of statistical and machine learning tools that can be effectively utilized for applied data analysis in the context of electricity load forecasting. Drawing on their substantial research and experience with forecasting electricity demand in industrial settings, the authors guide readers through several modern forecasting methods and tools from both industrial and applied perspectives – generalized additive models (GAMs), probabilistic GAMs, functional time series and wavelets, random forests, aggregation of experts, and mixed effects models. A collection of case studies based on sizable high-resolution datasets, together with relevant R packages, then illustrate the implementation of these techniques. Five real datasets at three different levels of aggregation (nation-wide, region-wide, or individual) from four different countries (UK, France, Ireland, and the USA) are utilized to study five problems: short-term point-wise forecasting, selection of relevant variables for prediction, construction of prediction bands, peak demand prediction, and use of individual consumer data. This text is intended for practitioners, researchers, and post-graduate students working on electricity load forecasting; it may also be of interest to applied academics or scientists wanting to learn about cutting-edge forecasting tools for application in other areas. Readers are assumed to be familiar with standard statistical concepts such as random variables, probability density functions, and expected values, and to possess some minimal modeling experience.

Statistical Learning Using Neural Networks: A Guide for Statisticians and Data Scientists with Python

by Basilio de Braganca Pereira, Calyampudi Radhakrishna Rao and Fabio Borges de Oliveira

Statistical Learning using Neural Networks: A Guide for Statisticians and Data Scientists with Python introduces artificial neural networks starting from the basics and progressively demanding more effort from readers, who can learn the theory and its applications in statistical methods with concrete Python code examples. It presents a wide range of widely used statistical methodologies, applied in several research areas with Python code examples, which are available online. It is suitable for scientists and developers as well as graduate students. Key features: discusses applications in several research areas; covers a wide range of widely used statistical methodologies; includes Python code examples; and gives numerous neural network models. This book covers fundamental concepts of neural networks, including Multivariate Statistics Neural Networks, Regression Neural Network Models, Survival Analysis Networks, Time Series Forecasting Networks, Control Chart Networks, and Statistical Inference Results. This book is suitable for both teaching and research. It introduces neural networks and is a guide for those outside academia working in data mining and artificial intelligence (AI). This book brings together data analysis from statistics to computer science using neural networks.

Statistical Learning with Math and Python: 100 Exercises for Building Logic

by Joe Suzuki

The most crucial ability for machine learning and data science is the mathematical logic needed to grasp their essence, rather than knowledge and experience. This textbook approaches the essence of machine learning and data science by considering math problems and building Python programs. As the preliminary part, Chapter 1 provides a concise introduction to linear algebra, which will help novices read the following main chapters. The succeeding chapters present essential topics in statistical learning: linear regression, classification, resampling, information criteria, regularization, nonlinear regression, decision trees, support vector machines, and unsupervised learning. Each chapter mathematically formulates and solves machine learning problems and builds the programs. The body of each chapter is accompanied by proofs and programs in an appendix, with exercises at the end of the chapter. Because the book is carefully organized to provide the solutions to the exercises in each chapter, readers can solve all 100 exercises by simply following the contents of each chapter. This textbook is suitable for an undergraduate or graduate course consisting of about 12 lectures. Written in an easy-to-follow and self-contained style, this book will also be perfect material for independent learning.

Statistical Learning with Sparsity: The Lasso and Generalizations

by Trevor Hastie, Robert Tibshirani and Martin Wainwright

Discover New Methods for Dealing with High-Dimensional Data. A sparse statistical model has only a small number of nonzero parameters or weights; therefore, it is much easier to estimate and interpret than a dense model. Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data.

Statistical Literacy: A Beginner’s Guide

by Rhys Christopher Jones

In an increasingly data-centric world, we all need to know how to read and interpret statistics. But where do we begin? This book breaks statistical terms and concepts down in a clear, straightforward way. From understanding what data are telling you to exploring the value of good storytelling with numbers, it equips you with the information and skills you need to become statistically literate. It also: Dispels misconceptions about the nature of statistics to help you avoid common traps. Helps you put your learning into practice with over 60 Tasks and Develop Your Skills activities. Draws on real-world research to demonstrate the messiness of data – and show you a path through it. Approachable and down to earth, this guide is aimed at undergraduates across the social sciences, psychology, business and beyond who want to engage confidently with quantitative methods or statistics. It forms a reassuring aid for anyone looking to understand the foundations of statistics before their course advances, or as a refresher on key content.

Statistical Literacy at School: Growth and Goals (Studies in Mathematical Thinking and Learning Series)

by Jane M. Watson

This book reveals the development of students' understanding of statistical literacy. It provides a way to "see" student thinking and gives readers a deeper sense of how students think about important statistical topics. Intended as a complement to curriculum documents and textbook series, it is consistent with the current principles and standards of the National Council of Teachers of Mathematics. The term "statistical literacy" is used to emphasize that the purpose of the school curriculum should not be to turn out statisticians but to prepare statistically literate school graduates who are prepared to participate in social decision making. Based on ten years of research, with reference to other significant research as appropriate, the book looks at students' thinking in relation to tasks based on sampling, graphical representations, averages, chance, beginning inference, and variation, which are essential to later work in formal statistics. For those students who do not proceed to formal study, as well as those who do, these concepts provide a basis for decision making or questioning when presented with claims based on data in societal settings. Statistical Literacy at School: Growth and Goals: establishes an overall framework for statistical literacy in terms of both the links to specific school curricula and the wider appreciation of contexts within which chance and data-handling ideas are applied; demonstrates, within this framework, that there are many connections among specific ideas and constructs; provides tasks, adaptable for classroom or assessment use, that are appropriate for the goals of statistical literacy; presents extensive examples of student performance on the tasks, illustrating hierarchies of achievement, to assist in monitoring gains and meeting the goals of statistical literacy; and includes a summary of analysis of survey data that suggests a developmental hierarchy for students over the years of schooling with respect to the goal of statistical literacy. Statistical Literacy at School: Growth and Goals is directed to researchers, curriculum developers, professionals, and students in mathematics education, as well as those across the curriculum who are interested in students' cognitive development within the field; to teachers who want to focus on the concepts involved in statistical literacy without the use of formal statistical techniques; and to statisticians who are interested in the development of student understanding before students are exposed to the formal study of statistics.

Statistical Literacy for Clinical Practitioners

by William H. Holmes William C. Rinaman

This textbook on statistics is written for students in medicine, epidemiology, and public health. It builds on the important role evidence-based medicine now plays in the clinical practice of physicians, physician assistants and allied health practitioners. By bringing research design and statistics to the fore, this book can integrate these skills into the curricula of professional programs. Students, particularly practitioners-in-training, will learn statistical skills that are required of today’s clinicians. Practice problems at the end of each chapter and downloadable data sets provided by the authors ensure readers get practical experience that they can then apply to their own work.

Statistical Machine Learning: A Unified Framework (Chapman & Hall/CRC Texts in Statistical Science)

by Richard Golden

The recent rapid growth in the variety and complexity of new machine learning architectures requires the development of improved methods for designing, analyzing, evaluating, and communicating machine learning technologies. Statistical Machine Learning: A Unified Framework provides students, engineers, and scientists with tools from mathematical statistics and nonlinear optimization theory to become experts in the field of machine learning. In particular, the material in this text directly supports the mathematical analysis and design of old, new, and not-yet-invented nonlinear high-dimensional machine learning algorithms. Features: a unified empirical risk minimization framework that supports rigorous mathematical analyses of widely used supervised, unsupervised, and reinforcement machine learning algorithms; matrix calculus methods for supporting machine learning analysis and design applications; explicit conditions for ensuring convergence of adaptive, batch, minibatch, MCEM, and MCMC learning algorithms that minimize both unimodal and multimodal objective functions; and explicit conditions for characterizing asymptotic properties of M-estimators and model selection criteria such as AIC and BIC in the presence of possible model misspecification. This advanced text is suitable for graduate students or highly motivated undergraduate students in statistics, computer science, electrical engineering, and applied mathematics. The text is self-contained and only assumes knowledge of lower-division linear algebra and upper-division probability theory. Students, professional engineers, and multidisciplinary scientists possessing these minimal prerequisites will find this text challenging yet accessible. About the Author: Richard M. Golden (Ph.D., M.S.E.E., B.S.E.E.) is Professor of Cognitive Science and Participating Faculty Member in Electrical Engineering at the University of Texas at Dallas. Dr. Golden has published articles and given talks at scientific conferences on a wide range of topics in the fields of both statistics and machine learning over the past three decades. His long-term research interests include identifying conditions for the convergence of deterministic and stochastic machine learning algorithms and investigating estimation and inference in the presence of possibly misspecified probability models.

Statistical Matching: Theory and Practice (Wiley Series in Survey Methodology)

by Marcello D'Orazio, Marco Di Zio and Mauro Scanu

There is more statistical data produced in today’s modern society than ever before. This data is analysed and cross-referenced for innumerable reasons. However, many data sets have no shared element, which makes them harder to combine and to draw meaningful inferences from. Statistical matching allows just that; it is the art of combining information from different sources (particularly sample surveys) that contain no common unit. In response to modern influxes of data, it is an area of rapidly growing interest and complexity. Statistical Matching: Theory and Practice introduces the basics of statistical matching, before going on to offer a detailed, up-to-date overview of the methods used and an examination of their practical applications. Presents a unified framework for both theoretical and practical aspects of statistical matching. Provides a detailed description covering all the steps needed to perform statistical matching. Contains a critical overview of the available statistical matching methods. Discusses all the major issues in detail, such as the Conditional Independence Assumption and the assessment of uncertainty. Includes numerous examples and applications, enabling the reader to apply the methods in their own work. Features an appendix detailing algorithms written in the R language. Statistical Matching: Theory and Practice presents a comprehensive exploration of an increasingly important area. Ideal for researchers in national statistics institutes and applied statisticians, it will also prove to be an invaluable text for scientists and researchers from all disciplines engaged in the multivariate analysis of data collected from different sources.
