Browse Results

Showing 47,876 through 47,900 of 54,582 results

Statistical Reliability Engineering: Methods, Models and Applications (Springer Series in Reliability Engineering)

by Hoang Pham

This book presents state-of-the-art methodology and detailed analytical models and methods used to assess the reliability of complex systems and related applications in statistical reliability engineering. It is a textbook based mainly on the author’s recent research and publications, as well as his more than 30 years of experience in the field. The book covers a wide range of reliability methods and models and their applications, including: statistical methods and model selection for machine learning; models for maintenance and software reliability; statistical reliability estimation of complex systems; and statistical reliability analysis of k-out-of-n systems, standby systems and repairable systems. Offering numerous examples and solved problems within each chapter, this comprehensive text provides an introduction for reliability engineering graduate students, a reference for data scientists and reliability engineers, and a thorough guide for researchers and instructors in the field.

Statistical Remedies for Medical Researchers (Springer Series in Pharmaceutical Statistics)

by Peter F. Thall

This book illustrates numerous statistical practices that are commonly used by medical researchers but have severe flaws that may not be obvious. For each example, it provides one or more alternative statistical methods that avoid misleading or incorrect inferences. The technical level is kept to a minimum to make the book accessible to non-statisticians. At the same time, since many of the examples describe methods used routinely by medical statisticians with formal statistical training, the book appeals to a broad readership in the medical research community.

Statistical Research Methods: A Guide for Non-Statisticians

by Roy Sabo and Edward Boone

This textbook will help graduate students in non-statistics disciplines, advanced undergraduate researchers, and research faculty in the health sciences to learn, use and communicate results from many commonly used statistical methods. The material covered, and the manner in which it is presented, describe the entire data analysis process from hypothesis generation to writing the results in a manuscript. Chapters cover, among other topics: one- and two-sample proportions, multi-category data, one- and two-sample means, analysis of variance, and regression. Throughout the text, the authors explain statistical procedures and concepts using non-statistical language. This accessible approach is complete with real-world examples and sample write-ups for the Methods and Results sections of scholarly papers. The text also allows for the concurrent use of the programming language R, an open-source program created, maintained and updated by the statistical community. R is freely available and easy to download.

Statistical Rethinking: A Bayesian Course with Examples in R and Stan (Chapman & Hall/CRC Texts in Statistical Science)

by Richard McElreath

Statistical Rethinking: A Bayesian Course with Examples in R and Stan builds readers’ knowledge of and confidence in statistical modeling. Reflecting the need for even minor programming in today’s model-based statistics, the book pushes readers to perform step-by-step calculations that are usually automated. This unique computational approach ensures that readers understand enough of the details to make reasonable choices and interpretations in their own modeling work. The text presents generalized linear multilevel models from a Bayesian perspective, relying on a simple logical interpretation of Bayesian probability and maximum entropy. Coverage ranges from the basics of regression to multilevel models. The author also discusses measurement error, missing data, and Gaussian process models for spatial and network autocorrelation. By using complete R code examples throughout, this book provides a practical foundation for performing statistical inference. Designed for both PhD students and seasoned professionals in the natural and social sciences, it prepares them for more advanced or specialized statistical modeling. Web Resource: The book is accompanied by an R package (rethinking) that is available on the author’s website and GitHub. The two core functions (map and map2stan) of this package allow a variety of statistical models to be constructed from standard model formulas.

Statistical Rethinking: A Bayesian Course with Examples in R and Stan (Chapman & Hall/CRC Texts in Statistical Science #122)

by Richard McElreath

Statistical Rethinking: A Bayesian Course with Examples in R and Stan builds readers’ knowledge of and confidence in statistical modeling. Reflecting the need for even minor programming in today’s model-based statistics, the book pushes readers to perform step-by-step calculations that are usually automated. This unique computational approach ensures that readers understand enough of the details to make reasonable choices and interpretations in their own modeling work. The text presents generalized linear multilevel models from a Bayesian perspective, relying on a simple logical interpretation of Bayesian probability and maximum entropy. Coverage ranges from the basics of regression to multilevel models. The author also discusses measurement error, missing data, and Gaussian process models for spatial and network autocorrelation. By using complete R code examples throughout, this book provides a practical foundation for performing statistical inference. Designed for both PhD students and seasoned professionals in the natural and social sciences, it prepares them for more advanced or specialized statistical modeling. Web Resource: The book is accompanied by an R package (rethinking) that is available on the author’s website and GitHub. The two core functions (map and map2stan) of this package allow a variety of statistical models to be constructed from standard model formulas.

Statistical Rethinking: A Bayesian Course with Examples in R and STAN (Chapman & Hall/CRC Texts in Statistical Science)

by Richard McElreath

Statistical Rethinking: A Bayesian Course with Examples in R and Stan builds your knowledge of and confidence in making inferences from data. Reflecting the need for scripting in today's model-based statistics, the book pushes you to perform step-by-step calculations that are usually automated. This unique computational approach ensures that you understand enough of the details to make reasonable choices and interpretations in your own modeling work. The text presents causal inference and generalized linear multilevel models from a simple Bayesian perspective that builds on information theory and maximum entropy. The core material ranges from the basics of regression to advanced multilevel models. It also presents measurement error, missing data, and Gaussian process models for spatial and phylogenetic confounding. The second edition emphasizes the directed acyclic graph (DAG) approach to causal inference, integrating DAGs into many examples. The new edition also contains new material on the design of prior distributions, splines, ordered categorical predictors, social relations models, cross-validation, importance sampling, instrumental variables, and Hamiltonian Monte Carlo. It ends with an entirely new chapter that goes beyond generalized linear modeling, showing how domain-specific scientific models can be built into statistical analyses. Features: • Integrates working code into the main text. • Illustrates concepts through worked data analysis examples. • Emphasizes understanding assumptions and how assumptions are reflected in code. • Offers more detailed explanations of the mathematics in optional sections. • Presents examples of using the dagitty R package to analyze causal graphs. • Provides the rethinking R package on the author's website and on GitHub.

Statistical Robust Design: An Industrial Perspective

by Magnus Arner

A UNIQUELY PRACTICAL APPROACH TO ROBUST DESIGN FROM A STATISTICAL AND ENGINEERING PERSPECTIVE Variation in environment, usage conditions, and the manufacturing process has long presented a challenge in product engineering, and reducing variation is universally recognized as a key to improving reliability and productivity. One key and cost-effective way to achieve this is by robust design – making the product as insensitive as possible to variation. With Design for Six Sigma training programs primarily in mind, the author of this book offers practical examples that will help to guide product engineers through every stage of experimental design: formulating problems, planning experiments, and analysing data. He discusses both physical and virtual techniques, and includes numerous exercises and solutions that make the book an ideal resource for teaching or self-study. • Presents a practical approach to robust design through design of experiments. • Offers a balance between statistical and industrial aspects of robust design. • Includes practical exercises, making the book useful for teaching. • Covers both physical and virtual approaches to robust design. • Supported by an accompanying website (www.wiley.com/go/robust) featuring MATLAB® scripts and solutions to exercises. • Written by an experienced industrial design practitioner. This book’s state-of-the-art perspective will be of benefit to practitioners of robust design in industry, consultants providing training in Design for Six Sigma, and quality engineers. It will also be a valuable resource for specialized university courses in statistics or quality engineering.

Statistical Rules of Thumb (Wiley Series in Probability and Statistics #699)

by Gerald van Belle

Praise for the First Edition: "For a beginner [this book] is a treasure trove; for an experienced person it can provide new ideas on how better to pursue the subject of applied statistics." —Journal of Quality Technology Sensibly organized for quick reference, Statistical Rules of Thumb, Second Edition compiles simple rules that are widely applicable, robust, and elegant, each of which captures a key statistical concept. This unique guide to the use of statistics for designing, conducting, and analyzing research studies illustrates real-world statistical applications through examples from fields such as public health and environmental studies. Along with an insightful discussion of the reasoning behind every technique, this easy-to-use handbook also conveys the various possibilities statisticians must consider when designing and conducting a study or analyzing its data. Each chapter presents clearly defined rules related to inference, covariation, experimental design, consultation, and data representation, and each rule is organized and discussed under five succinct headings: introduction; statement and illustration of the rule; the derivation of the rule; a concluding discussion; and exploration of the concept's extensions. The author also introduces new rules of thumb for topics such as sample size for ratio analysis, absolute and relative risk, ANCOVA cautions, and dichotomization of continuous variables. Additional features of the Second Edition include: Additional rules on Bayesian topics New chapters on observational studies and Evidence-Based Medicine (EBM) Additional emphasis on variation and causation Updated material with new references, examples, and sources A related website, www.vanbelle.org, provides a rich learning environment containing additional rules, presentations by the author, and a message board where readers can share their own strategies and discoveries.
Statistical Rules of Thumb, Second Edition is an ideal supplementary book for courses in experimental design and survey research methods at the upper-undergraduate and graduate levels. It also serves as an indispensable reference for statisticians, researchers, consultants, and scientists who would like to develop an understanding of the statistical foundations of their research efforts.

Statistical Science in the Courtroom (Statistics for Social and Behavioral Sciences)

by Joseph L. Gastwirth

Expert testimony relying on scientific and other specialized evidence has come under increased scrutiny by the legal system. A trilogy of recent U.S. Supreme Court cases has assigned judges the task of assessing the relevance and reliability of proposed expert testimony. In conjunction with the Federal judiciary, the American Association for the Advancement of Science has initiated a project to provide judges who indicate a need with their own expert. This concern with the proper interpretation of scientific evidence, especially that of a probabilistic nature, has also arisen in England, Australia and several European countries. Statistical Science in the Courtroom is a collection of articles written by statisticians and legal scholars who have been concerned with problems arising in the use of statistical evidence. A number of articles describe DNA evidence and the difficulties of properly calculating the probability that a random individual's profile would "match" that of the evidence, as well as the proper way to interpret the result. In addition to the technical issues, several authors tell about their experiences in court. A few have become disenchanted with their involvement and describe the events that led them to devote less time to this application. Other articles describe the role of statistical evidence in cases concerning discrimination against minorities, product liability, environmental regulation, and the appropriateness and fairness of sentences, and how being involved in legal statistics has raised interesting statistical problems requiring further research.

Statistical Shape Analysis: With Applications in R (Wiley Series in Probability and Statistics #995)

by Ian L. Dryden Kanti V. Mardia

A thoroughly revised and updated edition of this introduction to modern statistical methods for shape analysis. Shape analysis is an important tool in the many disciplines where objects are compared using geometrical features. Examples include comparing brain shape in schizophrenia; investigating protein molecules in bioinformatics; and describing growth of organisms in biology. This book is a significant update of the highly regarded 'Statistical Shape Analysis' by the same authors. The new edition lays the foundations of landmark shape analysis, including geometrical concepts and statistical techniques, and extends to include analysis of curves, surfaces, images and other types of object data. Key definitions and concepts are discussed throughout, and the relative merits of different approaches are presented. The authors have included substantial new material on recent statistical developments and offer numerous examples throughout the text. Concepts are introduced in an accessible manner, while retaining sufficient detail for more specialist statisticians to appreciate the challenges and opportunities of this new field. Computer code has been included for instructional use, along with exercises to enable readers to implement the applications themselves in R and to follow the key ideas by hands-on analysis. Statistical Shape Analysis: with Applications in R will offer a valuable introduction to this fast-moving research area for statisticians and other applied scientists working in diverse areas, including archaeology, bioinformatics, biology, chemistry, computer science, medicine, morphometrics and image analysis.

Statistical Signal Processing: Modelling and Estimation (Advanced Textbooks in Control and Signal Processing)

by T. Chonavel

The only book on the subject at this level, this is a well-written, formalised and concise presentation of the basics of statistical signal processing. It teaches a wide variety of techniques, demonstrating how they can be applied to many different situations.

Statistical Signal Processing: Frequency Estimation (SpringerBriefs in Statistics)

by Debasis Kundu and Swagata Nandi

Signal processing may broadly be considered to involve the recovery of information from physical observations. The received signal is usually disturbed by thermal, electrical, atmospheric or intentional interferences. Due to the random nature of the signal, statistical techniques play an important role in analyzing it. Statistics is also used in the formulation of appropriate models to describe the behavior of the system, the development of appropriate techniques for estimation of model parameters, and the assessment of model performance. Statistical signal processing basically refers to the analysis of random signals using appropriate statistical techniques. The main aim of this book is to introduce different signal processing models which have been used in analyzing periodic data, and the different statistical and computational issues involved in solving them. We discuss in detail the sinusoidal frequency model, which has been used extensively in analyzing periodic data occurring in various fields. We also introduce related models and higher-dimensional statistical signal processing models which have been discussed further in the literature. Different real data sets have been analyzed to illustrate how different models can be used in practice. Several open problems have been indicated for future research.

Statistical Signal Processing: Frequency Estimation (SpringerBriefs in Statistics)

by Swagata Nandi and Debasis Kundu

This book introduces readers to various signal processing models that have been used in analyzing periodic data, and discusses the statistical and computational methods involved. Signal processing can broadly be considered to be the recovery of information from physical observations. The received signals are usually disturbed by thermal, electrical, atmospheric or intentional interferences, and due to their random nature, statistical techniques play an important role in their analysis. Statistics is also used in the formulation of appropriate models to describe the behavior of systems, the development of appropriate techniques for estimation of model parameters and the assessment of the model performances. Analyzing different real-world data sets to illustrate how different models can be used in practice, and highlighting open problems for future research, the book is a valuable resource for senior undergraduate and graduate students specializing in mathematics or statistics.

Statistical Size Distributions in Economics and Actuarial Sciences (Wiley Series in Probability and Statistics #470)

by Christian Kleiber and Samuel Kotz

A comprehensive account of economic size distributions around the world and throughout the years In the course of the past 100 years, economists and applied statisticians have developed a remarkably diverse variety of income distribution models, yet no single resource convincingly accounts for all of these models, analyzing their strengths and weaknesses, similarities and differences. Statistical Size Distributions in Economics and Actuarial Sciences is the first collection to systematically investigate a wide variety of parametric models that deal with income, wealth, and related notions. Christian Kleiber and Samuel Kotz survey, complement, compare, and unify all of the disparate models of income distribution, highlighting at times a lack of coordination between them that can result in unnecessary duplication. Considering models from eight languages and all continents, the authors discuss the social and economic implications of each as well as distributions of size of loss in actuarial applications. Specific models covered include: Pareto distributions; lognormal distributions; gamma-type size distributions; beta-type size distributions; and miscellaneous size distributions. Three appendices provide brief biographies of some of the leading players along with the basic properties of each of the distributions. Actuaries, economists, market researchers, social scientists, and physicists interested in econophysics will find Statistical Size Distributions in Economics and Actuarial Sciences to be a truly one-of-a-kind addition to the professional literature.

The Statistical Stability Phenomenon (Mathematical Engineering)

by Igor I. Gorban

This monograph investigates violations of statistical stability of physical events, variables, and processes and develops a new physical-mathematical theory taking into consideration such violations – the theory of hyper-random phenomena. There are five parts. The first describes the phenomenon of statistical stability and its features, and develops methods for detecting violations of statistical stability, in particular when data is limited. The second part presents several examples of real processes of different physical nature and demonstrates the violation of statistical stability over broad observation intervals. The third part outlines the mathematical foundations of the theory of hyper-random phenomena, while the fourth develops the foundations of the mathematical analysis of divergent and many-valued functions. The fifth part contains theoretical and experimental studies of statistical laws where there is violation of statistical stability. The monograph should be of particular interest to engineers and scientists in general who study the phenomenon of statistical stability and use statistical methods for high-precision measurements, prediction, and signal processing over long observation intervals.

Statistical Structure of Quantum Theory (Lecture Notes in Physics Monographs #67)

by Alexander S. Holevo

New ideas on the mathematical foundations of quantum mechanics, related to the theory of quantum measurement, as well as the emergence of quantum optics, quantum electronics and optical communications have shown that the statistical structure of quantum mechanics deserves special investigation. In the meantime it has become a mature subject. In this book, the author, himself a leading researcher in this field, surveys the basic principles and results of the theory, concentrating on mathematically precise formulations. Special attention is given to the measurement dynamics. The presentation is pragmatic, concentrating on the ideas and their motivation. For detailed proofs, the readers, researchers and graduate students, are referred to the extensively documented literature.

Statistical Studies of Income, Poverty and Inequality in Europe: Computing and Graphics in R using EU-SILC

by Nicholas T. Longford

There is no shortage of incentives to study and reduce poverty in our societies. Poverty is studied in economics and political science, and population surveys are an important source of information about it. The design and analysis of such surveys is principally a statistical subject matter, and the computer is essential for their data compilation.

Statistical Tables: for Science, Engineering, Management and Business Studies

by J. Murdoch and J. A. Barnes

Within the third edition, the number of tables has been extended from 35 to 41, and several other features have been added, including worked examples of all six new tables and useful formulae. New tables have been included on the basis that they are less commonly available yet have been found to be useful in practical work; they include Tukey's Wholly Significant Difference and a new section on Non-Parametric Tables. An American version of Control Chart limits has also been included for the US market. Worked examples of the use of the new tables have been included at the end of the book.

Statistical Tables and Formulae (Springer Texts in Statistics)

by Stephen Kokoska and Christopher Nevison

All students and professionals in statistics should refer to this volume, as it is a handy reference source for statistical formulas and information on basic probability distributions. It contains carefully designed and well laid out tables for standard statistical distributions (including Binomial, Poisson, Normal, and Chi-squared). In addition, there are several tables of critical values for various statistical tests.

Statistical Techniques for Neuroscientists (Foundations and Innovations in Neurobiology)

by Young K. Truong and Mechelle M. Lewis

Statistical Techniques for Neuroscientists introduces new and useful methods for data analysis involving simultaneous recording of neuron or large cluster (brain region) neuron activity. The statistical estimation and tests of hypotheses are based on the likelihood principle derived from stationary point processes and time series. Algorithms and software development are given in each chapter to reproduce the computer simulated results described therein. The book examines current statistical methods for solving emerging problems in neuroscience. These methods have been applied to data involving multichannel neural spike trains, spike sorting, blind source separation, functional and effective neural connectivity, spatiotemporal modeling, and multimodal neuroimaging techniques. The authors provide an overview of various methods being applied to specific research areas of neuroscience, emphasizing statistical principles and their software. The book includes examples and experimental data so that readers can understand the principles and master the methods. The first part of the book deals with the traditional multivariate time series analysis applied to the context of multichannel spike trains and fMRI, using respectively the probability structures or likelihood associated with time-to-fire and discrete Fourier transforms (DFT) of point processes. The second part introduces a relatively new form of statistical spatiotemporal modeling for fMRI and EEG data analysis. In addition to neuroscientists and statisticians, anyone wishing to employ intense computing methods to extract important features and information directly from data, rather than relying heavily on models built on leading cases such as linear regression or Gaussian processes, will find this book extremely helpful.
