Browse Results

Showing 7,601 through 7,625 of 55,511 results

Stochastic Communities: A Mathematical Theory of Biodiversity

by A. K. Dewdney

Stochastic Communities presents a theory of biodiversity by analyzing the distribution of abundances among species in the context of a community. The basis of this theory is a distribution called the "J distribution." This distribution is a pure hyperbola and is mathematically implied by the "stochastic species hypothesis," which assigns equal probabilities of birth and death within the population of each species over varying periods of time. The J distribution has strong empirical support in natural communities, resulting from a meta-study, and strong theoretical support from a theorem derived from the stochastic species hypothesis.
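
The birth-and-death mechanism described above lends itself to a quick numerical illustration. The Python sketch below is not from the book: it simulates each species' abundance as a random walk with equal birth and death probabilities, under the simplifying assumption (mine, for illustration only) that abundance never falls below one individual, and then tabulates the abundances. The result is a right-skewed pattern with more species at low abundances than at high ones; making that shape precisely hyperbolic is the subject of the book's theorem, not of this toy simulation.

```python
import random
from collections import Counter

def simulate_community(n_species=2000, steps=2000, seed=1):
    """Each species' abundance takes a +1 (birth) or -1 (death) step with
    equal probability; abundance is kept at or above one individual
    (an illustrative simplification, not the book's exact model)."""
    random.seed(seed)
    abundances = [1] * n_species
    for _ in range(steps):
        for i, n in enumerate(abundances):
            step = 1 if random.random() < 0.5 else -1
            abundances[i] = max(1, n + step)
    return abundances

if __name__ == "__main__":
    histogram = Counter(simulate_community())   # species count per abundance value
    for abundance in sorted(histogram)[:10]:
        print(f"abundance {abundance:3d}: {histogram[abundance]} species")
```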

Advances in Algebra: SRAC 2017, Mobile, Alabama, USA, March 17-19 (Springer Proceedings in Mathematics & Statistics #277)

by Jörg Feldvoss, Lauren Grimley, Drew Lewis, Andrei Pavelescu, and Cornelius Pillen

This proceedings volume covers a range of research topics in algebra from the Southern Regional Algebra Conference (SRAC) that took place in March 2017. Presenting theory as well as computational methods, featured survey articles and research papers focus on ongoing research in algebraic geometry, ring theory, group theory, and associative algebras. Topics include algebraic groups, combinatorial commutative algebra, computational methods for representations of groups and algebras, group theory, Hopf-Galois theory, hypergroups, Lie superalgebras, matrix analysis, spherical and algebraic spaces, and tropical algebraic geometry. Since 1988, SRAC has been an important event for the algebra research community in the Gulf Coast Region and surrounding states, building a strong network of algebraists that fosters collaboration in research and education. This volume is suitable for graduate students and researchers interested in recent findings in computational and theoretical methods in algebra and representation theory.

Implementing a Standards-Based Curriculum in the Early Childhood Classroom

by Lora Battle Bailey

Implementing a Standards-Based Curriculum in the Early Childhood Classroom demonstrates how pre-service and in-service teachers can develop mathematics, language arts, and integrated curricula suitable for equipping young children with the knowledge, dispositions, and skills needed to operate successfully as 21st century learners. Chapters promote family-school partnerships, and each content-area chapter (mathematics, language arts, and integrated curriculum) demonstrates assessment practices proven to be effective for detecting the impact of specific early childhood teaching methods on student learning.

The Analytical Foundations of Loop Antennas and Nano-Scaled Rings (Signals and Communication Technology)

by Arnold McKinley

This book develops the analytical theory of perfectly conducting and lossy metal, circular, round-wire loop antennas and nano-scaled rings from the radio frequency (RF) regime through infrared and the optical region, from an antenna theory perspective. For the first time, all of the historical material found in the literature appears in one place, including material that has appeared only in the last decade and some new material that has not yet been published. The book derives the input impedance, resonances and anti-resonances, the RLC circuit model representation, and radiation patterns not only of closed loops and rings, but also of loops and rings loaded randomly and multiply with resistive and reactive impedances. Every derivation is compared with simulations run in Microwave Studio (MWS). The book looks carefully at the physical response of loop antennas and nano-rings coupled to a source at one point on the periphery, and at such rings illuminated by a plane wave arriving from any direction with the E-field in all polarizations. It ends with a brief look at polygonal loops, two-dimensional arrays of nano-rings, and Yagi-Uda arrays.

A Practical Guide to Managing Clinical Trials

by JoAnn Pfeiffer and Cris Wells

A Practical Guide to Managing Clinical Trials is a basic, comprehensive guide to conducting clinical trials. Designed for individuals working in research site operations, this user-friendly reference guides the reader through each step of the clinical trial process, from site selection and set-up through subject recruitment and study visits to study close-out. Topics include staff roles, responsibilities, and training; budget and contract review and management; subject study visits; data and document management; event reporting; research ethics; audits and inspections; consent processes; IRBs; FDA regulations; and good clinical practices. Each chapter concludes with a review of key points and knowledge application. Unique to this book is "A View from India," a chapter-by-chapter comparison of clinical trial practices in India versus the U.S. Throughout the book, and particularly in Chapter 10, readers will glimpse some of the challenges and opportunities in the emerging and growing market of Indian clinical trials.

Developing Young Children’s Mathematical Learning Outdoors: Linking Pedagogy and Practice

by Lynda Keith

Developing Young Children’s Mathematical Learning Outdoors provides detailed guidance and practical advice on planning mathematical experiences for young children outdoors. By examining the key features of a mathematically rich outdoor environment, it illustrates how this can motivate children in leading their own learning and mathematical thinking. Drawing upon the author’s wealth of experience, the book provides support for students and early years practitioners in developing a deeper understanding of how to plan quality experiences, which combine pedagogy with effective practice. Covering all aspects of mathematics, it identifies meaningful contexts and shows how adults can use open-ended questions and prompts to promote children’s mathematical play outside. With rich case studies and reflective questions included throughout, as well as suggestions for useful resources to put the ideas in the book into practice, it is essential reading for all those who want to develop curious and creative mathematical thinkers in the early years.

Exploratory Multivariate Analysis by Example Using R (Chapman & Hall/CRC Computer Science & Data Analysis)

by François Husson, Sébastien Lê, and Jérôme Pagès

Full of real-world case studies and practical advice, Exploratory Multivariate Analysis by Example Using R, Second Edition focuses on four fundamental methods of multivariate exploratory data analysis that are most suitable for applications. It covers principal component analysis (PCA) when variables are quantitative, correspondence analysis (CA) and multiple correspondence analysis (MCA) when variables are qualitative, and hierarchical cluster analysis.
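
As a quick illustration of the first of those methods, the sketch below computes a principal component analysis by eigendecomposition of the correlation matrix of a small synthetic quantitative data set. It is a generic Python analogue for orientation only; the book itself works in R, and the data here are made up.

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via eigendecomposition of the correlation matrix
    (variables standardized, as is usual when they are measured
    on different scales)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize columns
    corr = np.cov(Z, rowvar=False)                      # correlation matrix of X
    eigvals, eigvecs = np.linalg.eigh(corr)             # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]                   # reorder, largest first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    scores = Z @ eigvecs[:, :n_components]              # individual coordinates
    explained = eigvals / eigvals.sum()                 # variance share per axis
    return scores, explained

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 4))                        # 50 individuals, 4 variables
    scores, explained = pca(X)
    print("variance explained by first two axes:", explained[:2].round(3))
```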

Disk-Based Algorithms for Big Data

by Christopher G. Healey

Disk-Based Algorithms for Big Data is a product of recent advances in the areas of big data, data analytics, and the underlying file systems and data management algorithms used to support the storage and analysis of massive data collections. The book discusses hard disks and their impact on data management, since hard disk drives continue to be common in large data clusters. It also explores ways to store and retrieve data through primary and secondary indices. This includes a review of different in-memory sorting and searching algorithms that build a foundation for more sophisticated on-disk approaches like mergesort, B-trees, and extendible hashing. Following this introduction, the book transitions to more recent topics, including advanced storage technologies like solid-state drives and holographic storage; peer-to-peer (P2P) communication; large file systems and query languages like Hadoop/HDFS, Hive, Cassandra, and Presto; and NoSQL databases like Neo4j for graph structures and MongoDB for unstructured document data. Designed for senior undergraduate and graduate students, as well as professionals, this book is useful for anyone interested in understanding the foundations and advances in big data storage and management, and big data analytics. About the Author: Dr. Christopher G. Healey is a tenured Professor in the Department of Computer Science and the Goodnight Distinguished Professor of Analytics in the Institute for Advanced Analytics, both at North Carolina State University in Raleigh, North Carolina. He has published over 50 articles in major journals and conferences in the areas of visualization, visual and data analytics, computer graphics, and artificial intelligence. He is a recipient of the National Science Foundation’s CAREER Early Faculty Development Award and the North Carolina State University Outstanding Instructor Award. He is a Senior Member of the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE), and an Associate Editor of ACM Transactions on Applied Perception, the leading worldwide journal on the application of human perception to issues in computer science.
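
One of the on-disk techniques mentioned above, external mergesort, can be sketched briefly. The Python example below is illustrative only and not from the book; the file paths and run size are hypothetical. It sorts a file of integers that is too large for memory by writing sorted runs to temporary files and then performing a k-way merge.

```python
import heapq
import os
import tempfile

def external_sort(input_path, output_path, run_size=100_000):
    """Sort a large text file (one integer per line) with bounded memory:
    1) read chunks of run_size lines, sort each in memory, write sorted runs;
    2) k-way merge the runs with heapq.merge, streaming to the output file."""
    run_paths = []
    with open(input_path) as src:
        while True:
            run = [int(line) for _, line in zip(range(run_size), src)]
            if not run:
                break
            run.sort()
            fd, path = tempfile.mkstemp(text=True)
            with os.fdopen(fd, "w") as out:
                out.writelines(f"{x}\n" for x in run)
            run_paths.append(path)

    def read_run(path):
        with open(path) as f:
            for line in f:
                yield int(line)

    with open(output_path, "w") as out:
        for x in heapq.merge(*(read_run(p) for p in run_paths)):
            out.write(f"{x}\n")
    for p in run_paths:
        os.remove(p)
```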

Randomization, Masking, and Allocation Concealment (Chapman & Hall/CRC Biostatistics Series)

by Vance Berger

Randomization, Masking, and Allocation Concealment is indispensable for any trial researcher who wants to use state-of-the-art randomization methods and to describe those methods correctly. Far too often the subtle nuances that distinguish proper randomization from flawed randomization are completely ignored in trial reports that state only that randomization was used, with no additional information. Experience has shown that in many cases the type of randomization that was used was flawed. It is only a matter of time before medical journals and regulatory agencies come to realize that we can no longer rely on (or publish) flawed trials, and that flawed randomization in and of itself disqualifies a trial from being considered robust or high quality, even if the trial is otherwise well conducted. This book will help to clarify the role randomization plays in ensuring internal validity and in drawing valid inferences from the data. The various chapters cover a variety of randomization methods and are not limited to the most common (and most flawed) ones. Readers will come away with a profound understanding of what constitutes a valid randomization procedure, so that they can distinguish the valid from the flawed among not only existing methods but also methods yet to be developed.

Programming Language Explorations

by Ray Toal, Rachel Rivera, Alexander Schneider, and Eileen Choe

Programming Language Explorations is a tour of several modern programming languages in use today. The book teaches fundamental language concepts using a language-by-language approach. As each language is presented, the authors introduce new concepts as they appear, and revisit familiar ones, comparing their implementation with those from languages seen in prior chapters. The goal is to present and explain common theoretical concepts of language design and usage, illustrated in the context of practical language overviews. Twelve languages have been carefully chosen to illustrate a wide range of programming styles and paradigms. The book introduces each language with a common trio of example programs, and continues with a brief tour of its basic elements, type system, functional forms, scoping rules, concurrency patterns, and sometimes, metaprogramming facilities. Each language chapter ends with a summary, pointers to open source projects, references to materials for further study, and a collection of exercises, designed as further explorations. Following the twelve featured language chapters, the authors provide a brief tour of over two dozen additional languages, and a summary chapter bringing together many of the questions explored throughout the text. Targeted to both professionals and advanced college undergraduates looking to expand the range of languages and programming patterns they can apply in their work and studies, the book pays attention to modern programming practice, covers cutting-edge languages and patterns, and provides many runnable examples, all of which can be found in an online GitHub repository. The exploration style places this book between a tutorial and a reference, with a focus on the concepts and practices underlying programming language design and usage. Instructors looking for material to supplement a programming languages or software engineering course may find the approach unconventional, but hopefully, a lot more fun.

Quantifying Software: Global and Industry Perspectives

by Capers Jones

Software is one of the most important products in human history and is widely used by all industries and all countries. It is also one of the most expensive and labor-intensive products in human history. Software quality is also often very poor, which has caused many major disasters and wasted many millions of dollars. Software is also the target of frequent and increasingly serious cyber-attacks. Among the reasons for these software problems is a chronic lack of reliable quantified data. This reference provides quantified data from many countries and many industries based on about 26,000 projects developed using a variety of methodologies and team experience levels. The data were gathered between 1970 and 2017, so interesting historical trends are available. Since current average software productivity and quality results are suboptimal, this book focuses on "best in class" results and shows not only quantified quality and productivity data from best-in-class organizations, but also the technology stacks used to achieve best-in-class results. The overall goal of this book is to encourage the adoption of best-in-class software metrics and best-in-class technology stacks. It does so by providing current data on average software schedules, effort, costs, and quality for several industries and countries. Because productivity and quality vary by technology and size, the book presents quantitative results for applications between 100 function points and 100,000 function points. It shows quality results using defect potential and defect removal efficiency (DRE) metrics, because the number one cost driver for software is finding and fixing bugs. The book presents data on cost of quality for software projects and discusses technical debt, noting that the latter metric is not standardized. Finally, the book includes some data on three years of software maintenance and enhancements as well as some data on total cost of ownership.
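
For readers unfamiliar with DRE (defect removal efficiency), the small sketch below shows the usual calculation: defects removed before release divided by total defects eventually found (those removed before release plus those reported afterward). The numbers are illustrative only, not taken from the book.

```python
def defect_removal_efficiency(defects_removed_pre_release, defects_found_post_release):
    """DRE = defects removed before release / total defects eventually found."""
    total = defects_removed_pre_release + defects_found_post_release
    return defects_removed_pre_release / total

# Illustrative numbers only: 950 defects removed in development and test,
# 50 more reported by users in the first period after release.
print(f"DRE = {defect_removal_efficiency(950, 50):.1%}")   # -> 95.0%
```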

The Fractional Laplacian

by C. Pozrikidis

The fractional Laplacian, also called the Riesz fractional derivative, describes an unusual diffusion process associated with random excursions. The Fractional Laplacian explores applications of the fractional Laplacian in science, engineering, and other areas where long-range interactions and conceptual or physical particle jumps resulting in an irregular diffusive or conductive flux are encountered. The book presents the material at a level suitable for a broad audience of scientists and engineers with a rudimentary background in ordinary differential equations and integral calculus; clarifies the concept of the fractional Laplacian for functions in one, two, three, or an arbitrary number of dimensions, defined over the entire space, satisfying periodicity conditions, or restricted to a finite domain; covers physical and mathematical concepts as well as detailed mathematical derivations; develops a numerical framework for solving differential equations involving the fractional Laplacian, presenting specific algorithms accompanied by numerical results in one, two, and three dimensions; and discusses viscous flow and physical examples from scientific and engineering disciplines. Written by a prolific author well known for his contributions in fluid mechanics, biomechanics, applied mathematics, scientific computing, and computer science, the book emphasizes fundamental ideas and practical numerical computation. It includes original material and novel numerical methods.
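
For orientation, the fractional Laplacian mentioned above is commonly defined through the Fourier transform, or equivalently (up to a normalizing constant) as a singular integral. The display below states these standard textbook definitions; they are given here for context and are not quoted from the book.

```latex
% Fourier-multiplier definition of the fractional Laplacian of order \alpha \in (0,2):
(-\Delta)^{\alpha/2} f(x) \;=\; \mathcal{F}^{-1}\!\left[\, |\xi|^{\alpha} \, \hat{f}(\xi) \,\right](x)

% Equivalent singular-integral (hypersingular) form in n dimensions,
% with a normalizing constant c_{n,\alpha}:
(-\Delta)^{\alpha/2} f(x) \;=\; c_{n,\alpha}\, \mathrm{p.v.}\!\int_{\mathbb{R}^{n}}
    \frac{f(x) - f(y)}{|x - y|^{\,n+\alpha}} \,\mathrm{d}y
```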

Environmental Systems Analysis with MATLAB®

by Stefano Marsili-Libelli

Explore the inner workings of environmental processes using a mathematical approach. Environmental Systems Analysis with MATLAB® combines environmental science concepts and system theory with numerical techniques to provide a better understanding of how our environment works. The book focuses on building mathematical models of environmental systems and using these models to analyze their behaviors. Designed with the environmental professional in mind, it offers a practical introduction to developing the skills required for managing environmental modeling and data handling. The book follows a logical sequence from the basic steps of model building and data analysis to implementing these concepts into working computer code, and then on to assessing the results. It describes data processing (rarely considered in environmental analysis), outlines the tools needed to successfully analyze data and develop models, and moves on to real-world problems. The first four chapters illustrate the methodological aspects of environmental systems analysis, and subsequent chapters apply them to specific environmental concerns. The accompanying software bundle is freely downloadable from the book's web site. It follows the chapter sequence and provides a hands-on experience, allowing the reader to reproduce the figures in the text and experiment by varying the problem setting. Basic MATLAB literacy is required to get the most out of the software. Ideal for coursework and self-study, this offering deals with the basic concepts of environmental modeling and identification, from both the mechanistic and the data-driven viewpoints; provides a unifying methodological approach to specific aspects of environmental modeling, including population dynamics, flow systems, and environmental microbiology; assesses the similarities and differences of microbial processes in natural and man-made environments; analyzes several aquatic ecosystem case studies; presents an application of an extended Streeter & Phelps (S&P) model; describes an ecological method to estimate the bioavailable nutrients in natural waters; considers a lagoon ecosystem from several viewpoints, including modeling and management; and more.
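
The Streeter & Phelps model referenced above describes the dissolved-oxygen deficit downstream of an organic pollution discharge. The classical form is shown below for context; the book applies an extended version, so these equations are only the standard starting point, not the book's model.

```latex
% Classical Streeter-Phelps model: BOD decay L(t) and dissolved-oxygen deficit D(t),
% with deoxygenation rate k_d and reaeration rate k_a (k_a \neq k_d):
\frac{\mathrm{d}L}{\mathrm{d}t} = -k_d L, \qquad
\frac{\mathrm{d}D}{\mathrm{d}t} = k_d L - k_a D

% Solution for initial BOD L_0 and initial deficit D_0:
D(t) = \frac{k_d L_0}{k_a - k_d}\left(e^{-k_d t} - e^{-k_a t}\right) + D_0\, e^{-k_a t}
```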

Introductory Fisheries Analyses with R (Chapman & Hall/CRC The R Series)

by Derek H. Ogle

A How-To Guide for Conducting Common Fisheries-Related Analyses in R. Introductory Fisheries Analyses with R provides detailed instructions on performing basic fisheries stock assessment analyses in the R environment. Accessible to practicing fisheries scientists as well as advanced undergraduate and graduate students, the book demonstrates the flexibility and power of R, offers insight into the reproducibility of script-based analyses, and shows how the use of R leads to more efficient and productive work in fisheries science. The first three chapters present a minimal introduction to the R environment that builds a foundation for the fisheries-specific analyses in the remainder of the book. These chapters help you become familiar with R for basic fisheries analyses and graphics. Subsequent chapters focus on methods to analyze age comparisons, age-length keys, size structure, weight-length relationships, condition, abundance (from capture-recapture and depletion data), mortality rates, individual growth, and the stock-recruit relationship. The fundamental statistical methods of linear regression, analysis of variance (ANOVA), and nonlinear regression are demonstrated within the contexts of these common fisheries analyses. For each analysis, the author completely explains the R functions and provides sufficient background information so that you can confidently implement each method. Web Resource: The author’s website at http://derekogle.com/IFAR/ includes the data files and R code for each chapter, enabling you to reproduce the results in the book as well as create your own scripts. The site also offers supplemental code for more advanced analyses and practice exercises for every chapter.
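
One of the standard analyses listed above, the weight-length relationship, is typically modeled as W = a * L^b and fitted by linear regression on log-transformed data. The Python sketch below is a generic analogue with made-up data, shown only to convey the idea; the book carries out this analysis in R.

```python
import numpy as np

def fit_weight_length(lengths, weights):
    """Fit the weight-length relationship W = a * L**b by ordinary least
    squares on the log-transformed model log W = log a + b * log L."""
    log_l, log_w = np.log(lengths), np.log(weights)
    b, log_a = np.polyfit(log_l, log_w, deg=1)   # slope, intercept
    return np.exp(log_a), b

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    lengths = rng.uniform(10, 60, size=200)                      # fish lengths (cm), synthetic
    weights = 0.01 * lengths**3.05 * rng.lognormal(0, 0.1, 200)  # synthetic weights (g)
    a, b = fit_weight_length(lengths, weights)
    print(f"a = {a:.4f}, b = {b:.2f}")   # b should come out near 3
```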

Business Analytics for Decision Making

by Steven Orla Kimbrough and Hoong Chuin Lau

Business Analytics for Decision Making, the first complete text suitable for use in introductory Business Analytics courses, establishes a national syllabus for an emerging first course at an MBA or upper undergraduate level. This timely text is mainly about model analytics, particularly analytics for constrained optimization. It uses implementations that allow students to explore models and data for the sake of discovery, understanding, and decision making. Business analytics is about using data and models to solve various kinds of decision problems. There are three aspects for those who want to make the most of their analytics: encoding, solution design, and post-solution analysis. This textbook addresses all three. Emphasizing the use of constrained optimization models for decision making, the book concentrates on post-solution analysis of models. The text focuses on computationally challenging problems that commonly arise in business environments. Unique among business analytics texts, it emphasizes the use of heuristics, drawing on methods from computer science and operations research, to solve difficult optimization problems that are important in business practice. Furthermore, case studies and examples illustrate the real-world applications of these methods. The authors supply examples in Excel®, GAMS, MATLAB®, and OPL. The metaheuristics code is also made available at the book's website in a documented library of Python modules, along with data and material for homework exercises. From the beginning, the authors emphasize analytics and de-emphasize representation and encoding so students will have plenty to sink their teeth into regardless of their computer programming experience.
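
As a toy illustration of the heuristic approach to constrained optimization that the blurb describes, the sketch below applies a greedy value-density rule to a small 0/1 knapsack problem. It is not taken from the book or its Python module library; the item data are invented.

```python
def greedy_knapsack(items, capacity):
    """Greedy heuristic for the 0/1 knapsack problem: take items in
    decreasing order of value-to-weight ratio while capacity remains.
    Fast and often good, but not guaranteed to be optimal."""
    chosen, total_value, remaining = [], 0, capacity
    for name, value, weight in sorted(items, key=lambda it: it[1] / it[2], reverse=True):
        if weight <= remaining:
            chosen.append(name)
            total_value += value
            remaining -= weight
    return chosen, total_value

# Illustrative data: (name, value, weight), knapsack capacity 10.
items = [("A", 60, 5), ("B", 50, 4), ("C", 40, 6), ("D", 30, 3)]
print(greedy_knapsack(items, capacity=10))   # -> (['B', 'A'], 110)
```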

Smart Use of State Public Health Data for Health Disparity Assessment

by Ge Lin and Ming Qu

Health services are often fragmented along organizational lines with limited communication among the public health–related programs or organizations, such as mental health, social services, and public health services. This can result in disjointed decision making without necessary data and knowledge, organizational fragmentation, and disparate knowledge development across the full array of public health needs. When new questions or challenges arise that require collaboration, individual public health practitioners (e.g., surveillance specialists and epidemiologists) often do not have the time and energy to spend on them. Smart Use of State Public Health Data for Health Disparity Assessment promotes data integration to aid crosscutting program collaboration. It explains how to maximize the use of various datasets from state health departments for assessing health disparity and for disease prevention. The authors offer practical advice on state public health data use, their strengths and weaknesses, data management insight, and lessons learned. They propose a bottom-up approach for building an integrated public health data warehouse that includes localized public health data. The book is divided into three sections: Section I has seven chapters devoted to knowledge and skill preparations for recognizing disparity issues and integrating and analyzing local public health data. Section II provides a systematic surveillance effort by linking census tract poverty to other health disparity dimensions. Section III provides in-depth studies related to Sections I and II. All data used in the book have been geocoded to the census tract level, making it possible to go more local, even down to the neighborhood level.

Spatial Microsimulation with R (Chapman & Hall/CRC The R Series)

by Robin Lovelace and Morgane Dumont

Generate and analyze multi-level data: Spatial microsimulation involves the generation, analysis, and modeling of individual-level data allocated to geographical zones. Spatial Microsimulation with R is the first practical book to illustrate this approach in a modern statistical programming language. Get insight into complex behaviors: The book progresses from the principles underlying population synthesis toward more complex issues such as household allocation and using the results of spatial microsimulation for agent-based modeling. This equips you with the skills needed to apply the techniques to real-world situations. The book demonstrates methods for population synthesis by combining individual and geographically aggregated datasets using the recent R packages ipfp and mipfp. This approach represents the "best of both worlds" in terms of spatial resolution and person-level detail, overcoming issues of data confidentiality and reproducibility. Implement the methods on your own data: Full of reproducible examples using code and data, the book is suitable for students and applied researchers in health, economics, transport, geography, and other fields that require individual-level data allocated to small geographic zones. By explaining how to use tools for modeling phenomena that vary over space, the book enhances your knowledge of complex systems and empowers you to provide evidence-based policy guidance.
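
The population-synthesis step mentioned above typically relies on iterative proportional fitting (IPF), which the book performs with the R packages ipfp and mipfp. The Python sketch below shows the core idea of IPF on a small two-way table; it is an illustration under the assumption of consistent margins, not the book's code.

```python
import numpy as np

def ipf(seed, row_targets, col_targets, tol=1e-8, max_iter=1000):
    """Iterative proportional fitting: rescale a seed table until its row
    and column sums match the target margins (assumed to be consistent)."""
    table = seed.astype(float).copy()
    for _ in range(max_iter):
        table *= (row_targets / table.sum(axis=1))[:, None]   # match row sums
        table *= (col_targets / table.sum(axis=0))[None, :]   # match column sums
        if np.allclose(table.sum(axis=1), row_targets, atol=tol):
            break
    return table

# Toy example: a seed cross-tabulation reweighted to match the margins of one
# small zone (e.g., two age groups by three employment categories).
seed = np.ones((2, 3))
print(ipf(seed, row_targets=np.array([60.0, 40.0]),
          col_targets=np.array([30.0, 50.0, 20.0])).round(2))
```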

Data Mining with R: Learning with Case Studies, Second Edition (Chapman & Hall/CRC Data Mining and Knowledge Discovery Series)

by Luis Torgo

Data Mining with R: Learning with Case Studies, Second Edition uses practical examples to illustrate the power of R and data mining. Providing an extensive update to the best-selling first edition, this new edition is divided into two parts. The first part features introductory material, including a new chapter that provides an introduction to data mining, to complement the existing introduction to R. The second part includes case studies, and the new edition thoroughly revises their R code, bringing it up to date with recent packages that have emerged in R. The book does not assume any prior knowledge of R. The case studies are designed to be self-contained, so readers who are new to R and data mining should be able to follow them and start anywhere in the book. The book is accompanied by a set of freely available R source files that can be obtained at the book’s web site. These files include all the code used in the case studies, and they facilitate the "do-it-yourself" approach followed in the book. Designed for users of data analysis tools, as well as researchers and developers, the book should be useful for anyone interested in entering the "world" of R and data mining. About the Author: Luís Torgo is an associate professor in the Department of Computer Science at the University of Porto in Portugal. He teaches Data Mining in R in the NYU Stern School of Business’ MS in Business Analytics program. An active researcher in machine learning and data mining for more than 20 years, Dr. Torgo is also a researcher in the Laboratory of Artificial Intelligence and Data Analysis (LIAAD) of INESC Porto LA.

Process Capability Analysis: Estimating Quality

by Neil W. Polhemus

Process Capability Analysis: Estimating Quality presents a systematic exploration of process capability analysis and how it may be used to estimate quality. The book is designed for practitioners who are tasked with ensuring a high level of quality for the products and services offered by their organizations. Along with describing the necessary statistical theory, the book illustrates the practical application of the techniques to data that do not always satisfy the standard assumptions. The first two chapters deal with attribute data, where the estimation of quality is restricted to counts of nonconformities. Both classical and Bayesian methods are discussed. The rest of the book deals with variable data, including extensive discussions of both capability indices and statistical tolerance limits. Considerable emphasis is placed on methods for handling non-normal data. Also included are discussions of topics often omitted in treatments of process capability, including multivariate capability indices, multivariate tolerance limits, and capability control charts. A separate chapter deals with the problem of determining adequate sample sizes for estimating process capability. Features: comprehensive treatment of the subject with a consistent theme of estimating the percent of nonconforming product or service; coverage of Bayesian methods; extension of univariate techniques to multivariate data; and demonstration of all techniques using Statgraphics data analysis software. Neil Polhemus is Chief Technology Officer at Statgraphics Technology and the original developer of the Statgraphics program for statistical analysis and data visualization. Dr. Polhemus spent 6 years on the faculty of the School of Engineering and Applied Science at Princeton University before moving full-time to software development and consulting. He has taught courses dealing with statistical process control, design of experiments, and data analysis for more than 100 companies and government agencies.
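
For context, the capability indices discussed above are conventionally defined, for a normally distributed characteristic with mean mu, standard deviation sigma, and specification limits LSL and USL, as Cp = (USL - LSL) / (6 * sigma) and Cpk = min((USL - mu) / (3 * sigma), (mu - LSL) / (3 * sigma)). The sketch below computes both along with the implied fraction of nonconforming product. The numbers are illustrative, and the normality assumption rarely holds exactly in practice, which is precisely the situation the book addresses.

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def capability(mu, sigma, lsl, usl):
    """Cp, Cpk, and estimated fraction nonconforming, assuming normality."""
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min((usl - mu) / (3.0 * sigma), (mu - lsl) / (3.0 * sigma))
    p_nonconforming = normal_cdf((lsl - mu) / sigma) + (1.0 - normal_cdf((usl - mu) / sigma))
    return cp, cpk, p_nonconforming

# Illustrative process: mean 10.1, sigma 0.2, specification limits 9.5 to 10.5.
cp, cpk, p = capability(mu=10.1, sigma=0.2, lsl=9.5, usl=10.5)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}, nonconforming fraction = {p:.4%}")
```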

Business Analytics

by Dinabandhu Bag

This book provides a first-hand account of business analytics and its implementation, along with a brief account of the theoretical framework underpinning each component of business analytics. The themes of the book include (1) learning the contours and boundaries of business analytics which are in scope; (2) understanding the organization design aspects of an analytical organization; (3) providing knowledge on the domain focus of developing business activities for financial impact in functional analysis; and (4) deriving a whole gamut of business use cases in a variety of situations to apply the techniques. The book gives a complete, insightful understanding of developing and implementing analytical solutions.

Family Math Night 6-8: Common Core State Standards in Action

by Jennifer Taylor-Cox and Christine Oberdorf

Host Family Math Nights at your middle school—starting today! Family Math Nights are a great way for teachers to get parents involved in their children’s education and to promote math learning outside of the classroom. In this practical book, you’ll find step-by-step guidelines and activities to help you bring Family Math Nights to life. The enhanced second edition is aligned with the Common Core State Standards for Mathematical Content and Practice with new activities to help students explain their answers and write about math. It also comes with ready-to-use handouts that you can distribute during your event. With the resources in this book, you’ll have everything you need to help students learn essential math concepts—including ratios and proportional relationships, the number system, expressions and equations, geometry, and statistics and probability—in a fun and supportive environment. Special Features: The book is organized by math content, so you can quickly find activities that meet your needs. Each activity is easy to implement and includes a page of instructions educators can use to prepare the station, as well as a page for families that explains the activity and can be photocopied and displayed at the station. All of the family activities can be photocopied or downloaded from our website, www.routledge.com/9781138200999, so that you can distribute them during your event.

Using Children's Literature to Teach Problem Solving in Math: Addressing the Standards for Mathematical Practice in K–5

by Jeanne White

Learn how children’s literature can help K–5 students see the real-life applications of mathematical concepts. This user-friendly book shows how to use stories to engage students in building critical reasoning, abstract thinking, and communication skills, all while helping students understand the relevance of math in their everyday lives. Each chapter is dedicated to one of the eight Standards for Mathematical Practice, and offers examples of children’s literature that can be used to help students develop that practice. You’ll find out how to: Encourage students to persevere in solving mathematical problems and use multiple approaches to find the answer; Help students reason abstractly with the aid of concrete objects and visuals; Guide students in constructing arguments to explain their reasoning and engage in critical discussion with their peers; Teach students to recognize mathematical patterns and use them to solve problems efficiently; And more! The book offers activities for beginners as well as for more advanced problem solvers. Each chapter also provides guidance for ELLs and students with special needs, so no matter your classroom environment, you’ll be able to use these strategies to make math class more dynamic, engaging, and fun.

Cost Engineering Health Check: How Good are Those Numbers?

by Dale Shermon and Mark Gilmour

High quality cost estimating gives a business leader confidence to make rational financial decisions. Whether you are a business leader or a cost estimating manager, you have a vested interest in understanding whether you can depend on your organisation's ability to generate accurate cost forecasts and estimates. But how can business leaders have confidence that the cost information they are being provided with is of high quality? How can a cost estimating manager be sure that their team is providing high quality cost information? QinetiQ's Cost Engineering Health Check is used as a capability benchmarking tool to identify improvement opportunities within its clients' cost estimating capability, enabling them to focus on areas that have the potential to increase their competitiveness. High quality estimating leads to accurate budgets, a reduced potential for cost growth, accurate evaluation of risk exposure, and the opportunity to implement effective earned value management (EVM). The Cost Engineering Health Check employs a standardised competency framework that considers all aspects of cost estimating capability, and provides an objective assessment against both best practice and the industry standard. This framework is based on QinetiQ's long-established, tried-and-tested Knowledge Based Estimating (KBE) philosophy comprising Data, Tools, People and Process, with additional consideration given to cultural and stakeholder assessments.

Sport Analytics: A data-driven approach to sport business and management

by Gil Fried and Ceyda Mumcu

The increasing availability of data has transformed the way sports are played, promoted and managed. This is the first textbook to explain how the big data revolution is having a profound influence across the sport industry, demonstrating how sport managers and business professionals can use analytical techniques to improve their professional practice. While other sports analytics books have focused on player performance data, this book shows how analytics can be applied to every functional area of sport business, from marketing and event management to finance and legal services. Drawing on research that spans the entire sport industry, it explains how data is influencing the most important decisions, from ticket sales and human resources to risk management and facility operations. Each chapter contains real world examples, industry profiles and extended case studies which are complemented by a companion website full of useful learning resources. Sport Analytics: A data-driven approach to sport business and management is an essential text for all sport management students and an invaluable reference for any sport management professional involved in operational research.

What If There Were No Significance Tests?: Classic Edition (Multivariate Applications Series)

by Lisa L. Harlow, Stanley A. Mulaik, and James H. Steiger

The classic edition of What If There Were No Significance Tests? highlights current statistical inference practices. Four areas are featured as essential for making inferences: sound judgment, meaningful research questions, relevant design, and assessing fit in multiple ways. Other options (data visualization, replication or meta-analysis), other features (mediation, moderation, multiple levels or classes), and other approaches (Bayesian analysis, simulation, data mining, qualitative inquiry) are also suggested. The Classic Edition’s new Introduction demonstrates the ongoing relevance of the topic and the charge to move away from an exclusive focus on NHST, and it describes newer methods that make significance testing more accessible to a wider body of researchers and improve our ability to make accurate statistical inferences. Part 1 presents an overview of significance testing issues. The next part discusses the debate over whether significance testing should be rejected or retained. The third part outlines various methods that may supplement significance testing procedures. Part 4 discusses Bayesian approaches and methods and the use of confidence intervals versus significance tests. The book concludes with philosophy of science perspectives. Rather than providing definitive prescriptions, the chapters are largely suggestive of general issues, concerns, and application guidelines. The editors allow readers to choose the best way to conduct hypothesis testing in their respective fields. For anyone doing research in the social sciences, this book is bound to become "must" reading. Ideal for use as a supplement for graduate courses in statistics or quantitative analysis taught in psychology, education, business, nursing, medicine, and the social sciences, the book also benefits independent researchers in the behavioral and social sciences and those who teach statistics.
