Browse Results

Showing 71,176 through 71,200 of 85,186 results

Scalable Big Data Architecture: A practitioner's guide to choosing relevant Big Data architecture

by Bahaaldine Azarmi

This book highlights the different types of data architecture and illustrates the many possibilities hidden behind the term "Big Data", from the use of NoSQL databases to the deployment of stream analytics architecture, machine learning, and governance. Scalable Big Data Architecture covers real-world, concrete industry use cases that leverage complex distributed applications, which involve web applications, RESTful APIs, and a high throughput of large amounts of data stored in highly scalable NoSQL data stores such as Couchbase and Elasticsearch. This book demonstrates how data processing can be done at scale, from the use of NoSQL datastores to the combination of Big Data distributions. When the data processing is too complex and involves different processing topologies such as long-running jobs, stream processing, multiple data source correlation, and machine learning, it's often necessary to delegate the load to Hadoop or Spark and use NoSQL to serve processed data in real time. This book shows you how to choose a relevant combination of Big Data technologies available within the Hadoop ecosystem. It focuses on processing long jobs, architecture, stream data patterns, log analysis, and real-time analytics. Every pattern is illustrated with practical examples, which use different open source projects such as Logstash, Spark, Kafka, and so on. Traditional data infrastructures are built for digesting and rendering data synthesis and analytics from large amounts of data. This book helps you to understand why you should consider using machine learning algorithms early on in the project, before being overwhelmed by the constraints imposed by dealing with the high throughput of Big Data. Scalable Big Data Architecture is for developers, data architects, and data scientists looking for a better understanding of how to choose the most relevant pattern for a Big Data project and which tools to integrate into that pattern.

Scalable Disruptors: Design Modelling Symposium Kassel 2024

by Christoph Gengnagel Mette Ramsgaard Thomsen Jan Wurm Philipp Eversmann Julian Lienhard

This book reflects and expands on current trends in the Architecture, Engineering and Construction (AEC) industries as they respond to the unfolding climate and biodiversity crisis. Shifting away from traditional focuses, which are narrowly centered on efficiency, this book presents a variety of approaches to move the AEC community from a linear, extractive paradigm to a circular and regenerative one. The book presents contributions including research papers and case studies, providing a comprehensive overview of the field as well as perspectives from related disciplines, such as computer science, biology and material science.

Scalable Enterprise Systems: An Introduction to Recent Advances (Integrated Series in Information Systems #3)

by Vittal Prabhu Soundar Kumara Manjunath Kamath

The National Science Foundation (NSF) is the leading sponsor of basic academic research in engineering, and its influence far exceeds its budget. We think NSF is at its best when it uses that influence to focus interest within the researcher community on critical new challenges and technologies. NSF's Scalable Enterprise Systems (SES) initiative, for which we were responsible in our successive terms in the division of Design, Manufacture and Industrial Innovation (DMII), was just such a venture. A collaborative effort spanning NSF's engineering and computer science directorates, SES sought to concentrate the energies of the academic engineering research community on developing a science base for designing, planning and controlling the extended, spatially and managerially distributed enterprises that have become the norm in the manufacture, distribution and sale of the products of U.S. industry. The range of associated issues addressed included everything from supply chain management, to product design across teams of collaborating companies, to e-marketing and make-to-order manufacturing, to the information technology challenges of devising interoperable planning and control tools that can scale with exploding enterprise size and scope. A total of 27 teams with nearly 100 investigators were selected from the 89 submitted proposals in the Phase I, exploratory part of the effort (see the list below). Seven of these were awarded larger multi-year grants to continue their research in Phase II. As the contents of this book amply illustrate, these investigations continue to flourish, with and without direct NSF support.

Scalable Hardware Verification with Symbolic Simulation

by Valeria Bertacco

This book is intended as an innovative overview of current formal verification methods, combined with an in-depth analysis of some advanced techniques to improve the scalability of these methods and close the gap between design and verification in computer-aided design. It provides the theoretical background required to present such methods and advanced techniques, i.e., Boolean function representations, models of sequential networks and, in particular, some novel algorithms to expose the disjoint support decompositions of Boolean functions, used in one of the scalable approaches.

Scalable High Performance Computing for Knowledge Discovery and Data Mining: A Special Issue of Data Mining and Knowledge Discovery Volume 1, No.4 (1997)

by Paul Stolorz Ron Musick

Scalable High Performance Computing for Knowledge Discovery and Data Mining brings together in one place important contributions and up-to-date research results in this fast moving area. Scalable High Performance Computing for Knowledge Discovery and Data Mining serves as an excellent reference, providing insight into some of the most challenging research issues in the field.

Scalable Information Systems: 5th International Conference, INFOSCALE 2014, Seoul, South Korea, September 25-26, 2014, Revised Selected Papers (Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering #139)

by Jason J. Jung Costin Badica Attila Kiss

This book constitutes the thoroughly refereed post-conference proceedings of the International Conference on Scalable Information Systems, INFOSCALE 2014, held in September 2014 in Seoul, South Korea. The 9 revised full papers presented were carefully reviewed and selected from 14 submissions. The papers cover a wide range of topics such as scalable data analysis and big data applications.

Scalable Information Systems: 4th International ICST Conference, INFOSCALE 2009, Hong Kong, June 10-11, 2009, Revised Selected Papers (Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering #18)

by Peter Mueller Jian-Nong Cao Cho-Li Wang

In view of the incessant growth of data and knowledge and the continued diversification of information dissemination on a global scale, scalability has become a mainstream research area in computer science and information systems. The ICST INFOSCALE conference is one of the premier forums for presenting new and exciting research related to all aspects of scalability, including system architecture, resource management, data management, networking, and performance. As the fourth conference in the series, INFOSCALE 2009 was held in Hong Kong on June 10 and 11, 2009. The articles presented in this volume focus on a wide range of scalability issues and new approaches to tackle problems arising from the ever-growing size and complexity of information of all kinds. More than 60 manuscripts were submitted, and the Program Committee selected 22 papers for presentation at the conference. Each submission was reviewed by three members of the Technical Program Committee.

Scalable Infrastructure for Distributed Sensor Networks

by S.S. Iyengar

This is the only book to cover infrastructure aspects of sensor networks in a comprehensive fashion. Other books on sensor networks either do not cover this topic or do so only superficially as part of a less-focused multi-authored treatment.

Scalable Multi-core Architectures: Design Methodologies and Tools

by Dimitrios Soudris and Axel Jantsch

As Moore’s law continues to unfold, two important trends have recently emerged. First, the growth of chip capacity is translated into a corresponding increase in the number of cores. Second, the parallelization of computation and 3D integration technologies lead to distributed memory architectures. This book describes recent research that addresses urgent challenges in many-core architectures and application mapping. It addresses the architectural design of many-core chips, memory and data management, power management, and design and programming methodologies. It also describes how new techniques have been applied in various industrial case studies.

Scalable Multicasting over Next-Generation Internet: Design, Analysis and Applications

by Xiaohua Tian Yu Cheng

Next-generation Internet providers face high expectations, as contemporary users worldwide expect high-quality multimedia functionality in a landscape of ever-expanding network applications. This volume explores the critical research issue of turning today’s greatly enhanced hardware capacity to good use in designing a scalable multicast protocol for supporting large-scale multimedia services. Linking new hardware to improved performance in the Internet’s next incarnation is a research hot-spot in the computer communications field. The methodical presentation deals with the key questions in turn: from the mechanics of multicast protocols to current state-of-the-art designs, and from methods of theoretical analysis of these protocols to applying them in the ns2 network simulator, known for being hard to extend. The authors’ years of research in the field inform this thorough treatment, which covers details such as applying AOM (application-oriented multicast) protocol to IPTV provision and resolving the practical design issues thrown up in creating scalable AOM multicast service models.

Scalable Network Monitoring in High Speed Networks

by Baek-Young Choi Zhi-Li Zhang David Hung-Chang Du

Network monitoring serves as the basis for a wide scope of network engineering and management operations. Precise network monitoring involves inspecting every packet traversing a network. However, this is not feasible in future high-speed networks, due to the significant overheads of processing, storing, and transferring measured data. Network Monitoring in High Speed Networks presents accurate measurement schemes from both traffic and performance perspectives, and introduces adaptive sampling techniques for various granularities of traffic measurement. The techniques allow monitoring systems to control the accuracy of estimations, and to adapt the sampling probability dynamically according to traffic conditions. The issues surrounding network delays for practical performance monitoring are discussed in the second part of this book. Case studies based on real operational network traces are provided throughout the book. Network Monitoring in High Speed Networks is designed as a secondary text or reference book for advanced-level students and researchers concentrating on computer science and electrical engineering. Professionals working within the networking industry will also find this book useful.

Scalable Optimization via Probabilistic Modeling: From Algorithms to Applications (Studies in Computational Intelligence #33)

by Martin Pelikan Kumara Sastry Erick Cantú-Paz

I’m not usually a fan of edited volumes. Too often they are an incoherent hodgepodge of remnants, renegades, or rejects foisted upon an unsuspecting reading public under a misleading or fraudulent title. The volume Scalable Optimization via Probabilistic Modeling: From Algorithms to Applications is a worthy addition to your library because it succeeds on exactly those dimensions where so many edited volumes fail. For example, take the title, Scalable Optimization via Probabilistic Modeling: From Algorithms to Applications. You need not worry that you’re going to pick up this book and find stray articles about anything else. This book focuses like a laser beam on one of the hottest topics in evolutionary computation over the last decade or so: estimation of distribution algorithms (EDAs). EDAs borrow evolutionary computation’s population orientation and selectionism and throw out the genetics to give us a hybrid of substantial power, elegance, and extensibility. The article sequencing in most edited volumes is hard to understand, but from the get-go the editors of this volume have assembled a set of articles sequenced in a logical fashion. The book moves from design to efficiency enhancement and then concludes with relevant applications. The emphasis on efficiency enhancement is particularly important, because the data-mining perspective implicit in EDAs opens up the world of optimization to new methods of data-guided adaptation that can further speed solutions through the construction and utilization of effective surrogates, hybrids, and parallel and temporal decompositions.

Scalable Parallel Programming Applied to H.264/AVC Decoding (SpringerBriefs in Computer Science)

by Ben Juurlink Mauricio Alvarez-Mesa Chi Ching Chi Arnaldo Azevedo Cor Meenderinck Alex Ramirez

Existing software applications should be redesigned if programmers want to benefit from the performance offered by multi- and many-core architectures. Performance scalability now depends on the possibility of finding and exploiting enough Thread-Level Parallelism (TLP) in applications for using the increasing numbers of cores on a chip. Video decoding is an example of an application domain whose computational requirements increase with every new generation. This is due, on the one hand, to the trend towards high-quality video systems (high definition and frame rate, 3D displays, etc.) that results in a continuous increase in the amount of data that has to be processed in real time. On the other hand, there is the requirement to maintain high compression efficiency, which is only possible with video codecs like H.264/AVC that use advanced coding techniques. In this book, the parallelization of H.264/AVC decoding is presented as a case study of parallel programming. H.264/AVC decoding is an example of a complex application with many levels of dependencies, different kernels, and irregular data structures. The book presents a detailed methodology for the parallelization of this type of application. It begins with a description of the algorithm, an analysis of the data dependencies and an evaluation of the different parallelization strategies. Then the design and implementation of a novel parallelization approach is presented that is scalable to many-core architectures. Experimental results on different parallel architectures are discussed in detail. Finally, an outlook is given on parallelization opportunities in the upcoming HEVC standard.

Scalable Pattern Recognition Algorithms: Applications in Computational Biology and Bioinformatics

by Pradipta Maji Sushmita Paul

This book addresses the need for a unified framework describing how soft computing and machine learning techniques can be judiciously formulated and used in building efficient pattern recognition models. The text reviews both established and cutting-edge research, providing a careful balance of theory, algorithms, and applications, with a particular emphasis given to applications in computational biology and bioinformatics. Features: integrates different soft computing and machine learning methodologies with pattern recognition tasks; discusses in detail the integration of different techniques for handling uncertainties in decision-making and efficiently mining large biological datasets; presents a particular emphasis on real-life applications, such as microarray expression datasets and magnetic resonance images; includes numerous examples and experimental results to support the theoretical concepts described; concludes each chapter with directions for future research and a comprehensive bibliography.

Scalable Performance Signalling and Congestion Avoidance

by Michael Welzl

This book answers a question which came about while the author was working on his diploma thesis [1]: would it be better to ask for the available bandwidth instead of probing the network (like TCP does)? The diploma thesis was concerned with long-distance musical interaction ("NetMusic"). This is a very peculiar application: only a small amount of bandwidth may be necessary, but timely delivery and reduced loss are very important. Back then, these requirements led to a thorough investigation of existing telecommunication network mechanisms, but a satisfactory answer to the question could not be found. Simply put, the answer is "yes" - this work describes a mechanism which indeed enables an application to "ask for the available bandwidth". This obviously does not only concern online musical collaboration any longer. Among others, the mechanism yields the following advantages over existing alternatives:
• good throughput while maintaining close to zero loss and a small bottleneck queue length
• usefulness for streaming media applications due to a very smooth rate
• feasibility for satellite and wireless links
• high scalability
Additionally, a reusable framework for future applications that need to "ask the network" for certain performance data was developed.

Scalable Processing of Spatial-Keyword Queries (Synthesis Lectures on Data Management)

by Ahmed R. Mahmood Walid G. Aref

Text data that is associated with location data has become ubiquitous. A tweet is an example of this type of data, where the text in a tweet is associated with the location where the tweet has been issued. We use the term spatial-keyword data to refer to this type of data. Spatial-keyword data is being generated at massive scale. Almost all online transactions have an associated spatial trace. The spatial trace is derived from GPS coordinates, IP addresses, or cell-phone-tower locations. Hundreds of millions or even billions of spatial-keyword objects are being generated daily. Spatial-keyword data has numerous applications that require efficient processing and management of massive amounts of spatial-keyword data. This book starts by overviewing some important applications of spatial-keyword data, and demonstrates the scale at which spatial-keyword data is being generated. Then, it formalizes and classifies the various types of queries that execute over spatial-keyword data. Next, it discusses important and desirable properties of spatial-keyword query languages that are needed to express queries over spatial-keyword data. As will be illustrated, existing spatial-keyword query languages vary in the types of spatial-keyword queries that they can support. There are many systems that process spatial-keyword queries. Systems differ from each other in various aspects, e.g., whether the system is batch-oriented or stream-based, and whether the system is centralized or distributed. Moreover, spatial-keyword systems vary in the types of queries that they support. Finally, systems vary in the types of indexing techniques that they adopt. This book provides an overview of the main spatial-keyword data-management systems (SKDMSs), and classifies them according to their features. Moreover, the book describes the main approaches adopted when indexing spatial-keyword data in the centralized and distributed settings. 
Several case studies of SKDMSs are presented along with the applications and query types that these SKDMSs are targeted for and the indexing techniques they utilize for processing their queries. Optimizing the performance and the query processing of SKDMSs still has many research challenges and open problems. The book concludes with a discussion of several important and open research problems in the domain of scalable spatial-keyword processing.

Scalable Shared-Memory Multiprocessing

by Daniel E. Lenoski Wolf-Dietrich Weber

Dr. Lenoski and Dr. Weber have experience with leading-edge research and practical issues involved in implementing large-scale parallel systems. They were key contributors to the architecture and design of the DASH multiprocessor. Currently, they are involved with commercializing scalable shared-memory technology.

Scalable Shared Memory Multiprocessors

by Michel Dubois Shreekant S. Thakkar

The workshop on Scalable Shared Memory Multiprocessors took place on May 26 and 27, 1990 at the Stouffer Madison Hotel in Seattle, Washington as a prelude to the 1990 International Symposium on Computer Architecture. About 100 participants listened for two days to the presentations of 22 invited speakers, from academia and industry. The motivation for this workshop was to promote the free exchange of ideas among researchers working on shared-memory multiprocessor architectures. There was ample opportunity to argue with speakers, and certainly participants did not refrain a bit from doing so. Clearly, the problem of scalability in shared-memory multiprocessors is still a wide-open question. We were even unable to agree on a definition of "scalability". Authors had more than six months to prepare their manuscripts, and therefore the papers included in these proceedings are refinements of the speakers' presentations, based on the criticisms received at the workshop. As a result, 17 authors contributed to these proceedings. We wish to thank them for their diligence and care. The contributions in these proceedings can be partitioned into four categories: 1. Access Order and Synchronization; 2. Performance; 3. Cache Protocols and Architectures; 4. Distributed Shared Memory. Particular topics on which new ideas and results are presented in these proceedings include: efficient schemes for combining networks, formal specification of shared-memory models, correctness of trace-driven simulations, synchronization, and various coherence protocols.

Scalable Techniques for Formal Verification

by Sandip Ray

This book is about formal verification, that is, the use of mathematical reasoning to ensure correct execution of computing systems. With the increasing use of computing systems in safety-critical and security-critical applications, it is becoming increasingly important for our well-being to ensure that those systems execute correctly. Over the last decade, formal verification has made significant headway in the analysis of industrial systems, particularly in the realm of verification of hardware. A key advantage of formal verification is that it provides a mathematical guarantee of their correctness (up to the accuracy of formal models and correctness of reasoning tools). In the process, the analysis can expose subtle design errors. Formal verification is particularly effective in finding corner-case bugs that are difficult to detect through traditional simulation and testing. Nevertheless, and in spite of its promise, the application of formal verification has so far been limited in an industrial design validation tool flow. The difficulties in its large-scale adoption include the following: (1) deductive verification using theorem provers often involves excessive and prohibitive manual effort and (2) automated decision procedures (e.g., model checking) can quickly hit the bounds of available time and memory. This book presents recent advances in formal verification techniques and discusses the applicability of the techniques in ensuring the reliability of large-scale systems. We deal with the verification of a range of computing systems, from sequential programs to concurrent protocols and pipelined machines.

Scalable Uncertainty Management: 13th International Conference, SUM 2019, Compiègne, France, December 16–18, 2019, Proceedings (Lecture Notes in Computer Science #11940)

by Nahla Ben Amor Benjamin Quost Martin Theobald

This book constitutes the refereed proceedings of the 13th International Conference on Scalable Uncertainty Management, SUM 2019, which was held in Compiègne, France, in December 2019. The 25 full papers, 4 short papers, 4 tutorial papers, and 2 invited keynote papers presented in this volume were carefully reviewed and selected from 44 submissions. The conference is dedicated to the management of large amounts of complex, uncertain, incomplete, or inconsistent information. New approaches have been developed on imprecise probabilities, fuzzy set theory, rough set theory, ordinal uncertainty representations, or even purely qualitative models.

Scalable Uncertainty Management: 9th International Conference, SUM 2015, Québec City, QC, Canada, September 16-18, 2015. Proceedings (Lecture Notes in Computer Science #9310)

by Christoph Beierle Alex Dekhtyar

This book constitutes the refereed proceedings of the 9th International Conference on Scalable Uncertainty Management, SUM 2015, held in Québec City, QC, Canada, in September 2015. The 25 regular papers and 3 short papers were carefully reviewed and selected from 49 submissions. The call for papers for SUM 2015 solicited submissions in all areas of managing and reasoning with substantial and complex kinds of uncertain, incomplete or inconsistent information. These include applications in decision support systems, risk analysis, machine learning, belief networks, logics of uncertainty, belief revision and update, argumentation, negotiation technologies, semantic web applications, search engines, ontology systems, information fusion, information retrieval, natural language processing, information extraction, image recognition, vision systems, data and text mining, and the consideration of issues such as provenance, trust, heterogeneity, and complexity of data and knowledge.

Scalable Uncertainty Management: 5th International Conference, SUM 2011, Dayton, OH, USA, October 10-13, 2011, Proceedings (Lecture Notes in Computer Science #6929)

by Salem Benferhat John Grant

This book constitutes the refereed proceedings of the 5th International Conference on Scalable Uncertainty Management, SUM 2011, held in Dayton, OH, USA, in October 2011. The 32 revised full papers and 3 revised short papers presented together with the abstracts of 2 invited talks and 6 “discussant” contributions were carefully reviewed and selected from 58 submissions. The papers are organized in topical sections on argumentation systems, probabilistic inference, dynamics of beliefs, information retrieval and databases, ontologies, possibility theory and classification, logic programming, and applications.

Scalable Uncertainty Management: 12th International Conference, SUM 2018, Milan, Italy, October 3-5, 2018, Proceedings (Lecture Notes in Computer Science #11142)

by Davide Ciucci Gabriella Pasi Barbara Vantaggi

This book constitutes the refereed proceedings of the 12th International Conference on Scalable Uncertainty Management, SUM 2018, which was held in Milan, Italy, in October 2018. The 23 full papers, 6 short papers, and 2 tutorials presented in this volume were carefully reviewed and selected from 37 submissions. The conference is dedicated to the management of large amounts of complex, uncertain, incomplete, or inconsistent information. New approaches have been developed on imprecise probabilities, fuzzy set theory, rough set theory, ordinal uncertainty representations, or even purely qualitative models.

Scalable Uncertainty Management: 14th International Conference, SUM 2020, Bozen-Bolzano, Italy, September 23–25, 2020, Proceedings (Lecture Notes in Computer Science #12322)

by Jesse Davis Karim Tabia

This book constitutes the refereed proceedings of the 14th International Conference on Scalable Uncertainty Management, SUM 2020, which was held in Bozen-Bolzano, Italy, in September 2020. The 12 full papers and 7 short papers presented in this volume were carefully reviewed and selected from 30 submissions. Besides that, the book also contains 2 abstracts of invited talks, 2 tutorial papers, and 2 PhD track papers. The conference aims to gather researchers with a common interest in managing and analyzing imperfect information from a wide range of fields, such as artificial intelligence and machine learning, databases, information retrieval and data mining, the semantic web and risk analysis. Due to the COVID-19 pandemic, SUM 2020 was held as a virtual event.

Scalable Uncertainty Management: 4th International Conference, SUM 2010, Toulouse, France, September 27-29, 2010, Proceedings (Lecture Notes in Computer Science #6379)

by Amol Deshpande Anthony Hunter

Managing uncertainty and inconsistency has been extensively explored in Artificial Intelligence over a number of years. Now with the advent of massive amounts of data and knowledge from distributed, heterogeneous, and potentially conflicting, sources, there is interest in developing and applying formalisms for uncertainty and inconsistency widely in systems that need to better manage this data and knowledge. The annual International Conference on Scalable Uncertainty Management (SUM) has grown out of this wide-ranging interest in managing uncertainty and inconsistency in databases, the Web, the Semantic Web, and AI. It aims at bringing together all those interested in the management of large volumes of uncertainty and inconsistency, irrespective of whether they are in databases, the Web, the Semantic Web, or in AI, as well as in other areas such as information retrieval, risk analysis, and computer vision, where significant computational efforts are needed. After a promising First International Conference on Scalable Uncertainty Management was held in Washington DC, USA in 2007, the conference series has been successfully held in Napoli, Italy, in 2008, and again in Washington DC, USA, in 2009.
