Browse Results

Showing 49,401 through 49,425 of 85,160 results

Lineare Kirchhoff-Netzwerke: Grundlagen, Analyse und Synthese

by Reiner Thiele

Starting from the foundations of network theory, this book presents novel analysis and synthesis methods for linear time-invariant Kirchhoff networks. As elementary networks the author uses ordinary resistors, capacitors and inductors, as well as the so-called pathological subnetworks: the nullator, the norator and the nullor. The nullor consists of a nullator and a norator, is described at its terminals by the Belevitch representation, and is realized approximately by an operational amplifier. For analysis and synthesis, the network is decomposed into realizable subnetworks using the singular value decomposition of matrices. Reiner Thiele also shows how applying terminal equivalences yields electrical and electronic circuits of practical relevance.
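The decomposition step mentioned in the blurb rests on the singular value decomposition of a network matrix. As a minimal illustrative sketch (the matrix values are invented, not taken from the book), NumPy's `svd` factors a two-port description into simpler factors whose cascade reproduces the original:

```python
import numpy as np

# Hypothetical 2x2 chain matrix of a linear two-port; the values are
# purely illustrative and not taken from the book.
A = np.array([[1.0, 2.0],
              [0.5, 2.0]])

# Singular value decomposition: A = U @ diag(s) @ Vt.  Each factor can
# be read as a simpler subnetwork; multiplying them back must
# reproduce the original matrix.
U, s, Vt = np.linalg.svd(A)
A_reconstructed = U @ np.diag(s) @ Vt

assert np.allclose(A, A_reconstructed)
```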

Lineare Systeme und Netzwerke: Eine Einführung (Hochschultext)

by Helmuth Wolf

Lineares Optimieren: Maximierung — Minimierung (Vieweg-Programmbibliothek Mikrocomputer #14)

by Herbert Mai

This volume of the Vieweg program library deals with the application of different variants of the simplex method to the solution of systems of linear inequalities and/or equations, as used in the mathematical treatment of planning and decision making. By bringing in pocket computers, even larger problems are to be made reliably computable. The volume is aimed primarily at pupils and students, for whose needs the capacity of powerful programmable pocket calculators is sufficient. The programs presented here were developed for the Hewlett-Packard HP-41 equipped with the Quad module and magnetic card reader. To make the programs easier to follow, they are written so that the changes from one program to the next are as small as possible. This is also intended to show a path from initially quite simple programs to more elaborate solution methods. For readers who own other pocket calculators or small computers, the descriptions of the computational methods are chosen so that they, too, can easily write their own programs for the methods presented here. In addition, this volume is meant as an encouragement to adapt the programs to one's own needs and to program other methods of linear optimization as well. With programmed solutions to simple applications of linear programming, the author offers an interesting introduction to this increasingly important field. Particular emphasis is placed on understanding the mathematical background.
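The same kind of small maximization problem the HP-41 programs targeted can be sketched today with SciPy's simplex-family solver (the objective and constraints below are invented for illustration; `linprog` minimizes, so the objective is negated):

```python
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# linprog minimizes, so we pass the negated objective coefficients.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)],
              method="highs")

assert res.success
# The optimum sits at the vertex (x, y) = (4, 0) with value 12.
assert abs(-res.fun - 12) < 1e-9
```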

Lines and Curves: A Practical Geometry Handbook

by Victor Gutenmacher and N.B. Vasilyev

Broad appeal to undergraduate teachers, students, and engineers; concise descriptions of the properties of basic planar curves from different perspectives; a useful handbook for software engineers. A special chapter, "Geometry on the Web," further enhances the book's usefulness as an informal tutorial resource. Good mathematical notation, descriptions of properties of lines and curves, and illustrations of geometric concepts facilitate the design of computer graphics tools and computer animation. Video game designers, for example, will find a clear discussion and illustration of hard-to-understand trajectory design concepts. Also a good supplementary text for geometry courses at the undergraduate and advanced high school levels.

Linguistic and Cultural Studies: Proceedings of the XVIIth International Conference on Linguistic and Cultural Studies (LKTI 2017), October 11-13, 2017, Tomsk, Russia (Advances in Intelligent Systems and Computing #677)

by Andrey Filchenko and Zhanna Anikina

This book features contributions to the XVIIth International Conference “Linguistic and Cultural Studies: Traditions and Innovations” (LKTI 2017), providing insights into theory, research, scientific achievements, and best practices in the fields of pedagogics, linguistics, and language teaching and learning with a particular focus on Siberian perspectives and collaborations between academics from other Russian regions. Covering topics including curriculum development, designing and delivering courses and vocational training, the book is intended for academics working at all levels of education striving to improve educational environments in their context – school, tertiary education and continuous professional development.

Linguistic Concepts and Methods in CSCW (Computer Supported Cooperative Work)

by John H. Connolly and Lyn Pemberton

Linguistic Concepts and Methods in CSCW is the first book devoted to the innovative new area of research in CSCW. It concentrates on the use of language in context - the area most widely researched in conjunction with CSCW - but also examines grammatical construction, semantics and the significance of the spoken, written and graphic mediums. A variety of other related topics, such as sociolinguistics, stylistics, psycholinguistics, computational linguistics, and applied linguistics are also covered. This book will be of interest to researchers in CSCW, linguistics and computational linguistics. It will also provide invaluable reading for industrial and commercial researchers who are interested in the implications of such research for the design of marketable systems.

Linguistic Decision Making: Theory and Methods

by Zeshui Xu

This book provides a systematic introduction to linguistic aggregation operators, linguistic preference relations, and various models and approaches to multi-attribute decision making with linguistic information. Offers practical examples, tables and figures.
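A simple instance of the linguistic aggregation the book covers can be sketched as follows, assuming a uniformly spaced five-term scale (the term set and weights are invented for illustration; the mapping back to a term plus an offset follows the idea behind 2-tuple linguistic representations):

```python
# Hypothetical uniformly spaced five-term linguistic scale, indexed 0..4.
TERMS = ["very poor", "poor", "fair", "good", "very good"]

def aggregate(indices, weights):
    """Weighted linguistic aggregation: average the term indices, then
    translate back to the nearest term plus a symbolic offset (the idea
    behind 2-tuple linguistic representations)."""
    beta = sum(i * w for i, w in zip(indices, weights)) / sum(weights)
    nearest = round(beta)
    return TERMS[nearest], beta - nearest

# Three experts rate an attribute "good", "fair" and "very good",
# with the first expert's opinion weighted twice as heavily.
term, offset = aggregate([3, 2, 4], [0.5, 0.25, 0.25])
assert (term, offset) == ("good", 0.0)
```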

Linguistic Expressions and Semantic Processing: A Practical Approach

by Alastair Butler

This book introduces formal semantics techniques for a natural language processing audience. Methods discussed involve: (i) the denotational techniques used in model-theoretic semantics, which make it possible to determine whether a linguistic expression is true or false with respect to some model of the way things happen to be; and (ii) stages of interpretation, i.e., ways to arrive at meanings by evaluating and converting source linguistic expressions, possibly with respect to contexts, into output (logical) forms that could be used with (i). The book demonstrates that the methods allow wide coverage without compromising the quality of semantic analysis. Access to unrestricted, robust and accurate semantic analysis is widely regarded as an essential component for improving natural language processing tasks, such as: recognizing textual entailment, information extraction, summarization, automatic reply, and machine translation.

Linguistic Fundamentals for Natural Language Processing: 100 Essentials from Morphology and Syntax (Synthesis Lectures on Human Language Technologies)

by Emily M. Bender

Many NLP tasks have at their core a subtask of extracting the dependencies—who did what to whom—from natural language sentences. This task can be understood as the inverse of the problem solved in different ways by diverse human languages, namely, how to indicate the relationship between different parts of a sentence. Understanding how languages solve the problem can be extremely useful in both feature design and error analysis in the application of machine learning to NLP. Likewise, understanding cross-linguistic variation can be important for the design of MT systems and other multilingual applications. The purpose of this book is to present in a succinct and accessible fashion information about the morphological and syntactic structure of human languages that can be useful in creating more linguistically sophisticated, more language-independent, and thus more successful NLP systems. Table of Contents: Acknowledgments / Introduction/motivation / Morphology: Introduction / Morphophonology / Morphosyntax / Syntax: Introduction / Parts of speech / Heads, arguments, and adjuncts / Argument types and grammatical functions / Mismatches between syntactic position and semantic roles / Resources / Bibliography / Author's Biography / General Index / Index of Languages
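The "who did what to whom" subtask described above can be illustrated with a toy example; the dependency triples below are hand-written stand-ins for parser output, not anything prescribed by the book:

```python
# Hand-written stand-in for parser output on "The cat chased the mouse",
# encoded as (head, relation, dependent) triples.
deps = [("chased", "nsubj", "cat"),
        ("chased", "obj", "mouse"),
        ("cat", "det", "The"),
        ("mouse", "det", "the")]

def who_did_what_to_whom(deps):
    """Recover the predicate-argument core from dependency triples."""
    verb, _, subj = next(t for t in deps if t[1] == "nsubj")
    _, _, obj = next(t for t in deps if t[1] == "obj")
    return subj, verb, obj

assert who_did_what_to_whom(deps) == ("cat", "chased", "mouse")
```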

Linguistic Fundamentals for Natural Language Processing II: 100 Essentials from Semantics and Pragmatics (Synthesis Lectures on Human Language Technologies)

by Emily M. Bender

Meaning is a fundamental concept in Natural Language Processing (NLP), in the tasks of both Natural Language Understanding (NLU) and Natural Language Generation (NLG). This is because the aims of these fields are to build systems that understand what people mean when they speak or write, and that can produce linguistic strings that successfully express to people the intended content. In order for NLP to scale beyond partial, task-specific solutions, researchers in these fields must be informed by what is known about how humans use language to express and understand communicative intents. The purpose of this book is to present a selection of useful information about semantics and pragmatics, as understood in linguistics, in a way that's accessible to and useful for NLP practitioners with minimal (or even no) prior training in linguistics.

Linguistic Fuzzy Logic Methods in Social Sciences (Studies in Fuzziness and Soft Computing #253)

by Badredine Arfi

The modern origin of fuzzy sets, fuzzy algebra, fuzzy decision making, and “computing with words” is conventionally traced to Lotfi Zadeh’s publication in 1965 of his path-breaking refutation of binary set theory. In a sixteen-page article, modestly titled “Fuzzy Sets” and published in the journal Information and Control, Zadeh launched a multi-disciplinary revolution. The start was relatively slow, but momentum gathered quickly. From 1970 to 1979 there were about 500 journal publications with the word fuzzy in the title; from 2000 to 2009 there were more than 35,000. At present, citations to Zadeh’s publications are running at a rate of about 1,500-2,000 per year, and this rate continues to rise. Almost all applications of Zadeh’s ideas have been in highly technical scientific fields, not in the social sciences. Zadeh was surprised by this development. In a personal note he states: “When I wrote my 1965 paper, I expected that fuzzy set theory would be applied primarily in the realm of human sciences. Contrary to my expectation, fuzzy set theory and fuzzy logic are applied in the main in physical and engineering sciences.” In fact, the first comprehensive examination of fuzzy sets by a social scientist did not appear until 1987, a full twenty-two years after the publication of Zadeh’s seminal article, when Michael Smithson, an Australian psychologist, published Fuzzy Set Analysis for Behavioral and Social Sciences.

Linguistic Geometry: From Search to Construction (Operations Research/Computer Science Interfaces Series #13)

by Boris Stilman

Linguistic Geometry: From Search to Construction is the first book of its kind. Linguistic Geometry (LG) is an approach to the construction of mathematical models for large-scale multi-agent systems. A number of such systems, including air/space combat, robotic manufacturing, software re-engineering and Internet cyberwar, can be modeled as abstract board games. These are games with moves that can be represented by the movement of abstract pieces over locations on an abstract board. The purpose of LG is to provide strategies to guide the games' participants to their goals. Traditionally, discovering such strategies required searches in giant game trees. These searches are often beyond the capacity of modern and even conceivable future computers. LG dramatically reduces the size of the search trees, making the problems computationally tractable. LG provides a formalization and abstraction of search heuristics used by advanced experts including chess grandmasters. Essentially, these heuristics replace search with the construction of strategies. To formalize the heuristics, LG employs the theory of formal languages (i.e. formal linguistics), as well as certain geometric structures over an abstract board. The new formal strategies solve problems from different domains far beyond the areas envisioned by the experts. For a number of these domains, Linguistic Geometry yields optimal solutions.

Linguistic Identity Matching

by Bertrand Lisbach and Victoria Meyer

Regulation, risk awareness and technological advances are more and more drawing identity search requirements into business, security and data management processes. Following years of struggling with computational techniques, the new linguistic identity matching approach finally offers an appropriate way for such processes to balance the risk of missing a personal match with the costs of overmatching. The new paradigm for identity searches focuses on understanding the influences that languages, writing systems and cultural conventions have on person names. A must-read for anyone involved in the purchase, design or study of identity matching systems, this book describes how linguistic and onomastic knowledge can be used to create a more reliable and precise identity search.
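A minimal sketch of the linguistic side of identity matching, using only the Python standard library (the normalization and scoring here are illustrative and far simpler than the techniques the book describes):

```python
import unicodedata
from difflib import SequenceMatcher

def normalize(name):
    """Strip diacritics and case so spelling variants of one person
    name compare on an equal footing."""
    decomposed = unicodedata.normalize("NFKD", name)
    base = "".join(c for c in decomposed if not unicodedata.combining(c))
    return base.lower()

def match_score(a, b):
    """Similarity in [0, 1] between two normalized name strings."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

# Variant spellings of one surname score high; unrelated names score low.
assert match_score("Müller", "Muller") > 0.9
assert match_score("Müller", "Smith") < 0.5
```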

Linguistic Linked Data: Representation, Generation and Applications

by Philipp Cimiano, Christian Chiarcos, John P. McCrae and Jorge Gracia

This is the first monograph on the emerging area of linguistic linked data. Presenting a combination of background information on linguistic linked data and concrete implementation advice, it introduces and discusses the main benefits of applying linked data (LD) principles to the representation and publication of linguistic resources, arguing that LD does not look at a single resource in isolation but seeks to create a large network of resources that can be used together and uniformly, thus making more of each single resource. The book describes how the LD principles can be applied to modelling language resources. The first part provides the foundation for understanding the remainder of the book, introducing the data models, ontology and query languages used as the basis of the Semantic Web and LD and offering a more detailed overview of the Linguistic Linked Data Cloud. The second part of the book focuses on modelling language resources using LD principles, describing how to model lexical resources using Ontolex-lemon, the lexicon model for ontologies, and how to annotate and address elements of text represented in RDF. It also demonstrates how to model annotations and how to capture the metadata of language resources. Further, it includes a chapter on representing linguistic categories. In the third part of the book, the authors describe how language resources can be transformed into LD and how links can be inferred and added to the data to increase connectivity and linking between different datasets. They also discuss using LD resources for natural language processing.
The last part describes concrete applications of the technologies: representing and linking multilingual wordnets, applications in digital humanities and the discovery of language resources. Given its scope, the book is relevant for researchers and graduate students interested in topics at the crossroads of natural language processing / computational linguistics and the Semantic Web / linked data. It appeals to Semantic Web experts who are not proficient in applying the Semantic Web and LD principles to linguistic data, as well as to computational linguists who are used to working with lexical and linguistic resources and want to learn about a new paradigm for modelling, publishing and exploiting linguistic resources.

Linguistic Linked Open Data: 12th EUROLAN 2015 Summer School and RUMOUR 2015 Workshop, Sibiu, Romania, July 13-25, 2015, Revised Selected Papers (Communications in Computer and Information Science #588)

by Diana Trandabăţ and Daniela Gîfu

This book constitutes the refereed proceedings of the 12th EUROLAN Summer School on Linguistic Linked Open Data and its Satellite Workshop on Social Media and the Web of Linked Data, RUMOUR 2015, held in Sibiu, Romania, in July 2015. The 10 revised full papers presented together with 12 abstracts of tutorials were carefully reviewed and selected from 21 submissions.

Linguistic Methods Under Fuzzy Information in System Safety and Reliability Analysis (Studies in Fuzziness and Soft Computing #414)

by Mohammad Yazdi

This book reviews and presents a number of approaches to fuzzy-based system safety and reliability assessment. For each proposed approach, it provides case studies demonstrating their applicability, which will enable readers to implement them into their own risk analysis process. The book begins by reviewing the use of linguistic terms in system safety and reliability analysis methods and their extension by fuzzy sets. It then progresses in a logical fashion, dedicating a chapter to each approach, including the 2-tuple fuzzy-based linguistic term set approach, fuzzy bow-tie analysis, optimizing the allocation of risk control measures using a fuzzy MCDM approach, fuzzy set theory and human reliability, and a fuzzy-expert-aided emergency decision-making system for disaster management. This book will be of interest to professionals and researchers working in the field of system safety and reliability, as well as postgraduate and undergraduate students studying applications of fuzzy systems.

Linguistic Modeling of Information and Markup Languages: Contributions to Language Technology (Text, Speech and Language Technology #40)

by Andreas Witt and Dieter Metzing

This book covers recent developments in the field, from multi-layered mark-up and standards to theoretical formalisms to applications. It presents results from international research in text technology, computational linguistics, hypertext modeling and more.

Linguistic Resources for Natural Language Processing: On the Necessity of Using Linguistic Methods to Develop NLP Software

by Max Silberztein

Empirical methods (data-driven, neural network-based, probabilistic, and statistical) seem to be the modern trend. Recently, OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney chatbots have been garnering a lot of attention for their detailed answers across many knowledge domains. In consequence, most AI researchers are no longer interested in trying to understand what common intelligence is or how intelligent agents construct scenarios to solve various problems. Instead, they now develop systems that extract solutions from massive databases used as cheat sheets. In the same manner, Natural Language Processing (NLP) software that uses training corpora associated with empirical methods is trendy, as most researchers in NLP today use large training corpora, always to the detriment of the development of formalized dictionaries and grammars. Without questioning the intrinsic value of many software applications based on empirical methods, this volume aims at rehabilitating the linguistic approach to NLP. In the introduction, the editor uncovers several limitations and flaws of using training corpora to develop NLP applications, even the simplest ones, such as automatic taggers. The first part of the volume is dedicated to showing how carefully handcrafted linguistic resources can be successfully used to enhance current NLP software applications. The second part presents two representative cases where data-driven approaches cannot be implemented simply because there is not enough data available for low-resource languages.
The third part addresses the problem of how to treat multiword units in NLP software, which is arguably the weakest point of NLP applications today but has a simple and elegant linguistic solution. It is the editor's belief that readers interested in Natural Language Processing will appreciate the importance of this volume, both for its questioning of training-corpus-based approaches and for the intrinsic value of the linguistic formalization and the underlying methodology presented.

Linguistic Structure Prediction (Synthesis Lectures on Human Language Technologies)

by Noah A. Smith

A major part of natural language processing now depends on the use of text data to build linguistic analyzers. We consider statistical, computational approaches to modeling linguistic structure. We seek to unify across many approaches and many kinds of linguistic structures. Assuming a basic understanding of natural language processing and/or machine learning, we seek to bridge the gap between the two fields. Approaches to decoding (i.e., carrying out linguistic structure prediction) and supervised and unsupervised learning of models that predict discrete structures as outputs are the focus. We also survey natural language processing problems to which these methods are being applied, and we address related topics in probabilistic inference, optimization, and experimental methodology. Table of Contents: Representations and Linguistic Data / Decoding: Making Predictions / Learning Structure from Annotated Data / Learning Structure from Incomplete Data / Beyond Decoding: Inference
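The decoding the blurb refers to can be sketched with a minimal Viterbi decoder over a two-tag toy model, the simplest instance of predicting a discrete linguistic structure (the tag set and all scores below are invented for illustration, not learned from data):

```python
# A minimal Viterbi decoder over a two-tag toy model.  The start,
# transition and emission scores are invented for illustration.
TAGS = ["N", "V"]
start = {"N": 0.7, "V": 0.3}
trans = {("N", "N"): 0.3, ("N", "V"): 0.7,
         ("V", "N"): 0.6, ("V", "V"): 0.4}
emit = {("N", "fish"): 0.6, ("V", "fish"): 0.4,
        ("N", "swim"): 0.2, ("V", "swim"): 0.8}

def viterbi(words):
    """Return the highest-scoring tag sequence for `words`."""
    # For each tag, keep the best (score, path) ending in that tag.
    best = {t: (start[t] * emit[t, words[0]], [t]) for t in TAGS}
    for w in words[1:]:
        best = {t: max(((p * trans[prev, t] * emit[t, w], path + [t])
                        for prev, (p, path) in best.items()),
                       key=lambda c: c[0])
                for t in TAGS}
    return max(best.values(), key=lambda c: c[0])[1]

assert viterbi(["fish", "swim"]) == ["N", "V"]
```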

Linguistic Values Based Intelligent Information Processing (Atlantis Computational Intelligence Systems #1)

by Pei Zheng and Ruan Da

Humans employ mostly natural languages in describing and representing problems, computing and reasoning, arriving at final conclusions described similarly as words in a natural language or in the form of mental perceptions. To make machines imitate humans’ mental activities, the key point in terms of machine intelligence is to process uncertain information by means of natural languages with vague and imprecise concepts. Zadeh (1996a) proposed the concept of Computing with Words (CWW) to model and compute with linguistic descriptions that are propositions drawn from a natural language. CWW, which followed the concept of linguistic variables (Zadeh, 1975a,b) and fuzzy sets (Zadeh, 1965), has been developed intensively, has opened several vast new research fields, and has been applied in various areas, particularly in artificial intelligence. Zadeh (1997, 2005) emphasized that the core conceptions in CWW are linguistic variables and fuzzy logic (or approximate reasoning). In a linguistic variable, each linguistic value is explained by a fuzzy set (also called the semantics of the linguistic value), whose membership function is defined on the universe of discourse of the linguistic variable. By fuzzy sets, linguistic information or statements are quantified by membership functions, and information propagation is performed by approximate reasoning. The use of linguistic variables implies processes of CWW such as their fusion, aggregation, and comparison. Different computational approaches in the literature have addressed those processes (Wang, 2001; Zadeh and Kacprzyk, 1999a,b). Membership functions are generally at the core of many fuzzy-set-theory-based approaches to CWW.
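The membership functions described above are easy to make concrete. As a minimal sketch (the triangular shape and the numbers are illustrative conventions, not taken from the book), a linguistic value such as "about 30" can be given a fuzzy-set semantics over a universe of discourse of ages:

```python
def triangular(a, b, c):
    """Membership function of a triangular fuzzy set rising from a,
    peaking at b, and falling back to zero at c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# The linguistic value "about 30" on a universe of discourse of ages.
about_30 = triangular(20, 30, 40)
assert about_30(30) == 1.0   # full membership at the prototype
assert about_30(25) == 0.5   # partial membership nearby
assert about_30(45) == 0.0   # no membership outside the support
```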

Linguistics across Disciplinary Borders: The March of Data (Language, Data Science and Digital Humanities)

by Steven Coats and Veronika Laippala

This volume highlights the ways in which recent developments in corpus linguistics and natural language processing can engage with topics across language studies, humanities and social science disciplines. New approaches have emerged in recent years that blur disciplinary boundaries, facilitated by factors such as the application of computational methods, access to large data sets, and the sharing of code, as well as continual advances in technologies related to data storage, retrieval, and processing. The “march of data” denotes an area at the border region of linguistics, humanities, and social science disciplines, but also the inevitable development of the underlying technologies that drive analysis in these subject areas. Organized into three sections, the chapters are connected by the underlying thread of linguistic corpora: how they can be created, how they can shed light on varieties or registers, and how their metadata can be utilized to better understand the internal structure of similar resources. While some chapters in the volume make use of well-established existing corpora, others analyze data from platforms such as YouTube, Twitter or Reddit. The volume provides insight into the diversity of methods, approaches, and corpora that inform our understanding of the “border regions” between the realms of data science, language/linguistics, and social or cultural studies.

Linguistische Datenverarbeitung: Ein Lehrbuch

by Winfried Lenders

Linguistisches Identity Matching: Paradigmenwechsel in der Suche und im Abgleich von Personendaten

by Bertrand Lisbach

Identity matching is the basis for searching with and for personal data, and these days the whole world does it: banks search their customer files for money launderers, police authorities check suspects against their registers, and private individuals track down old acquaintances on the web. By means of identity matching, students obtain journal articles, journalists their news, landlords credit reports, and salespeople their next marketing targets. The problem so far: as soon as we do not spell a name exactly as it is represented in the source, we do not find it. Now linguistics is raising identity matching to a new level. With knowledge of languages, writing systems and global naming conventions, a person search that is both precise and reliable becomes possible. This book describes what linguistic identity matching is and gives practical tips on how you, too, can profit from it.

Link: How Decision Intelligence Connects Data, Actions, and Outcomes for a Better World

by Lorien Pratt

Why aren't the most powerful new technologies being used to solve the world's most important problems: hunger, poverty, conflict, inequality, employment, disease? What's missing? From a pioneer in Artificial Intelligence and Machine Learning comes a thought-provoking book that answers these questions. In Link: How Decision Intelligence Connects Data, Actions, and Outcomes for a Better World, Dr. Lorien Pratt explores the solution that is emerging worldwide to take Artificial Intelligence to the next level: Decision Intelligence. Decision Intelligence (DI) goes beyond AI, connecting human decision makers to multiple areas like economics, optimization, big data, analytics, psychology, simulation, game theory, and more. Yet despite the sophistication of these approaches, Link shows how they can be used by you and me, connecting us in a way that supercharges our ability to meet the interconnected challenges of our age. Pratt tells the stories of decision intelligence pioneers worldwide, along with examples of their work in areas that include government budgeting, space exploration, emerging-democracy conflict resolution, banking, leadership, and much more. Link delivers practical examples of how DI connects people to computers and to each other to help us solve complex interconnected problems. It explores a variety of scenarios that show readers how to design solutions that change the way problems are considered, data is analyzed, and technologies work together with people. Technology and academia have accelerated beyond our ability to understand or effectively control them. Link brings technology down to earth and connects it to our more natural ways of thinking. It offers a roadmap to the future, empowering us all to take practical steps and the best actions to solve the hardest problems.
