A one-stop solution for NLP practitioners, ML developers, and data scientists to build effective NLP systems that can perform complicated real-world tasks
Key Features
Implement deep learning algorithms such as BiLSTMs and CRFs using TensorFlow 2
Explore classical NLP techniques and libraries, including part-of-speech tagging and tokenization
Learn practical applications of NLP at the forefront of the field, such as sentiment analysis and text generation
Book Description
In the last couple of years, there have been tremendous advances in natural language processing, and these techniques are now moving from research labs into practical applications. Advanced Natural Language Processing blends the theoretical and practical aspects of current, complex NLP techniques.
This book focuses on innovative applications in the field of NLP, language generation, and dialogue systems. It details text preprocessing using techniques such as tokenization, part-of-speech (POS) tagging, and lemmatization with popular libraries such as Stanford NLP and spaCy. Named entity recognition (NER), a cornerstone of task-oriented bots, is built from scratch using conditional random fields (CRFs) and Viterbi decoding on top of RNNs.
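As a flavour of the preprocessing workflow described above, here is a minimal sketch using spaCy; the model name (en_core_web_sm) and the sample sentence are illustrative assumptions, not examples from the book:

    # Minimal preprocessing sketch with spaCy: tokenization, POS tagging,
    # and lemmatization in a few lines.
    # Assumes the small English model is installed:
    #   python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("TensorFlow makes building NLP models easier.")

    for token in doc:
        # token.text is the surface form, token.pos_ the coarse POS tag,
        # and token.lemma_ the lemma
        print(f"{token.text:<12} {token.pos_:<6} {token.lemma_}")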
Taking a practical and application-focused perspective, the book covers key emerging areas such as generating text for use in sentence completion and text summarization, bridging images and text by generating captions for images, and managing the dialogue aspects of chatbot design. It also covers one of the most important drivers of recent advances in NLP: transfer learning and fine-tuning, using TensorFlow 2.
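To illustrate the transfer-learning and fine-tuning pattern in TensorFlow 2, here is a minimal sketch; the TensorFlow Hub module URL and the binary-sentiment head are illustrative assumptions, not the book's own examples:

    # Transfer-learning sketch in TensorFlow 2: a pretrained text embedding is
    # reused as a frozen feature extractor, with a small trainable head on top.
    import tensorflow as tf
    import tensorflow_hub as hub

    encoder = hub.KerasLayer(
        "https://tfhub.dev/google/nnlm-en-dim50/2",  # pretrained sentence embedding
        trainable=False,                             # frozen for initial training
        input_shape=[], dtype=tf.string)

    model = tf.keras.Sequential([
        encoder,
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. binary sentiment
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Fine-tuning: unfreeze the encoder and recompile with a much lower
    # learning rate so the pretrained weights are only gently adjusted.
    encoder.trainable = True
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                  loss="binary_crossentropy", metrics=["accuracy"])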
Further, it covers practical techniques that can simplify the labelling of textual data, which can otherwise be costly. The book also provides working code for each technique so that you can adapt it to your own use cases.
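One such technique is weak supervision with labelling functions, as supported by Snorkel (named in the learning outcomes below). Here is a minimal sketch; the label constants and keyword heuristics are illustrative assumptions:

    # Weak-supervision sketch with Snorkel: labeling functions encode cheap
    # heuristics, and each one emits a noisy label (or abstains) per example.
    import pandas as pd
    from snorkel.labeling import labeling_function, PandasLFApplier

    ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1

    @labeling_function()
    def lf_contains_great(x):
        # Keyword rule: "great" suggests a positive review.
        return POSITIVE if "great" in x.text.lower() else ABSTAIN

    @labeling_function()
    def lf_contains_awful(x):
        # Keyword rule: "awful" suggests a negative review.
        return NEGATIVE if "awful" in x.text.lower() else ABSTAIN

    df = pd.DataFrame({"text": ["A great read!", "Awful pacing.", "It exists."]})
    applier = PandasLFApplier([lf_contains_great, lf_contains_awful])
    label_matrix = applier.apply(df)  # one column of noisy labels per function
    print(label_matrix)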
By the end of this TensorFlow book, you will have advanced knowledge of the tools, techniques, and deep learning architectures used to solve complex NLP problems.
What You Will Learn
Grasp important preprocessing steps in building NLP applications, such as POS tagging
Deal with vast amounts of unlabelled data and small labelled datasets in NLP
Apply transfer learning and weakly supervised learning using libraries such as Snorkel
Perform sentiment analysis using BERT
Apply encoder-decoder neural network architectures and beam search for summarizing text
Use transformer models with attention to bring images and text together
Build applications that generate captions and answer questions about images
Use advanced TensorFlow techniques like learning rate annealing, custom layers, and custom loss functions to build the latest deep NLP models
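To make the last point concrete, here is a minimal TensorFlow 2 sketch of learning rate annealing, a custom layer, and a custom loss; the decay values, layer logic, and smoothing factor are illustrative assumptions:

    # TF 2 sketch: learning rate annealing, a custom layer, and a custom loss.
    import tensorflow as tf

    # Learning rate annealing: exponentially decay the rate during training.
    schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.9)

    class ScaledDense(tf.keras.layers.Layer):
        # Custom layer: a dense projection with a learnable output scale.
        def __init__(self, units):
            super().__init__()
            self.dense = tf.keras.layers.Dense(units)

        def build(self, input_shape):
            self.scale = self.add_weight(name="scale", shape=(),
                                         initializer="ones")

        def call(self, inputs):
            return self.scale * self.dense(inputs)

    def smoothed_bce(y_true, y_pred):
        # Custom loss: binary cross-entropy with light label smoothing.
        y_true = tf.cast(y_true, y_pred.dtype) * 0.9 + 0.05
        return tf.keras.losses.binary_crossentropy(y_true, y_pred)

    model = tf.keras.Sequential([
        ScaledDense(8),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(schedule),
                  loss=smoothed_bce)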
Who this book is for
This is not an introductory book; it assumes the reader is familiar with the basics of NLP and has fundamental Python skills, as well as basic knowledge of machine learning and undergraduate-level calculus and linear algebra.
The readers who can benefit the most from this book include:
Intermediate ML developers who are familiar with the basics of supervised learning and deep learning techniques
Professionals who already use TensorFlow/Python for purposes such as data science, ML, research, and analysis
Author: Kamath, Uday. Title: Transformers for Machine Learning. ISBN: 0367767341. ISBN-13 (EAN): 9780367767341. Publisher: Taylor & Francis. Price: 6889.00 RUB. Availability: available from the supplier; supplied to order.
Description: Transformers are becoming a core part of many neural network architectures, employed in a wide range of applications such as NLP, speech recognition, time series, and computer vision. Transformers have gone through many adaptations and alterations, resulting in newer techniques and methods. This is the first comprehensive book on transformers.
Author: Minkner. Title: The Technology of Instrument Transformers. ISBN: 3658348623. ISBN-13 (EAN): 9783658348625. Publisher: Springer. Price: 9083.00 RUB. Availability: supplied to order.
Description: This book describes existing instrument transformer technologies as well as new measuring principles for current and voltage measurement. The properties and dimensioning of conventional current and voltage transformers are discussed in detail, drawing on the authors' long experience, with particular attention to dielectric dimensioning and the materials used. It also gives an overview of modern measuring principles and presents the technology of low-power instrument transformers and RC dividers.
Author: Lin. Title: Pretrained Transformers for Text Ranking. ISBN: 3031010531. ISBN-13 (EAN): 9783031010538. Publisher: Springer. Price: 11179.00 RUB. Availability: available from the supplier; supplied to order.
Description: The goal of text ranking is to generate an ordered list of texts retrieved from a corpus in response to a query. Although the most common formulation of text ranking is search, instances of the task can also be found in many natural language processing (NLP) applications. This book provides an overview of text ranking with neural network architectures known as transformers, of which BERT (Bidirectional Encoder Representations from Transformers) is the best-known example. The combination of transformers and self-supervised pretraining has been responsible for a paradigm shift in NLP, information retrieval (IR), and beyond. The book provides a synthesis of existing work as a single point of entry for practitioners who wish to gain a better understanding of how to apply transformers to text ranking problems, and for researchers who wish to pursue work in this area. It covers a wide range of modern techniques, grouped into two high-level categories: transformer models that perform reranking in multi-stage architectures, and dense retrieval techniques that perform ranking directly. Two themes pervade the book: techniques for handling long documents, beyond the typical sentence-by-sentence processing in NLP, and techniques for addressing the trade-off between effectiveness (i.e., result quality) and efficiency (e.g., query latency, model and index size). Although transformer architectures and pretraining techniques are recent innovations, many aspects of how they are applied to text ranking are relatively well understood and represent mature techniques. However, many open research questions remain, and thus, in addition to laying out the foundations of pretrained transformers for text ranking, the book also attempts to prognosticate where the field is heading.