If the important information in a document can be extracted and accurately condensed, automated text summarization through machine learning becomes an extremely valuable tool for increasing efficiency in both everyday life and professional work.
The use of Twitter and natural language processing opens up a promising new approach to flu surveillance. Such data-driven methods produce encouraging results and provide a faster way to identify flu surges.
Further, these Twitter-based methods can readily be applied to numerous other domains: in marketing, for identifying geospatial trends in brand image, or in urban planning, for analyzing public attitudes toward particular spaces and landmarks.
This blog focuses on developing an algorithm to understand spoken Arabic digits. Such algorithms are a first step toward computers that can understand language, with applications ranging from text-to-speech, voice recognition, and translation to the modern AI assistants that are becoming widely available.
Document classification is currently one of the most important branches of Natural Language Processing (NLP). The general idea is to automatically classify documents into categories using machine learning algorithms.
The applications are almost endless: we can classify patient records, movie reviews, web pages, emails (spam vs. not spam), and indeed almost anything text-based.
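To make the idea concrete, here is a minimal sketch of one classic approach, Naive Bayes with bag-of-words features, applied to the spam-vs-ham example above. The tiny dataset and function names are invented for illustration; a real system would use a proper corpus and an established library.

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs. Returns per-label word counts,
    per-label total word counts, and per-label document counts (priors)."""
    counts, totals, priors = {}, Counter(), Counter()
    for text, label in docs:
        priors[label] += 1
        for word in text.lower().split():
            counts.setdefault(label, Counter())[word] += 1
            totals[label] += 1
    return counts, totals, priors

def classify(text, counts, totals, priors):
    """Naive Bayes with add-one smoothing; returns the most likely label."""
    vocab = {w for c in counts.values() for w in c}
    n_docs = sum(priors.values())
    best, best_score = None, float("-inf")
    for label in priors:
        # log prior + sum of log likelihoods for each word in the document
        score = math.log(priors[label] / n_docs)
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) /
                              (totals[label] + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

# Toy training data (invented for this sketch)
docs = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the project team", "ham"),
]
counts, totals, priors = train(docs)
print(classify("claim your free money", counts, totals, priors))  # prints "spam"
```

The same skeleton works for any of the categories mentioned above; only the training documents and labels change.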
Natural language processing has been used in speech recognition, spell-checking, document classification, and more. Moreover, it is a stepping stone toward strong AI: systems that can parse and understand the information given to them better than a human can.