Machine Learning (II)
General details of the subject
- Face-to-face degree course
Description and contextualization of the subject
The machine learning discipline is based on a set of data-modeling techniques that arise from the fields of artificial intelligence and statistics. These models are learned from data and are commonly used for classification and/or description purposes.
The machine learning field has experienced an exponential rise in prominence across different application areas such as bioinformatics, industry, finance, and natural language processing.
The course focuses on the study of the principal tools of a classical “data analysis pipeline”: data preprocessing, feature selection, learning scenarios, and model evaluation and comparison. The techniques are illustrated through powerful machine learning software and applied to different natural language processing problems.
|INZA CANO, IÑAKI||University of the Basque Country||Profesorado Agregado||Doctor||Bilingual||Computer Science and Artificial Intelligence||firstname.lastname@example.org|
|Learn skills to deal with strategies and tools for natural language processing.||30.0 %|
|Learn skills to deal with machine learning methods that analyze text corpora.||70.0 %|
|Type||Face-to-face hours||Non face-to-face hours||Total hours|
|Applied laboratory-based groups||20||30||50|
|Name||Hours||Percentage of classroom teaching|
|Computer work practice, laboratory, site visits, field trips, external visits||50.0||40 %|
|Name||Minimum weighting||Maximum weighting|
|Practical tasks||0.0 %||100.0 %|
Learning outcomes of the subject
Identify the principal machine learning scenarios: their differences and similarities.
Identify the adequate machine learning technique to apply in a specific machine learning scenario.
Learn the basic, standard steps of a classic machine learning “pipeline”.
Acquire skills in the use of R-project libraries to create a “document-term” matrix from a corpus and apply machine learning techniques over it.
Ordinary call: orientations and renunciation
Continuous evaluation:
First, the student must attend at least 80% of the sessions. The evaluation consists of an individual project, summarized in the following lines:
Starting from raw text (e.g. tweets or comments from social networks, HTML text, a set of text files, etc.), the student must import and create a corpus. The corpus must be based on a supervised problem, composed of text documents with different labels. The corpus will be preprocessed with basic text-mining filters (e.g. stop-word removal, stemming, removal of sparse terms, etc.); R-project's “tm” (“text-mining”) package will be used for this purpose. The corpus will then be transformed into matrix format so that it can be processed by specialized machine learning software, in our case R's popular “caret” package. A classical supervised pipeline will be applied, consisting of at least the following steps: data loading and exploration, variable preprocessing, corpus partitioning for validation, feature extraction and selection, application of class-imbalance techniques, learning and tuning of classification models, and statistical comparison of models.
The output of the project will be a “notebook” that interleaves the implemented code with descriptions of its functionality and of the design decisions taken.
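The text-preprocessing steps of the project (stop-word removal, stemming, sparse-term removal, conversion to a document-term matrix) are carried out with R's “tm” package; as an orientation, they can be sketched in plain Python as follows. The toy corpus, the stop-word list, and the crude suffix-stripping “stemmer” below are illustrative assumptions, not part of the course material.

```python
# Minimal sketch of a tm-style preprocessing pipeline in plain Python.
# The stop-word list and suffix rules are illustrative assumptions.

STOP_WORDS = {"the", "is", "a", "of", "and"}

def preprocess(text):
    """Lower-case, tokenize, remove stop-words, and apply a crude stemmer."""
    tokens = [t for t in text.lower().split() if t not in STOP_WORDS]
    stems = []
    for t in tokens:
        # Crude stemming: strip a few common English suffixes.
        for suffix in ("ing", "ed", "s"):
            if t.endswith(suffix) and len(t) > len(suffix) + 2:
                t = t[: -len(suffix)]
                break
        stems.append(t)
    return stems

def document_term_matrix(corpus, min_docs=2):
    """Build a document-term matrix, removing sparse terms that appear
    in fewer than `min_docs` documents."""
    docs = [preprocess(text) for text in corpus]
    doc_freq = {}
    for doc in docs:
        for term in set(doc):
            doc_freq[term] = doc_freq.get(term, 0) + 1
    vocab = sorted(t for t, df in doc_freq.items() if df >= min_docs)
    matrix = [[doc.count(term) for term in vocab] for doc in docs]
    return vocab, matrix

corpus = [
    "the model is learning from the data",
    "models learned from data are useful",
    "the weather is nice today",
]
vocab, dtm = document_term_matrix(corpus)
print(vocab)  # ['data', 'from', 'learn', 'model']
print(dtm)    # [[1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0]]
```

Note how stemming maps “learning”, “learned” and “models”, “model” to shared terms, and how the sparse-term filter drops words that occur in only one document, exactly the role of tm's stemming and sparse-term removal operators.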
Individual project: when the student cannot attend the lessons and requests a single final evaluation, it will consist of the development of the individual project described above.
Extraordinary call: orientations and renunciation
Individual project: when the student cannot attend the lessons and requests a single final evaluation, it will consist of the development of the individual project described above.
Topics
1. Description of the principal machine learning scenarios. Formalisms and description of the data matrix associated with each scenario. Illustrative applications in each scenario. Supervised classification, clustering, and “weakly supervised classification” (“positive-unlabeled learning”, “learning from label proportions”, “partial labels”, “crowd learning”, etc.)
2. General purpose techniques and filters for data preprocessing. Software: WEKA
3. Principal techniques for feature selection. Software: WEKA
4. Validation of classification models. Using statistical tests to compare classification models. Software: WEKA, R, webpages
5. The “tm” (text-mining) R package. Construct a “document-term” matrix from a corpus by means of text-mining operators. Notebook-tutorial
6. “The machine learning approach”: based on a previously constructed “document-term” matrix, clustering of terms and classification of documents. The “caret” R package. Notebook-tutorial
7. First steps using “deep learning” techniques to classify documents. Application of “word2vec” techniques. The “h2o” R package. Notebook-tutorial.
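As a rough orientation for the supervised classification and validation steps above (in the course itself these are performed with the “caret” package over a real document-term matrix), the following plain-Python sketch trains a nearest-centroid classifier on a toy document-term matrix and evaluates it on a held-out test set. The term counts, labels, and train/test split are assumptions chosen only for illustration.

```python
# Nearest-centroid classification of documents over a toy document-term
# matrix, with a simple train/test evaluation. Data are illustrative.

def centroid(rows):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def squared_distance(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def fit(X, y):
    """Compute one centroid per class label."""
    return {
        label: centroid([x for x, lab in zip(X, y) if lab == label])
        for label in set(y)
    }

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda label: squared_distance(centroids[label], x))

# Toy document-term matrix: term counts for documents of two "topics".
X_train = [[3, 0, 1, 0], [2, 1, 0, 0], [0, 0, 2, 3], [1, 0, 3, 2]]
y_train = ["sport", "sport", "politics", "politics"]
X_test = [[2, 0, 0, 1], [0, 1, 2, 2]]
y_test = ["sport", "politics"]

model = fit(X_train, y_train)
predictions = [predict(model, x) for x in X_test]
accuracy = sum(p == t for p, t in zip(predictions, y_test)) / len(y_test)
print(predictions, accuracy)  # ['sport', 'politics'] 1.0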
Basic bibliography
• M. Kuhn, K. Johnson (2013). Applied Predictive Modeling. Springer.
• ParallelDots, online text-analysis APIs for several tasks: sentiment analysis, tag prediction, keyword generation, entity extraction, text-similarity comparison, emotion analysis, intent analysis, abusive-text prediction, etc. https://www.paralleldots.com/text-analysis-apis
• sentiment140: an interesting project for automatic sentiment categorization of tweets: http://help.sentiment140.com/
• Stanford Sentiment Treebank project. "Recursive deep models for semantic compositionality over a sentiment treebank". https://nlp.stanford.edu/sentiment/
• RDataMining website: Text mining with R: Twitter data analysis: http://www.rdatamining.com/docs/text-mining-with-r
• Awesome sentiment analysis: A curated list of Sentiment Analysis methods, implementations and misc. https://github.com/xiamx/awesome-sentiment-analysis
• "5 things you need to know about sentiment analysis and classification": https://www.kdnuggets.com/2018/03/5-things-sentiment-analysis-classification.html
• Bing Liu's website on "Opinion mining, sentiment analysis and opinion spam detection: the machine learning approach". https://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html
• 18 NLP key terms, explained for ML practitioners and NLP novices: https://www.kdnuggets.com/2017/02/natural-language-processing-key-terms-explained.html