
Lot 4: PREDICTIVE MODELS VIA DEEP LEARNING (HIDDEN LAYERS)

Tensor Decomposition for Multi-Target Deep-Learning in the context of Predictive Justice

CAP 2022 (Vannes)

Alexandre AUDIBERT⋆, Konstantin USEVICH†, Massih-Reza AMINI⋆, and Marianne CLAUSEL

 

Abstract:

In recent years, deep learning (DL) models for information retrieval have attracted a lot of attention. These models are data-hungry, requiring large-scale training samples, particularly when the goal is to associate documents with heterogeneous outputs (continuous and discrete); they also lack interpretability. In this paper, we propose to apply tensor decomposition to a DL model to learn with heterogeneous outputs in the context of predictive justice. This strategy aids model interpretation and allows us to shrink some layers by a factor of ten without sacrificing performance on the European Court of Human Rights collection.
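The core idea of compressing a layer via decomposition can be illustrated with a minimal sketch, assuming a plain low-rank (truncated SVD) factorization of a single dense weight matrix; the matrix sizes, rank, and variable names below are illustrative stand-ins, not the paper's actual architecture or decomposition scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 512, 512, 16

# Stand-in for a trained dense layer's weight matrix.
W = rng.standard_normal((d_out, d_in))

# Truncated SVD gives the best rank-r approximation (Eckart-Young).
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]   # (d_out, r)
B = Vt[:r, :]          # (r, d_in)

params_full = W.size
params_lowrank = A.size + B.size
print(params_full / params_lowrank)  # 16.0: a 16x parameter reduction

# The layer now applies two small matmuls instead of one large one.
x = rng.standard_normal(d_in)
y_full = W @ x
y_approx = A @ (B @ x)
```

For a 512×512 layer at rank 16, the factorized form stores 16384 parameters instead of 262144, matching the order-of-magnitude reduction the abstract reports.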

 

* * *


Exploring Semi-supervised Hierarchical Stacked Encoder for Legal Judgement Prediction

legalIR @ ECIR 2023

Nishchal Prasad, Mohand Boughanem, Taoufiq Dkaki

paper link

 

Abstract:

Predicting the judgment of a legal case from its unannotated case facts is a challenging task. The lengthy and non-uniform document structure poses an even greater challenge in extracting information for decision prediction. In this work, we explore and propose a two-level classification mechanism, combining supervised and unsupervised learning: we use a domain-specific pre-trained BERT to extract information from long documents as sentence embeddings, process these further with a transformer encoder layer, and apply unsupervised clustering to extract hidden labels from the embeddings to better predict the judgment of a legal case. We conduct several experiments with this mechanism and observe higher performance gains than previously proposed methods on the ILDC dataset. Our experimental results also show the importance of domain-specific pre-training of transformer encoders in legal information processing.
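The unsupervised half of the mechanism can be sketched as follows, assuming a simple k-means clustering over sentence embeddings to produce the "hidden labels"; the random embeddings stand in for BERT outputs, and all sizes and names are illustrative rather than the authors' configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_sentences, dim, n_hidden_labels = 200, 768, 8

# Stand-in for sentence embeddings extracted by a pre-trained BERT.
sentence_embeddings = rng.standard_normal((n_sentences, dim))

# Cluster the embeddings; each cluster id acts as an unsupervised
# "hidden label" for the sentence it covers.
kmeans = KMeans(n_clusters=n_hidden_labels, n_init=10, random_state=0)
hidden_labels = kmeans.fit_predict(sentence_embeddings)

# A downstream encoder can consume these pseudo-labels alongside
# the supervised judgment label.
print(hidden_labels.shape)
```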

 

* * *


A Multi-level Encoder-based Architecture for Judgement Prediction of Legal Cases and Their Explanation

SemEval-2023 @ ACL 2023

Nishchal Prasad, Mohand Boughanem, Taoufiq Dkaki

paper link

 

Abstract:

This paper describes our system for sub-task C (1 & 2) in Task 6: LegalEval: Understanding Legal Texts. We propose a three-level encoder-based classification architecture that works by fine-tuning a BERT-based pre-trained encoder and post-processing the embeddings extracted from its last layers with transformer encoder layers and RNNs. We run ablation studies on this architecture and analyze its performance. To extract explanations for the predicted class, we develop an explanation-extraction algorithm that exploits the idea of a model's occlusion sensitivity. We also explore several training strategies alongside a detailed analysis of the dataset. Our system ranks 2nd (macro-F1 metric) for sub-task C-1 and 7th (ROUGE-2 metric) for sub-task C-2.
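The occlusion-sensitivity idea behind the explanation step can be sketched minimally: mask one sentence at a time and measure the drop in the predicted score. The toy linear scorer below stands in for the fine-tuned encoder; every name and size is a hypothetical placeholder, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sentences, dim = 6, 32

sentences = rng.standard_normal((n_sentences, dim))  # sentence embeddings
w = rng.standard_normal(dim)                         # toy scoring weights

def score(embs):
    # Document score = mean sentence embedding projected onto w.
    return float(embs.mean(axis=0) @ w)

base = score(sentences)
importance = []
for i in range(n_sentences):
    occluded = sentences.copy()
    occluded[i] = 0.0                          # occlude sentence i
    importance.append(base - score(occluded))  # drop attributable to it

# Sentences causing the largest score drop are explanation candidates.
ranking = np.argsort(importance)[::-1]
print(ranking)
```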

 

* * *


Low-Rank Updates of pre-trained Weights for Multi-Task Learning

ACL 2023 : Findings

Alexandre Audibert, Massih R Amini, Konstantin Usevich, and Marianne Clausel

paper link